Fellow Monks, I offer this meditation in light of my recent journeys. I've taken the oft-given advice to explore other languages. I have lashed my leash to Java and made big rocks into little rocks, and the little rocks into sand. On the side I played with Squeak, with which I cemented the sand back into a big rock. So I continued my search for the right tool for a very large personal itch that needs vast quantities of reusable prototypes. This hunt led me to Ruby. In this poll post dated October 27th kapper posed the question, "I am quite surprised that nobody has mentioned Ruby?", which received very little attention, maybe because the thread Ruby had broached the topic already and it too got little attention. Anyways... Yukihiro Matsumoto, a.k.a. "Matz" (Ruby's creator), makes this (shamelessly out of context) statement in the foreword of the book Programming Ruby (Online Version): "Then I remembered my old dream, and was toying with it at work. But gradually it grew to be a tool good enough to replace Perl." That sure put me on the defensive from page xxi. Now, after my initial tinkering with it and reading the first 100 pages or so, I'm positive this is definitely much more than just another Parrot. In fact, I've got to say that a singleton in Ruby fits the way my brain works far more easily than a singleton in Java, C++, Smalltalk, or Perl. (I do hate that new is a mandatory constructor, though -- I really like that about Perl OO.)

sub instance {
    my $class = shift;
    no strict 'refs';
    my $instance = \${ "$class\::_instance" };
    defined $$instance
        ? $$instance
        : ($$instance = $class->_new_instance(@_));
}
[download]

class Logger
  private_class_method :new
  @@logger = nil
  def Logger.create
    @@logger = new unless @@logger
    @@logger
  end
end
[download]

IMHO, trying to get Ruby to replace Perl is not the correct way to attract prospective users, though.
I think Perl will continue to evolve as the most colorful code with which an artist can apply his paint, because Perl is as much culture and resources as it is code. Ruby is very clean to work with (so far), and its OO implementation seems to be much cleaner than even Java's. For me the real question always comes down to scale, and by scale I mean code maintenance and support as much as performance. As most of the usage data is on Japanese pages (and I'm really lazy), I don't have any data to look at on that front. So the jury is still out -- though this gem has definitely caught my eye. (I fear I am becoming a shameless language slut. I don't think I want to find a language good enough for my project -- then I'd have to work on it.) I do think they might want to edit that foreword, though. It's a tad bold for such a young language that has yet to feel the global pressures of popularity and evolution. However, Ruby is looking very intriguing. The main downside to Ruby is the lack of documentation (at least in English). The only documentation I've been able to find is the online version of Programming Ruby. I do like the instance variables, though (those prefixed with an @ -- it's like %self in Perl OO). Many of the features work better than Perl OO, but that should change with Perl 6. St. Larry has dabbled in Ruby a little and found some features that he's going to incorporate into Perl. Ruby is definitely worth a look.

- p u n k k i d
"Reality is merely an illusion, albeit a very persistent one." -Albert Einstein

package Logger;
{
    my $instance;
    sub instance {
        return $instance if $instance;
        my $class = shift;
        $instance = bless {@_} => $class;
    }
}
[download]

-- Abigail

As I didn't trust my own rolling of a Perl singleton, I grabbed that code directly from CPAN's Singleton.pm. Though I'm guessing that the use of the ternary in the CPAN version might give a performance boost over the clearer functionality your code expresses. Isn't that why it's called a singleton?
def fibUpTo(max)
  i1, i2 = 1, 1   # parallel assignment
  while i1 <= max
    yield i1
    i1, i2 = i2, i1 + i2
  end
end

fibUpTo(1000) { |f| print f, " " }
[download]

use strict;
use warnings;

sub fib (&$) {
    my ($action, $max) = @_;
    my ($i1, $i2) = (1, 1);
    while ($i1 < $max) {
        $action->($i1);
        ($i1, $i2) = ($i2, $i1 + $i2);
    }
}

fib { print "$_[0] " } 1000;
[download]
PHP Cookbook/Forms Introduction The genius of PHP is its seamless integration of form variables into your programs. It makes web programming smooth and simple, from web form to PHP code to HTML output. There's no built-in mechanism in HTTP to allow you to save information from one page so you can access it in other pages. That's because HTTP is a stateless protocol. Recipe 9.2, Recipe 9.4, Recipe 9.5, and Recipe 9.6 all show ways to work around the fundamental problem of figuring out which user is making which requests to your web server. Processing data from the user is the other main topic of this chapter. You should never trust the data coming from the browser, so it's imperative to always validate all fields, even hidden form elements. Validation takes many forms, from ensuring the data match certain criteria, as discussed in Recipe 9.3, to escaping HTML entities to allow the safe display of user-entered data, as covered in Recipe 9.9. Furthermore, Recipe 9.8 tells how to protect the security of your web server, and Recipe 9.7 covers how to process files uploaded by a user. Whenever PHP processes a page, it checks for GET and POST form variables, uploaded files, applicable cookies, and web server and environment variables. These are then directly accessible in the following arrays: $_GET, $_POST, $_FILES, $_COOKIE, $_SERVER, and $_ENV. They hold, respectively, all variables set by GET requests, POST requests, uploaded files, cookies, the web server, and the environment. There's also $_REQUEST, which is one giant array that contains the values from the other six arrays. When placing elements inside of $_REQUEST, if two arrays both have a key with the same name, PHP falls back upon the variables_order configuration directive. By default, variables_order is EGPCS (or GPCS, if you're using the php.ini-recommended configuration file). 
So, PHP first adds environment variables to $_REQUEST and then adds GET, POST, cookie, and web server variables to the array, in this order. For instance, since C comes after P in the default order, a cookie named username overwrites a POST variable named username. If you don't have access to PHP's configuration files, you can use ini_get( ) to check a setting: print ini_get('variables_order'); This prints the current order, for example: EGPCS You may need to do this because your ISP doesn't let you view configuration settings or because your script may run on someone else's server. You can also use phpinfo( ) to view settings. However, if you can't rely on the value of variables_order, you should directly access $_GET and $_POST instead of using $_REQUEST. The arrays containing external variables, such as $_REQUEST, are superglobals. As such, they don't need to be declared as global inside of a function or class. It also means you probably shouldn't assign anything to these variables, or you'll overwrite the data stored in them. Prior to PHP 4.1, these superglobal variables didn't exist. Instead there were regular arrays named $HTTP_COOKIE_VARS, $HTTP_ENV_VARS, $HTTP_GET_VARS, $HTTP_POST_VARS, $HTTP_POST_FILES, and $HTTP_SERVER_VARS. These arrays are still available for legacy reasons, but the newer arrays are easier to work with. These older arrays are populated only if the track_vars configuration directive is on, but, as of PHP 4.0.3, this feature is always enabled. Finally, if the register_globals configuration directive is on, all these variables are also available as variables in the global namespace. So, $_GET['password'] is also just $password. While convenient, this introduces major security problems because malicious users can easily set variables from the outside and overwrite trusted internal variables. Starting with PHP 4.2, register_globals defaults to off. With this knowledge, here is a basic script to put things together. 
The form asks the user to enter his first name, then replies with a welcome message. The HTML for the form looks like this: <form action="/hello.php" method="post"> What is your first name? <input type="text" name="first_name"> <input type="submit" value="Say Hello"> </form> The name of the text input element inside the form is first_name. Also, the method of the form is post. This means that when the form is submitted, $_POST['first_name'] will hold whatever string the user typed in. (It could also be empty, of course, if he didn't type anything.) For simplicity, however, let's assume the value in the variable is valid. (The term "valid" is open for definition, depending on certain criteria, such as not being empty, not being an attempt to break into the system, etc.) This allows us to omit the error checking stage, which is important but gets in the way of this simple example. So, here is a simple hello.php script to process the form: echo 'Hello ' . $_POST['first_name'] . '!'; If the user's first name is Joe, PHP prints out: Hello Joe! Processing Form Input Problem You want to use the same HTML page to emit a form and then process the data entered into it. In other words, you're trying to avoid a proliferation of pages that each handle different steps in a transaction. Solution Use a hidden field in the form to tell your program that it's supposed to be processing the form. In this case, the hidden field is named stage and has a value of process: if (isset($_POST['stage']) && ('process' == $_POST['stage'])) { process_form(); } else { print_form(); } Discussion During the early days of the Web, when people created forms, they made two pages: a static HTML page with the form and a script that processed the form and returned a dynamically generated response to the user. This was a little unwieldy, because form.html led to form.cgi and if you changed one page, you needed to also remember to edit the other, or your script might break. 
Forms are easier to maintain when all parts live in the same file and context dictates which sections to display. Use a hidden form field named stage to track your position in the flow of the form process; it acts as a trigger for the steps that return the proper HTML to the user. Sometimes, however, it's not possible to design your code to do this; for example, when your form is processed by a script on someone else's server. When writing the HTML for your form, however, don't hardcode the path to your page directly into the action. This makes it impossible to rename or relocate your page without also editing it. Instead, PHP supplies a helpful variable: $_SERVER['PHP_SELF'] This variable is an alias to the URL of the current page. So, set the value of the action attribute to that value, and your form always resubmits, even if you've moved the file to a new place on the server. So, the example in the introduction of this chapter is now: if (isset($_POST['stage']) && ('process' == $_POST['stage'])) { process_form(); } else { print_form(); } function print_form() { echo <<<END <form action="$_SERVER[PHP_SELF]" method="post"> What is your first name? <input type="text" name="first_name"> <input type="hidden" name="stage" value="process"> <input type="submit" value="Say Hello"> </form> END; } function process_form() { echo 'Hello ' . $_POST['first_name'] . '!'; } If your form has more than one step, just set stage to a new value for each step. See Also Recipe 9.4 for handling multipage forms. Validating Form Input Problem You want to ensure data entered from a form passes certain criteria. Solution Create a function that takes a string to validate and returns true if the string passes a check and false if it doesn't. Inside the function, use regular expressions and comparisons to check the data. For example, Example 9-1 shows the pc_validate_zipcode( ) function, which validates a U.S. Zip Code. Example 9-1. 
pc_validate_zipcode( ) function pc_validate_zipcode($zipcode) { return preg_match('/^[0-9]{5}([- ]?[0-9]{4})?$/', $zipcode); } Here's how to use it: if (pc_validate_zipcode($_REQUEST['zipcode'])) { // U.S. Zip Code is okay, can proceed process_data(); } else { // this is not an okay Zip Code, print an error message print "Your ZIP Code should be 5 digits (or 9 digits, if you're "; print "using ZIP+4)."; print_form(); } Discussion Deciding what constitutes valid and invalid data is almost more of a philosophical task than a straightforward matter of following a series of fixed steps. In many cases, what may be perfectly fine in one situation won't be correct in another. The easiest check is making sure the field isn't blank. The empty( ) function best handles this problem. Next come relatively easy checks, such as the case of a U.S. Zip Code. Usually, a regular expression or two can solve these problems. For example: /^[0-9]{5}([- ]?[0-9]{4})?$/ matches all valid U.S. Zip Codes. Sometimes, however, coming up with the correct regular expression is difficult. If you want to verify that someone has entered only two names, such as "Alfred Aho," you can check against: /^[A-Za-z]+ +[A-Za-z]+$/ However, Tim O'Reilly can't pass this test. An alternative is /^\S+\s+\S+$/; but then Donald E. Knuth is rejected. So think carefully about the entire range of valid input before writing your regular expression. In some instances, even with regular expressions, it becomes difficult to check if the field is legal. One particularly popular and tricky task is validating an email address, as discussed in Recipe 13.7. Another is how to make sure a user has correctly entered the name of her U.S. state. You can check against a listing of names, but what if she enters her postal service abbreviation? Will MA instead of Massachusetts work? What about Mass.? One way to avoid this issue is to present the user with a dropdown list of pregenerated choices. 
Using a select element, users are forced by the form's design to select a state in the format that always works, which can reduce errors. This, however, presents another series of difficulties. What if the user lives some place that isn't one of the choices? What if the range of choices is so large this isn't a feasible solution? There are a number of ways to solve these types of problems. First, you can provide an "other" option in the list, so that a non-U.S. user can successfully complete the form. (Otherwise, she'll probably just pick a place at random, so she can continue using your site.) Next, you can divide the registration process into a two-part sequence. For a long list of options, a user begins by picking the letter of the alphabet his choice begins with; then, a new page provides him with a list containing only the choices beginning with that letter. Finally, there are even trickier problems. What do you do when you want to make sure the user has correctly entered information, but you don't want to tell her you did so? A situation where this is important is a sweepstakes; in a sweepstakes, there's often a special code box on the entry form in which a user enters a string — AD78DQ — from an email or flier she's received. You want to make sure there are no typos, or your program won't count her as a valid entrant. You also don't want to allow her to just guess codes, because then she could try out those codes and crack the system. The solution is to have two input boxes. A user enters her code twice; if the two fields match, you accept the data as legal and then (silently) validate the data. If the fields don't match, you reject the entry and have the user fix it. This procedure eliminates typos and doesn't reveal how the code validation algorithm works; it can also prevent misspelled email addresses. Finally, PHP performs server-side validation. 
Server-side validation requires that a request be made to the server, and a page returned in response; as a result, it can be slow. It's also possible to do client-side validation using JavaScript. While client-side validation is faster, it exposes your code to the user and may not work if the client doesn't support JavaScript or has disabled it. Therefore, you should always duplicate all client-side validation code on the server. See Also Recipe 13.7 for a regular expression for validating email addresses; Chapter 7, "Validation on the Server and Client," of Web Database Applications with PHP and MySQL (Hugh Williams and David Lane, O'Reilly). Working with Multipage Forms Problem You want to use a form that displays more than one page and preserve data from one page to the next. Solution Use session tracking: session_start(); $_SESSION['username'] = $_GET['username']; You can also include variables from a form's earlier pages as hidden input fields in its later pages: <input type="hidden" name="username" value="<?php echo htmlentities($_GET['username']); ?>"> Discussion Whenever possible, use session tracking. It's more secure because users can't modify session variables. To begin a session, call session_start( ); this creates a new session or resumes an existing one. Note that this step is unnecessary if you've enabled session.auto_start in your php.ini file. Variables assigned to $_SESSION are automatically propagated. In the Solution example, the form's username variable is preserved by assigning $_GET['username'] to $_SESSION['username']. To access this value on a subsequent request, call session_start( ) and then check $_SESSION['username']: session_start( ); $username = htmlentities($_SESSION['username']); print "Hello $username."; In this case, if you don't call session_start( ), $_SESSION isn't set. 
Be sure to secure the server and location where your session files are located (the filesystem, database, etc.); otherwise your system will be vulnerable to identity spoofing. If session tracking isn't enabled for your PHP installation, you can use hidden form variables as a replacement. However, passing data using hidden form elements isn't secure because anyone can edit these fields and fake a request; with a little work, you can increase the security to a reliable level. The most basic way to use hidden fields is to include them inside your form. <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="get"> <input type="hidden" name="username" value="<?php echo htmlentities($_GET['username']); ?>"> When this form is resubmitted, $_GET['username'] holds its previous value unless someone has modified it. A more complex but secure solution is to convert your variables to a string using serialize( ) , compute a secret hash of the data, and place both pieces of information in the form. Then, on the next request, validate the data and unserialize it. If it fails the validation test, you'll know someone has tried to modify the information. The pc_encode( ) encoding function shown in Example 9-2 takes the data to encode in the form of an array. Example 9-2. pc_encode( ) $secret = 'Foo25bAr52baZ'; function pc_encode($data) { $data = serialize($data); $hash = md5($GLOBALS['secret'] . $data); return array($data, $hash); } In function pc_encode( ), the data is serialized into a string, a validation hash is computed, and those variables are returned. The pc_decode( ) function shown in Example 9-3 undoes the work of its counterpart. Example 9-3. pc_decode( ) function pc_decode($data, $hash) { if (!empty($data) && !empty($hash)) { if (md5($GLOBALS['secret'] . 
$data) == $hash) { return unserialize($data); } else { error_log("Validation Error: Data has been modified"); return false; } } return false; } The pc_decode( ) function recreates the hash of the secret word and compares it to the hash value from the form. If they're equal, $data is valid, so it's unserialized. If it flunks the test, the function writes a message to the error log and returns false. These functions go together like this: <?php $secret = 'Foo25bAr52baZ'; // Load in and validate old data if (! $data = pc_decode($_GET['data'], $_GET['hash'])) { // crack attempt } // Process form (new form data is in $_GET) // Update $data $data['username'] = $_GET['username']; $data['stage']++; unset($data['password']); // Encode results list ($data, $hash) = pc_encode($data); // Store data and hash inside the form ?> <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="get"> ... <input type="hidden" name="data" value="<?php echo htmlentities($data); ?>"> <input type="hidden" name="hash" value="<?php echo htmlentities($hash); ?>"> </form> At the top of the script, we pass pc_decode( ) the variables from the form for decoding. Once the information is loaded into $data, form processing can proceed by checking in $_GET for new variables and in $data for old ones. Once that's complete, update $data to hold the new values and then encode it, calculating a new hash in the process. Finally, print out the new form and include $data and $hash as hidden variables. See Also Recipe 8.6 and Recipe 8.7 for information on using the session module; Recipe 9.9 for details on using htmlentities( ) to escape control characters in HTML output; Recipe 14.4 for information on verifying data with hashes; documentation on session tracking at and in Recipe 8.5; documentation on serialize( ) at and unserialize( ) at. 
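Putting pc_encode( ) and pc_decode( ) together, here is a quick round trip (the sample data and the tampering step are invented for illustration) showing both the success path and what happens when the serialized string is modified:

```php
<?php
// The two functions are reproduced from the recipe above.
$secret = 'Foo25bAr52baZ';

function pc_encode($data) {
    $data = serialize($data);
    $hash = md5($GLOBALS['secret'] . $data);
    return array($data, $hash);
}

function pc_decode($data, $hash) {
    if (!empty($data) && !empty($hash)) {
        if (md5($GLOBALS['secret'] . $data) == $hash) {
            return unserialize($data);
        }
        error_log("Validation Error: Data has been modified");
        return false;
    }
    return false;
}

// Encode some hypothetical form state:
list($data, $hash) = pc_encode(array('username' => 'joe', 'stage' => 2));

// Untampered data round-trips back to the original array:
var_dump(pc_decode($data, $hash));

// Any change to the serialized string invalidates the hash:
var_dump(pc_decode($data . 'x', $hash));   // bool(false)
```

Note that the hash proves integrity, not secrecy: the serialized data is still readable in the page source, so never put passwords or other sensitive values in it.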
Redisplaying Forms with Preserved Information and Error Messages Problem When there's a problem with data entered in a form, you want to print out error messages alongside the problem fields, instead of a generic error message at the top of the form. You also want to preserve the values the user typed into the form the first time. Solution Use an array, $errors, and store your messages in the array indexed by the name of the field. if (! pc_validate_zipcode($_REQUEST['zipcode'])) { $errors['zipcode'] = "This is a bad ZIP Code. ZIP Codes must " . "have 5 numbers and no letters."; } When you redisplay the form, you can display each error by its field and include the original value in the field: echo $errors['zipcode']; $value = isset($_REQUEST['zipcode']) ? htmlentities($_REQUEST['zipcode']) : ''; echo "<input type=\"text\" name=\"zipcode\" value=\"$value\">"; Discussion If your users encounter errors when filling out a long form, you can increase the overall usability of your form if you highlight exactly where the errors need to be fixed. Consolidating all errors in a single array has many advantages. First, you can easily check if your validation process has located any items that need correction; just use count($errors). This method is easier than trying to keep track of this fact in a separate variable, especially if the flow is complex or spread out over multiple functions. Example 9-4 shows the pc_validate_form( ) validation function, which uses an $errors array. Example 9-4. pc_validate_form( ) function pc_validate_form( ) { if (! pc_validate_zipcode($_POST['zipcode'])) { $errors['zipcode'] = "ZIP Codes are 5 numbers"; } if (! pc_validate_email($_POST['email'])) { $errors['email'] = "Email addresses look like user@example.com"; } return $errors; } This is clean code because all errors are stored in one variable. You can easily pass around the variable if you don't want it to live in the global scope. 
Using the variable name as the key preserves the links between the field that caused the error and the actual error message itself. These links also make it easy to loop through items when displaying errors. You can automate the repetitive task of printing the form; the pc_print_form() function in Example 9-5 shows how. Example 9-5. pc_print_form( ) function pc_print_form($errors) { $fields = array('name' => 'Name', 'rank' => 'Rank', 'serial' => 'Serial'); if (count($errors)) { echo 'Please correct the errors in the form below.'; } echo '<table>'; // print out the errors and form variables foreach ($fields as $field => $field_name) { // open row echo '<tr><td>'; // print error if (!empty($errors[$field])) { echo $errors[$field]; } else { echo ' '; // to prevent odd looking tables } echo "</td><td>"; // print name and input $value = isset($_REQUEST[$field]) ? htmlentities($_REQUEST[$field]) : ''; echo "$field_name: "; echo "<input type=\"text\" name=\"$field\" value=\"$value\">"; echo '</td></tr>'; } echo '</table>'; } The complex part of pc_print_form( ) comes from the $fields array. The key is the variable name; the value is the pretty display name. By defining them at the top of the function, you can create a loop and use foreach to iterate through the values; otherwise, you need three separate lines of identical code. This integrates with the variable name as a key in $errors, because you can find the error message inside the loop just by checking $errors[$field]. If you want to extend this example beyond input fields of type text, modify $fields to include more meta-information about your form fields: $fields = array('name' => array('name' => 'Name', 'type' => 'text'), 'rank' => array('name' => 'Rank', 'type' => 'password'), 'serial' => array('name' => 'Serial', 'type' => 'hidden') ); See Also Recipe 9.3 for simple form validation. 
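As a sketch of that extension (this is not the book's code, and the helper name pc_field_html( ) is made up here), the display loop might consult the per-field metadata like this:

```php
<?php
// Hypothetical helper: build one input row from the extended $fields
// metadata described above. Returning a string (rather than echoing)
// keeps the markup easy to test.
function pc_field_html($field, $meta, $request) {
    // Preserve and escape any previously submitted value:
    $value = isset($request[$field]) ? htmlentities($request[$field]) : '';
    return sprintf('%s: <input type="%s" name="%s" value="%s">',
                   htmlentities($meta['name']), $meta['type'], $field, $value);
}

// The extended metadata array from the recipe:
$fields = array('name'   => array('name' => 'Name',   'type' => 'text'),
                'rank'   => array('name' => 'Rank',   'type' => 'password'),
                'serial' => array('name' => 'Serial', 'type' => 'hidden'));

foreach ($fields as $field => $meta) {
    echo pc_field_html($field, $meta, $_REQUEST), "\n";
}
```

The same loop now emits text, password, and hidden inputs; adding another field is a one-line change to $fields rather than a new block of HTML.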
Guarding Against Multiple Submission of the Same Form Problem You want to prevent people from submitting the same form multiple times. Solution Generate a unique identifier and store the token as a hidden field in the form. Before processing the form, check to see if that token has already been submitted. If it hasn't, you can proceed; if it has, you should generate an error. When creating the form, use uniqid( ) to get a unique identifier: <?php $unique_id = uniqid(microtime(),1); ... ?> <input type="hidden" name="unique_id" value="<?php echo $unique_id; ?>"> </form> Then, when processing, look for this ID: $unique_id = $dbh->quote($_GET['unique_id']); $sth = $dbh->query("SELECT * FROM database WHERE unique_id = $unique_id"); if ($sth->numRows( )) { // already submitted, throw an error } else { // act upon the data } Discussion For a variety of reasons, users often resubmit a form. Usually it's a slip of the mouse: double-clicking the Submit button. They may hit their web browser's Back button to edit or recheck information, but then they re-hit Submit instead of Forward. It can be intentional: they're trying to stuff the ballot box for an online survey or sweepstakes. Our Solution prevents the nonmalicious attack and can slow down the malicious user. It won't, however, eliminate all fraudulent use: more complicated work is required for that. The Solution does prevent your database from being cluttered with too many copies of the same record. By generating a token that's placed in the form, you can uniquely identify that specific instance of the form, even when cookies are disabled. When you then save the form's data, you store the token alongside it. That allows you to easily check whether you've already seen this form and which database record it belongs to. Start by adding an extra column to your database table — unique_id — to hold the identifier. When you insert data for a record, add the ID also. 
For example: $username = $dbh->quote($_GET['username']); $unique_id = $dbh->quote($_GET['unique_id']); $sth = $dbh->query("INSERT INTO members ( username, unique_id) VALUES ($username, $unique_id)"); By associating the exact row in the database with the form, you can more easily handle a resubmission. There's no correct answer here; it depends on your situation. In some cases, you'll want to ignore the second posting altogether. In others, you'll want to check if the record has changed, and, if so, present the user with a dialog box asking if they want to update the record with the new information or keep the old data. Finally, to reflect the second form submission, you could update the record silently, so the user never learns of a problem. All these possibilities should be considered given the specifics of the interaction. Our opinion is there's no reason to allow the deficits of HTTP to dictate the user experience. So, while the third choice, silently updating the record, isn't what normally happens, in many ways this is the most natural option. Applications we've developed with this method are more user friendly; the other two methods confuse or frustrate most users. It's tempting to avoid generating a random token and instead use a number one greater than the number of records already in the database. The token and the primary key will thus be the same, and you don't need to use an extra column. A long random token, however, can't be guessed merely by moving to a different integer. See Also Recipe 14.4 for more details on verifying data with hashes; documentation on uniqid( ) at. Processing Uploaded Files Problem You want to process a file uploaded by a user. Solution Use the $_FILES array: // from <input name="event" type="file"> if (is_uploaded_file($_FILES['event']['tmp_name'])) { readfile($_FILES['event']['tmp_name']); // print file on screen } Discussion Starting in PHP 4.1, all uploaded files appear in the $_FILES superglobal array. 
For each file, there are four pieces of information: - name - The name assigned to the form input element - type - The MIME type of the file - size - The size of the file in bytes - tmp_name - The location in which the file is temporarily stored on the server. If you're using an earlier version of PHP, you need to use $HTTP_POST_FILES instead. After you've selected a file from that array, use is_uploaded_file( ) to confirm that the file you're about to process is a legitimate file resulting from a user upload, then process it as you would other files on the system. Always do this. If you blindly trust the filename supplied by the user, someone can alter the request and add names such as /etc/passwd to the list for processing. You can also move the file to a permanent location; use move_uploaded_file( ) to safely transfer the file: // move the file: move_uploaded_file() also does a check of the file's // legitimacy, so there's no need to also call is_uploaded_file() move_uploaded_file($_FILES['event']['tmp_name'], '/path/to/file.txt'); Note that the value stored in tmp_name is the complete path to the file, not just the base name. Use basename( ) to chop off the leading directories if needed. Be sure to check that PHP has permission to read and write to both the directory in which temporary files are saved (see the upload_tmp_dir configuration directive to check where this is) and the location in which you're trying to copy the file. This can often be user nobody or apache, instead of your personal username. Because of this, if you're running under safe_mode, copying a file to a new location will probably not allow you to access it again. Processing files can often be a subtle task because not all browsers submit the same information. It's important to do it correctly, however, or you open yourself up to a possible security hole. 
You are, after all, allowing strangers to upload any file they choose to your machine; malicious people may see this as an opportunity to crack into or crash the computer. As a result, PHP has a number of features that allow you to place restrictions on uploaded files, including the ability to turn off file uploads altogether. So, if you're experiencing difficulty processing uploaded files, check that your file isn't being rejected because it seems to pose a security risk. To do such a check, first make sure file_uploads is set to On inside your configuration file. Next, make sure your file size isn't larger than upload_max_filesize; this defaults to 2 MB, which stops someone trying to crash the machine by filling up the hard drive with a giant file. Additionally, there's a post_max_size directive, which controls the maximum size of all the POST data allowed in a single request; its initial setting is 8 MB. From the perspective of browser differences and user error, if you can't get $_FILES to populate with information, make sure you add enctype="multipart/form-data" to the form's opening tag; PHP needs this to trigger processing. If you can't do so, you need to manually parse $HTTP_RAW_POST_DATA. (See RFCs 1521 and 1522 for the MIME specification at and.) Also, if no file is selected for uploading, versions of PHP prior to 4.1 set tmp_name to none; newer versions set it to the empty string. PHP 4.2.1 allows files of length 0. To be sure a file was uploaded and isn't empty (although blank files may be what you want, depending on the circumstances), you need to make sure tmp_name is set and size is greater than 0. Last, not all browsers necessarily send the same MIME type for a file; what they send depends on their knowledge of different file types. See Also Documentation on handling file uploads at and on basename() at. 
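Pulling the checks from this recipe together, a defensive handler might look like the following sketch. The field name event follows the earlier example, the destination path is invented, and the 2 MB cap mirrors the upload_max_filesize default mentioned above:

```php
<?php
// Hypothetical upload handler combining the recipe's checks in order:
// presence, legitimacy, size, then relocation.
$max_bytes = 2 * 1024 * 1024;   // mirrors the 2 MB upload_max_filesize default

if (!isset($_FILES['event']) || $_FILES['event']['tmp_name'] === '') {
    // Covers both the "no file selected" case and pre-4.1's 'none' sentinel
    // would need a separate check on very old PHP versions.
    echo "no file was uploaded\n";
} elseif (!is_uploaded_file($_FILES['event']['tmp_name'])) {
    // Never trust the path blindly; a forged request could name /etc/passwd.
    echo "not a legitimate uploaded file\n";
} elseif ($_FILES['event']['size'] <= 0 || $_FILES['event']['size'] > $max_bytes) {
    echo "file is empty or too large\n";
} elseif (!move_uploaded_file($_FILES['event']['tmp_name'], '/tmp/event-upload')) {
    // Fails if PHP can't write to the destination; check directory permissions.
    echo "could not save the file\n";
} else {
    echo "upload stored\n";
}
```

Run from the command line (where $_FILES is empty), this prints the first branch; in a real request each branch maps to one of the failure modes the Discussion describes.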
Securing PHP's Form Processing

Problem

You want to securely process form input variables and not allow someone to maliciously alter variables in your code.

Solution

Disable the register_globals configuration directive and access variables only from the $_REQUEST array. To be even more secure, use $_GET, $_POST, and $_COOKIE to make sure you know exactly where your variables are coming from. To do this, make sure this line appears in your php.ini file:

register_globals = Off

As of PHP 4.2, this is the default configuration.

Discussion

When register_globals is set to On, PHP automatically creates global variables from external input, such as form and cookie values. Here is a simple example. You have a page in which a user enters a username and password. If they are validated, you return her user identification number and use that numerical identifier to look up and print out her personal information:

// assume magic_quotes_gpc is set to Off
$username = $dbh->quote($_GET['username']);
$password = $dbh->quote($_GET['password']);

$sth = $dbh->query("SELECT id FROM users WHERE username = $username AND password = $password");
if (1 == $sth->numRows( )) {
    $row = $sth->fetchRow(DB_FETCHMODE_OBJECT);
    $id = $row->id;
} else {
    "Print bad username and password";
}

if (!empty($id)) {
    $sth = $dbh->query("SELECT * FROM profile WHERE id = $id");
}

Normally, $id is set only by your program and is a result of a verified database lookup. However, if someone alters the GET string and passes in a value for $id, with register_globals enabled, even after a bad username and password lookup, your script still executes the second database query and returns results. Without register_globals, $id remains unset because only $_REQUEST['id'] (and $_GET['id']) are set.

Of course, there are other ways to solve this problem, even when using register_globals. You can restructure your code not to allow such a loophole.
$sth = $dbh->query("SELECT id FROM users WHERE username = $username AND password = $password");
if (1 == $sth->numRows( )) {
    $row = $sth->fetchRow(DB_FETCHMODE_OBJECT);
    $id = $row->id;
    if (!empty($id)) {
        $sth = $dbh->query("SELECT * FROM profile WHERE id = $id");
    }
} else {
    "Print bad username and password";
}

Now you use $id only when it's been explicitly set from a database call. Sometimes, however, it is difficult to do this because of how your program is laid out. Another solution is to manually unset( ) or initialize all variables at the top of your script:

unset($id);

This removes the bad $id value before it gets a chance to affect your code. However, because PHP doesn't require variable initialization, it's possible to forget to do this in one place; a bug can then slip in without a warning from PHP.

See Also Documentation on register_globals.

Escaping Control Characters from User Data

Problem

You want to securely display user-entered data on an HTML page.

Solution

For HTML you wish to display as plain text, with embedded links and other tags, use htmlentities( ):

echo htmlentities("<p>O'Reilly & Associates</p>");
&lt;p&gt;O'Reilly &amp; Associates&lt;/p&gt;

Discussion

PHP has a pair of functions to escape characters in HTML. The most basic is htmlspecialchars( ), which escapes four characters: < > " and &. Depending on optional parameters, it can also translate ' instead of or in addition to ". For more complex encoding, use htmlentities( ); it expands on htmlspecialchars( ) to encode any character that has an HTML entity:

$html = '<a href="fletch.html">Stew\'s favorite movie.</a>';
echo htmlentities($html);
&lt;a href=&quot;fletch.html&quot;&gt;Stew's favorite movie.&lt;/a&gt;

Both functions allow you to pass in a character encoding table that defines what characters map to what entities. To retrieve either table used by the previous functions, use get_html_translation_table( ) and pass in HTML_ENTITIES or HTML_SPECIALCHARS.
This returns an array that maps characters to entities; you can use it as the basis for your own table:

$copyright = "Copyright © 2003 O'Reilly & Associates\n";

$table = get_html_translation_table(); // get <, >, ", and &
$table['©'] = '&copy;';                // add ©

print strtr($copyright, $table);
Copyright &copy; 2003 O'Reilly &amp; Associates

See Also Recipe 13.9, Recipe 18.21, and Recipe 10.8; documentation on htmlentities( ) and htmlspecialchars( ).

Handling Remote Variables with Periods in Their Names

Problem

You want to process a variable with a period in its name, but when a form is submitted, you can't find the variable.

Solution

Replace the period in the variable's name with an underscore. For example, if you have a form input element named foo.bar, you access it inside PHP as the variable $_REQUEST['foo_bar'].

Discussion

Because PHP uses the period as a string concatenation operator, a form variable called animal.height is automatically converted to animal_height, which avoids creating an ambiguity for the parser. While $_REQUEST['animal.height'] lacks this ambiguity, for legacy and consistency reasons, the conversion happens regardless of your register_globals setting.

You usually deal with automatic variable name conversion when you process an image used to submit a form. For instance, you have a street map showing the location of your stores, and you want people to click on one for additional information. Here's an example:

<input type="image" name="locations" src="locations.gif">

When a user clicks on the image, the x and y coordinates are submitted as locations.x and locations.y. So, in PHP, to find where a user clicked, you need to check $_REQUEST['locations_x'] and $_REQUEST['locations_y'].

It's possible, through a series of manipulations, to create a variable inside PHP with a period:

${"a.b"} = 123;  // forced coercion using {}

$var = "c.d";    // indirect variable naming
$$var = 456;

print ${"a.b"} . "\n";
print $$var .
"\n"; 123 456 This is generally frowned on because of the awkward syntax. See Also Documentation on variables from outside PHP at. Using Form Elements with Multiple Options Problem You have a form element with multiple values, such as a checkbox or select element, but PHP sees only one value. Solution Place brackets ([ ]) after the variable name: <input type="checkbox" name="boroughs[]" value="bronx"> The Bronx <input type="checkbox" name="boroughs[]" value="brooklyn"> Brooklyn <input type="checkbox" name="boroughs[]" value="manhattan"> Manhattan <input type="checkbox" name="boroughs[]" value="queens"> Queens <input type="checkbox" name="boroughs[]" value="statenisland"> Staten Island Inside your program, treat the variable as an array: print 'I love ' . join(' and ', $boroughs) . '!'; Discussion By placing [ ] after the variable name, you tell PHP to treat it as an array instead of a scalar. When it sees another value assigned to that variable, PHP auto-expands the size of the array and places the new value at the end. If the first three boxes in the Solution were checked, it's as if you'd written this code at the top of the script: $boroughs[ ] = "bronx"; $boroughs[ ] = "brooklyn"; $boroughs[ ] = "manhattan"; You can use this to return information from a database that matches multiple records: foreach ($_GET['boroughs'] as $b) { $boroughs[ ] = strtr($dbh->quote($b),array('_' => '\_', '%' => '\%')); } $locations = join(',', $boroughs); $dbh->query("SELECT address FROM locations WHERE borough IN ($locations)"); This syntax also works with multidimensional arrays: <input type="checkbox" name="population[NY][NYC]" value="8008278">New York... If checked, this form element sets $population['NY']['NYC'] to 8008278. Placing a [ ] after a variable's name can cause problems in JavaScript when you try to address your elements. Instead of addressing the element by its name, use the numerical ID. You can also place the element name inside single quotes. 
Another way is to assign the element an ID, perhaps the name without the [], and use that ID instead. Given:

<form>
<input type="checkbox" name="myName[]" value="myValue" id="myName">
</form>

the following three refer to the same form element:

document.forms[0].elements[0];           // using numerical IDs
document.forms[0].elements['myName[]'];  // using the name with quotes
document.forms[0].elements['myName'];    // using ID you assigned

See Also The introduction to Chapter 4 for more on arrays.

Creating Dropdown Menus Based on the Current Date

Problem

You want to create a series of dropdown menus that are based automatically on the current date.

Solution

Use date( ) to find the current time in the web server's time zone and loop through the days with mktime( ). The following code generates option values for today and the six days that follow. In this case, "today" is January 1, 2002.

list($hour, $minute, $second, $month, $day, $year) = split(':', date('h:i:s:m:d:Y'));

// print out one week's worth of days
for ($i = 0; $i < 7; ++$i) {
    $timestamp = mktime($hour, $minute, $second, $month, $day + $i, $year);
    $date = date("D, F j, Y", $timestamp);
    print "<option value=\"$timestamp\">$date</option>\n";
}

<option value="946746000">Tue, January 1, 2002</option>
<option value="946832400">Wed, January 2, 2002</option>
<option value="946918800">Thu, January 3, 2002</option>
<option value="947005200">Fri, January 4, 2002</option>
<option value="947091600">Sat, January 5, 2002</option>
<option value="947178000">Sun, January 6, 2002</option>
<option value="947264400">Mon, January 7, 2002</option>

Discussion

In the Solution, we set the value for each date as its Unix timestamp representation because we find this easier to handle inside our programs. Of course, you can use any format you find most useful and appropriate. Don't be tempted to eliminate the calls to mktime( ); dates and times aren't as consistent as you'd hope.
Depending on what you're doing, you might not get the results you want. For example:

$timestamp = mktime(0, 0, 0, 10, 24, 2002); // October 24, 2002
$one_day = 60 * 60 * 24;                    // number of seconds in a day

// print out one week's worth of days
for ($i = 0; $i < 7; ++$i) {
    $date = date("D, F j, Y", $timestamp);
    print "<option value=\"$timestamp\">$date</option>";
    $timestamp += $one_day;
}

<option value="972619200">Fri, October 25, 2002</option>
<option value="972705600">Sat, October 26, 2002</option>
<option value="972792000">Sun, October 27, 2002</option>
<option value="972878400">Sun, October 27, 2002</option>
<option value="972964800">Mon, October 28, 2002</option>
<option value="973051200">Tue, October 29, 2002</option>
<option value="973137600">Wed, October 30, 2002</option>

This script should print out the month, day, and year for a seven-day period starting October 24, 2002. However, it doesn't work as expected. Why are there two "Sun, October 27, 2002"s? The answer: daylight saving time. It's not true that the number of seconds in a day stays constant; in fact, it's almost guaranteed to change. Worst of all, if you're not near either of the change-over dates, you're liable to miss this bug during testing.

See Also Chapter 3, particularly Recipe 3.13, but also Recipe 3.2, Recipe 3.3, Recipe 3.5, Recipe 3.11, and Recipe 3.14; documentation on date( ) and mktime( ).
http://commons.oreilly.com/wiki/index.php?title=PHP_Cookbook/Forms&diff=prev&oldid=7321
Are you wondering how to write an AWS Lambda function with Deno? This is going to be a long post, so please be patient and read the steps carefully. Do also comment in case you have any questions!

Before we begin to discuss the steps for writing an AWS Lambda function with Deno, first you need to know that AWS Lambda supports only a limited number of programming languages as of now.

AWS Lambda supports the following languages:

- Node.js
- Python (versions 2.7 and 3.x)
- Java (versions 8 and 11)
- .NET Core framework
- Ruby
- Go

In a later post I shall be creating a react-app that will use the Deno-based Lambda function which we are going to create in this post.

The list of programming languages above does not mention Deno.js. Therefore AWS Lambda with Deno is not supported by default. Do you think it's because Deno.js is fairly new? Well, if that is the case, then what about support for PHP?

Therefore the question is: why doesn't AWS support all programming languages when writing a Lambda?

Because AWS leverages the platform's "custom runtime" feature to let developers bring their own choice of programming language. As a result, to manage and run application code with your choice of runtime, you will need to supply the runtime yourself. Therefore Deno.js, or simply put "Deno", can be supported by an AWS Lambda function by creating our own "custom runtime" for it!

How do you create a custom runtime to run AWS Lambda with Deno?

List of steps to follow to run AWS Lambda with a Deno runtime:

- Firstly, we need the below files.
- After that we will write a "bootstrap" file and package it as part of our Lambda function's zip archive.
- As a result, the "bootstrap" file is going to have the instructions for loading the Deno runtime.
- Also, the "bootstrap" file will run the command "deno run FILENAME.ts --allow-..." inside the default shell for our Lambda function.

Wait, did I just say that there is a shell that you can access to run your AWS Lambda with Deno? Yes, that's exactly what we are going to do. And just to support this statement, I would like to bring up the point that for every Lambda runtime there is always a backing AMI. Therefore the shell command that we are going to be executing would actually run inside the shell of the AMI on which we are going to be building our Deno runtime! (We have done this heavy lifting for you.) And as part of the code you will find a "deno" runtime binary, created from a compatible AMI. Therefore no need to worry.

This is how our bootstrap file is going to look:
And just to support this statement , I would like bring the point here that for every Lambda runtime, there is always a backing AMI . Therefore the shell command that we are going to be executing- would actually run inside the shell for the AMI on which we are going to be building our Deno runtime !!!. (We have done this heavy lifting for you ). And as part of the code you will fine a “deno” runtime, created from a compatible AMI . Therefore No need to worry. This is how our bootstrap file is going to look like #!/bin/sh set -euo pipefail Firstly, this line will report which was the last command that failed during our execution. If nothing fails, the code will continue to execute further. SCRIPT_DIR=$(cd $(dirname $0); pwd) HANDLER_NAME=$(echo "$_HANDLER" | cut -d. -f2) HANDLER_FILE=$(echo "$_HANDLER" | cut -d. -f1) After that these lines are going to create variable for HANDLER_NAME and HANDLER_FILE. We are using the “_HANDLER” variable to create the variables. Since the Lambda runtime sets this “_HANDLER” variable. Follow the video in case you want to see the demo export DENO_DIR=/tmp/deno_dir Next we need to export the DENO_DIR for writing aws lambda with Deno.js echo " ... SOME CODE ... " > /tmp/runtime.ts And after that we need to write some code for creating the lambda context to run aws lambda with Deno.js - Finally, we can execute the “deno run … ” command to execute our runtime $SCRIPT_DIR/deno run --allow-net --allow-read /tmp/runtime.ts - Let’s see how we are going to replace the block “SOME CODE …” Firstly, we will be adding the below lines of code import { $HANDLER_NAME } from '$LAMBDA_TASK_ROOT/$HANDLER_FILE.ts'; const _APIROOT = '{AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/'; const _query = 'Lambda-Runtime-Aws-Request-Id'; And then this two lines will import the Handler file and create api root for invocation (async () => { while (true) { ... actual custom runtime code here ... } })(); After that we can create an async function . 
This function will run infinitely; we use "while (true)". The infinite execution ensures that the Lambda context remains valid for future invocations, even after the current execution has finished.

Custom runtime code inside the while (true) block:

const next = await fetch(_APIROOT + 'next');
const requestId = next.headers.get(_query);
const res = await $HANDLER_NAME(await next.json());

The above three lines will:

- Firstly, declare the "next" invocation variable,
- then create the requestId, and
- finally execute the handler function to generate the response.

In the end, once the response is generated, I called the Lambda runtime API with the response:

await (await fetch(
  _APIROOT + requestId + '/response',
  { method: 'POST', body: JSON.stringify(res) }
)).blob();
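The pieces above can be followed end to end without deploying anything. Below is a minimal sketch of the polling loop in plain JavaScript (Node-flavored), with the Lambda runtime API replaced by a stub fetch implementation. The names fakeFetch, handler, and runtimeLoop are mine, and the real loop would use while (true) instead of a bounded count.

```javascript
// Hypothetical handler, standing in for the user code that the
// bootstrap imports via $HANDLER_NAME.
async function handler(event) {
  return { statusCode: 200, body: `Hello, ${event.name}!` };
}

// The polling loop from the post, bounded so the sketch terminates.
async function runtimeLoop(fetchImpl, apiRoot, iterations) {
  const responses = [];
  for (let i = 0; i < iterations; i++) {
    // 1. ask the runtime API for the next invocation
    const next = await fetchImpl(apiRoot + "next");
    // 2. read the request id out of the response headers
    const requestId = next.headers.get("Lambda-Runtime-Aws-Request-Id");
    // 3. run the handler on the event payload
    const res = await handler(await next.json());
    // 4. POST the result back to the runtime API
    await fetchImpl(apiRoot + requestId + "/response", {
      method: "POST",
      body: JSON.stringify(res),
    });
    responses.push(res);
  }
  return responses;
}

// Stub runtime API so the loop can run locally.
function fakeFetch(url, options) {
  if (options && options.method === "POST") {
    fakeFetch.posted.push(options.body); // record what the loop sent back
    return Promise.resolve({});
  }
  return Promise.resolve({
    headers: { get: () => "req-1" },
    json: async () => ({ name: "Deno" }),
  });
}
fakeFetch.posted = [];
```

Running runtimeLoop(fakeFetch, "/", 1) records one POSTed response in fakeFetch.posted, mirroring the request/response cycle that runtime.ts performs against the real AWS_LAMBDA_RUNTIME_API endpoint.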
https://www.codegigs.app/aws-lambda-with-deno-js-runtime/
Hello! First, thank you for reading this, and I'm even more grateful if you can help me.

So, I'm currently building a robot which avoids walls and obstacles using infrared LEDs. At first I just want my robot to stop when any obstacle is detected, even if it's just by one LED. But when I compile my code, there is just nothing going on and the robot doesn't move even if the way is clear.

My code:

#include "AlphaBot.h"

int LSensorPin = 7;
int RSensorPin = 8;
int LSensor; // Left Infrared Proximity Sensor signal value
int RSensor; // Right Infrared Proximity Sensor signal value

AlphaBot Car1 = AlphaBot();

void ProximityConfig()
{
  pinMode(RSensorPin, INPUT); // Define the input pin of Right Infrared Proximity Sensor
  pinMode(LSensorPin, INPUT); // Define the input pin of Left Infrared Proximity Sensor
}

void setup()
{
  ProximityConfig();
  Car1.SetSpeed(0);
  // Serial.begin(9600);
}

void loop()
{
  RSensor = digitalRead(RSensorPin);
  LSensor = digitalRead(LSensorPin);

  if (LSensor == HIGH && RSensor == HIGH) // If both sensors are clear, run forward
    Car1.Forward();
  else if (LSensor == LOW or RSensor == LOW) // If the right or left sensor has a signal, stop the robot
    Car1.SetSpeed(0);
}

Can anyone figure out what's going on?
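One way to chase a bug like this is to separate the decision logic from the hardware so it can be tested on a PC. The sketch below is mine, not AlphaBot code (the library's API isn't documented in the post); it only mirrors the branches in loop(). One thing it makes visible: if the speed set by SetSpeed(0) in setup() is never raised, Forward() may have no visible effect no matter what the sensors report, which would match the symptom described.

```cpp
// Pure decision helper mirroring the branches in loop(), so the
// sensor logic can be unit-tested without the robot. A sensor reads
// "clear" (HIGH) when no obstacle is detected.
enum class Action { Forward, Stop };

Action decide(bool leftClear, bool rightClear) {
    // Drive forward only when both sensors report a clear path;
    // an obstacle on either side stops the robot.
    if (leftClear && rightClear) return Action::Forward;
    return Action::Stop;
}
```

On the robot itself, loop() would translate decide(...) back into Car1.Forward() or a stop, after first calling SetSpeed() with a nonzero value.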
https://forum.arduino.cc/t/surprisingly-my-code-doesnt-work/530008
#include <iostream>
#include <cmath>
using namespace std;

int main(void)
{
    double X, Z, angle, f, R, L, C;
    const double PI = 4.0 * atan(1.0);

    cout << "SERIES RLC CIRCUIT CALCULATION \n";
    cout << "Please enter values for frequency, resistance, inductance, and capacitance. \n";
    cout << "This program will calculate the impedance and the phase angle of the circuit. \n";

    do {
        cout << "Enter the value of the frequency in hertz: ";
        cin >> f;
        if (f < 0) {
            cout << "Frequency must be non-negative \n";
        } else if (f == 0) {
            cout << "Zero frequency entered. Program terminated. \n";
            cout << "Goodbye. \n";
            return 0;
        }
    } while (f < 0);

    do {
        cout << "Enter the value of the resistor in ohms: ";
        cin >> R;
        if (R < 0) {
            cout << "Resistance must be non-negative. \n";
        }
    } while (R < 0);

    do {
        cout << "Enter the value of the inductor in henrys: ";
        cin >> L;
        if (L < 0) {
            cout << "Inductance must be non-negative. \n";
        }
    } while (L < 0);

    do {
        cout << "Enter the value of the capacitor in farads: ";
        cin >> C;
        if (C < 0) {
            cout << "Capacitance must be positive. \n";
        }
    } while (C < 0);

    cout << "\n";

    X = (2*PI*f*L) - (1/ (2*PI*f*C));
    Z = sqrt((R*R) + (X*X));
    angle = (atan(X/R))*(180/PI);

    cout << "The impedance of this series RLC circuit is " << Z << " ohms \n";
    cout << "The phase angle is " << angle << " degrees \n\n";

    fflush(stdin);
    getc(stdin);
    return 0;
}

Everything calculates properly, but my problem is that I want it to continue to loop after the user finishes making a calculation. So after it displays the impedance and phase angle, it should go back to where it asks the user to enter the frequency. What should I do, and where should I put in an extra command?

Thanks
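One common fix, sketched here under my own naming (the rlc helper below is not from the original post), is to factor the circuit math into a function and wrap the prompt/compute/print body in an outer do/while driven by a yes/no answer:

```cpp
#include <cmath>

// Series RLC math from the post, factored into a helper so the
// surrounding program can call it once per pass of an outer loop.
struct RlcResult {
    double impedance;  // ohms
    double angleDeg;   // degrees
};

RlcResult rlc(double f, double R, double L, double C) {
    const double PI = 4.0 * std::atan(1.0);
    double X = (2 * PI * f * L) - (1.0 / (2 * PI * f * C));
    // atan2 avoids dividing by zero when R == 0; for R > 0 it
    // matches the post's atan(X/R).
    return { std::sqrt(R * R + X * X), std::atan2(X, R) * (180.0 / PI) };
}

// The looping structure itself (illustrative skeleton):
//
//   char again;
//   do {
//       // ... prompt for f, R, L, C exactly as before ...
//       // RlcResult r = rlc(f, R, L, C);
//       // ... print r.impedance and r.angleDeg ...
//       std::cout << "Another calculation? (y/n): ";
//       std::cin >> again;
//   } while (again == 'y' || again == 'Y');
```

The do/while at the bottom is the direct answer to the question: the whole body runs at least once, and the "extra command" is the final prompt that decides whether to repeat.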
http://www.dreamincode.net/forums/topic/61270-the-loop-is-ending-before-i-want-it-to/
#include "stdafx.h"
#include <iostream>
#include <vector>
#include <algorithm>
#include <string>
#include <iomanip>
using namespace std;

vector<string>::iterator findString(vector<string> word, string x)
{
    auto beg = word.begin();
    for ( ; beg != word.end() && x != *beg; beg++)
        ;
    return beg;
}

vector<int>::iterator convert(vector<string>::iterator &t, vector<string> &word, vector<int> &amount)
{
    vector<int>::iterator count = amount.begin();
    vector<string>::iterator beg = word.begin();
    for (; beg != word.end() && beg != t; count++, beg++)
        ;
    return count;
}

int main()
{
    vector<string> word;
    vector<int> amount;
    string x;

    cout << "Please enter words list:" << endl;
    cin >> x;
    word.push_back(x);
    amount.push_back(1);

    while (cin >> x) {
        vector<string>::iterator t = findString(word, x);
        if (t != word.end()) // A: Error happens in this line.
        {
            vector<int>::iterator i = convert(t, word, amount);
            (*i)++;
        }
        else {
            word.push_back(x);
            amount.push_back(1);
        }
    }

    for (auto r : word)
        cout << setw(4) << r;
    cout << endl;
    for (auto r : amount)
        cout << setw(4) << r;
    cout << endl;

    return 0;
}

Your error doesn't have anything to do with strings or ints or conversions. It's because you're creating a new copy of vector<string> word every time you call findString, and so its returned iterator is an iterator of a different vector (it's an iterator into a copy). You have:

vector<string>::iterator findString(vector<string> word, string x) { ... }

And then later, where the error is:

vector<string>::iterator t = findString(word, x);
if (t != word.end()) // A: Error happens in this line.

And so findString is returning an iterator into a copy of word instead of word itself, and MS' vector implementation has an assertion failure when you try to compare it with an iterator from a different vector, because it figures you most likely made a mistake. Instead pass your vectors by reference, for example:

vector<string>::iterator findString(vector<string> & word, string x) { ...
}

Note that word is now passed by reference, and so won't be copied. That said, you should fix your error first as an exercise, of course, but after you've got your own code working, consider replacing findString with std::find (since it already exists). Also it seems you can simplify your code greatly by using a map<string,int> to maintain word counts. And a few other things.
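To illustrate the map<string,int> suggestion, here is a minimal sketch (the function name is mine): the map keeps each word next to its count, so the parallel word/amount vectors and the iterator bookkeeping in convert disappear entirely.

```cpp
#include <map>
#include <string>
#include <vector>

// Count occurrences of each word. operator[] value-initializes the
// count to 0 the first time a word is seen, so no "is it already
// there?" lookup is needed.
std::map<std::string, int> countWords(const std::vector<std::string>& words) {
    std::map<std::string, int> counts;
    for (const auto& w : words) {
        ++counts[w];
    }
    return counts;
}
```

In the original program, the while (cin >> x) loop would simply do ++counts[x]; printing the two aligned rows then becomes one loop over the map.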
https://codedump.io/share/21TKoTcWgpiQ/1/why-are-the-iterators-not-compatible
now the VPN Server is reachable from the Internet, make nmd vpn for pc how to use sure your web browser can access to any web sites. Your computer might not be connected to the Internet. Anywhere via the VPN Azure Cloud Servers. If the "Status: Connected" never comes,continue reading. September 25, "Portal's smart camera follows the action, keeping nmd vpn for pc how to use you in frame and everyone in view states the promo for the product. Nmd vpn for pc how to use 15, n nmd vpn for pc how to use Windows. 2016 No comments Article n.can an embedded device running Windows 10 IoT Core be configured to establish a nmd vpn for pc how to use site-to-site VPN connection with a VPN server hosted on some private cloud? if your system administrator doesn't permit it, you should take a permission from his superior instead. However, you should obtain nmd vpn for pc how to use a kali network manager problem permission from your system administrators by mouth if your company has a rule to require to do so.tested (Default)) X nmd vpn for pc how to use c ofor Rin logr. "the" or "where will mean the results take longer to appear due to the number of them. Exe Detected by Microsoft as TrojanSpy:MSIL /n!B and by. Startup Item or Name Status Command or Data Description. 11.57 MB For Windows 7, Windows 8 (64-bit, 32-bit) / Vista / XP(Shareware) avast! Antivirus Antivirus Software - A very fast and effective free virus scanner produced by avast 71.3 MB For Windows 7, Windows 8 (64-bit, 32-bit) / Vista / XP(Free) 360 Total Security. Nmd vpn for pc how to use in USA and United Kingdom! survey Manual Anchor: nmd vpn for pc how to use #i1009768 parent tract A parent tract is the whole of the property of which a parcel taking is being made.amendment to the Privacy Policy The privacy policy of this site will be amended at any time in accordance with nmd vpn for pc how to use the needs. 
the associated VPN azure site to site vpn cannot ping tunnel automatically opened, when the user click on one of the Remote Desktop Sharing session, this feature enables a user to share his machine on the corporate network from a remote location like home.for those nmd vpn for pc how to use that are unfamiliar, one of the great benefits deploying Sophos UTM in your home network is the ability to configure a VPN with incredible ease. Setup a Sophos UTM SSL VPN In 7 Simple Steps! : Kindle Paperwhite 3 Kindle Voyage. : , 6- e-ink Carta. , . Voyage - . . . vPN- Premium.,,the file is located nmd vpn for pc how to use in AppData. since my main nmd vpn for pc how to use desktop is Ubuntu 18.04 I found an alternative client (SNX)) which worked until some weeks ago. However, on Windows I can connect to a VPN with the Check Point Endpoint Security client.visit Site Features NordVPN has a number of security features that make it particularly unique and attractive for nmd vpn for pc how to use users looking to protect their privacy.this feature enables a nmd vpn for pc how to use user to share his machine on the corporate network from a remote location like home. Top features Remote Desktop Sharing. Multiple Remote Desktop Sharing sessions may be configured in the 'Remote Sharing' tab.you can use VPN nmd vpn for pc how to use. By the way, you can add, ,,,.,,. advanced threat prevention, to simplify security administration, network nmd vpn for pc how to use security, check Point endpoint security solutions include data security, forensics and remote access VPN for complete endpoint protection. Nmd vpn for pc how to use call the Help Desk @ 978-HELP. Installing UTOR vpn Having trouble installing nmd vpn for pc how to use UTOR vpn?the trial version nmd vpn for pc how to use displays the information window. On start, can Proxifier run in background as Windows Service? What are the limitations of the trial version? But it stops working after 31 days from the first start. 
The trial version does not have any limitations,sSL VPN (WebVPN)) nmd vpn for pc how to use cisco ASA. excellent proxy switcher add -on Category: Encryption Version: (Mozilla Firefox )) Works nmd vpn for pc how to use under: Windows 8.1 / Windows NT / Windows 2000 / Windows 7 / Windows 2003 / Windows Vista / Windows XP / Windows 8.which in my opinion is a great nmd vpn for pc how to use thing to have. There are several ways to set up a VPN. In this article I will show you how to do it on a DD-WRT router,using the CLI. While the configuration of the web-based manager uses a point-and-click method, the CLI requires typing commands or uploading nmd vpn for pc how to use batches of commands from a text file, the command line interface (CLI)) is an alternative configuration tool to the web-based manager.!. Bookerly,.,,.,. you probably didn't give much thought to the screen in nmd vpn for pc how to use front of you as you made your call. If you began your video conferencing experience as a social user or in a small business, import and export functions nmd vpn for pc how to use are available both through the GUI or through direct command line options. Secured import and export functions To allow IT Managers to deploy VPN Configurations securely, ).,.article ID nmd vpn for pc how to use - Article Title.in place of PPTP, apple has publicly announced theyre moving from warning folks about PPTP to removing PPTP support altogether from Apples built-in VPN client. As part nmd vpn for pc how to use of preparing for the release of macOS Sierra and iOS 10, mail servers or groupware in your office as if you are sitting just in front of your desk in the office. In your office PC, azure Cloud bangladesh vpn server free relay server from anywhere, and be able to access any shared folders,
http://babyonboard.in/baby-massage-5-best-oils/nmd-vpn-for-pc-how-to-use.html
I am relatively new to Python in Mobu and am trying to change attributes of a certain camera via Python, specifically the Focal Length and toggling items such as Title Safe and the grid. What I can't figure out is how to select a camera based on its name and then proceed to change its attributes. Any suggestions? Thanks!

Hi, here is a quick example of how to get access to the cameras in your Mobu scene:

from pyfbsdk import *

# we are getting an object instance of the whole scene
myScene = FBSystem().Scene

# we know that myScene.Cameras is a list of objects, the cameras.
# Therefore we iterate through this list by every object and print out the name
for obj in myScene.Cameras:
    print obj.Name

This is a very simple way to get any information about your scene. Look up "FBScene" in the Mobu SDK Help (Mobu-Menu->Help->Motionbuilder SDK Help). There you can find any other attributes or functions this class holds or can execute if it is instanced as an object we have access to via Python. If you look for FBCamera (the camera object class) you will find a huge amount of camera attributes you can reach, e.g. FocalLength. :)

If we add this knowledge to our little script:

from pyfbsdk import *

myScene = FBSystem().Scene

for obj in myScene.Cameras:
    # prints out the name
    print obj.Name
    # prints out the FocalLength
    print obj.FocalLength

That's it for the start. :)

Cheers,
Chris
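The original question (pick a camera by name, then set its attributes) can be answered with the same pieces. The helper below is my own sketch: the lookup logic is plain Python, while the commented usage lines only run inside MotionBuilder, since pyfbsdk is its embedded API; the camera name "MyShotCam" is made up.

```python
def find_by_name(objects, name):
    """Return the first object whose .Name matches, or None.

    Works on any iterable of objects exposing a .Name attribute,
    which is what myScene.Cameras provides in MotionBuilder.
    """
    for obj in objects:
        if obj.Name == name:
            return obj
    return None

# Inside MotionBuilder it would be used like this (illustrative):
#
#   from pyfbsdk import FBSystem
#   cam = find_by_name(FBSystem().Scene.Cameras, "MyShotCam")
#   if cam is not None:
#       cam.FocalLength = 35.0
```

Attribute names such as FocalLength come straight from the FBCamera page of the SDK help mentioned in the answer above.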
http://area.autodesk.com/forum/autodesk-motionbuilder/python/change-camera-attributes-via-python/
Processing Open XML on a server using PowerShell is a powerful and compelling scenario. The PowerTools for Open XML are a set of sample PowerShell cmdlets and source code that demonstrate an approach for creating and modifying Open XML documents using PowerShell.

This blog is inactive. New blog: EricWhite.com/blog

The following screen-cast shows how to install, build, and use PowerTools for Open XML. The script files and source documents shown in the screen cast are attached to this page. You can download the PowerTools at.
https://blogs.msdn.microsoft.com/ericwhite/2008/06/10/powertools-screen-casts/
Flamegraphs are great. They show you when you've been bad and made the CPU cry. Here is a flamegraph built with React & D3. It shows some stack data I stole from the internet. You can try it out here, and see the code on GitHub. Consider it a work in progress. Releasing as open source once the animation works smoothly :)

Here's how it works 👇

You have two components:

- <Flamegraph>, which recursively renders the data tree
- <FlameRect>, which renders a particular rectangle and its label

Flamegraph

The <Flamegraph> render method returns:

(
  <g transform={`translate(${x}, ${y})`}>
    {data.map((d, i) => {
      const start = data.slice(0, i).reduce((sum, d) => sum + d.value, 0);
      return (
        <React.Fragment key={`${level}-${d.name}`}>
          <FlameRect
            x={xScale(start)}
            y={0}
            width={xScale(d.value)}
            height={RowHeight}
            name={d.name}
          />
          {d.children && (
            <Flamegraph
              data={d.children}
              x={xScale(start)}
              y={RowHeight}
              width={xScale(d.value)}
            />
          )}
        </React.Fragment>
      );
    })}
  </g>
);

Our render method takes a bunch of params out of props, creates a linear D3 scale to make calculations easier, then renders an SVG grouping element. Inside that element, we loop through the data, and for each entry, we create a new <React.Fragment>. The fragment contains a <FlameRect>, which represents the current datapoint, and a <Flamegraph>, which renders all the child nodes.

We decide each element's x position based on the sum of all node values up to the current one. And we make sure the child <Flamegraph> uses the same width as the current node. This creates the neat stacking effect.
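A small note on that x-position calculation: slice + reduce recomputes the prefix sum from scratch for every child, which is quadratic in the number of siblings. Here is a one-pass sketch of the same offsets (the function name is mine):

```javascript
// Compute each sibling's starting offset as a running sum of the
// values before it; offsets[i] equals what the render method's
// `start` would be for data[i].
function startOffsets(data) {
  const offsets = [];
  let sum = 0;
  for (const d of data) {
    offsets.push(sum);
    sum += d.value;
  }
  return offsets;
}
```

For siblings with values 3, 2 and 5 the offsets come out as 0, 3 and 5, which the component would then feed through xScale exactly as before.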
It looks like this:

(
  <g
    transform={`translate(${x}, ${y})`}
    style={{ cursor: "pointer" }}
    onClick={this.onClick}
  >
    <rect
      x={0}
      y={0}
      width={width}
      height={height}
      style={{ stroke: "white", fill: color }}
    />
    {!hideLabel && (
      <text x={5} y={13} style={{ fontSize: "12px" }} ref={this.labelRefCallback}>
        {name}
      </text>
    )}
  </g>
);

We render a grouping element that contains a <rect> and a <text>. The rect gets sizing and color information, and the label gets a ref callback and some text. We use the ref callback to dynamically detect the size of our label and hide it if necessary. That happens in labelRefCallback. onClick, we flip the selected state.

And well, that's it. You can think of this component as a basic toggle component. Uses all the same logic, just renders as a colorful rectangle instead of a button.

Fin

That's the basic <Flamegraph> component. Uses recursion to render a tree data structure and lets you highlight individual elements.

Where it gets tricky is adding animation that lets users explore their data. Turns out deeply nested React elements are hard to animate smoothly. The worst part is how long it takes before React even propagates prop updates through the tree before the animation even starts. Once the animation is running, it's smooth as silk. But getting it started, ho boy. This warrants further research. I'll be back :)

PS: version 0.1.0 is in fact on npm if you want to️
https://swizec.com/blog/tiny-react-and-d3-flamegraph-tutorial/
Hi. Lots of (Brunei) data here: Articles explaining what the portal is about: 1 and 2. Is it Public Domain, and any and all of it can be used for OSM? Need permission first from the Government? I am really not sure about these things so please help me understand them. Thanks! asked 03 Jul '16, 07:40 raito 302●7●9●19 accept rate: 50% edited 06 Jul '16, 07:41 joost schouppe 3.3k●22●46●86 Also note the "T&C" you have to accept before you open the portal. Part of it is: All the information provided on this website is provided on an "as is" and "as available" basis and you agree that you use such information entirely at your own risk. What about this use? I am not a lawyer, but... That just limits the provider's liability. It does not grant you any rights. For once, simple questions and answers: no & no. As long as these are in force, this is completely taboo for OpenStreetMap. answered 03 Jul '16, 09:05 SimonPoole ♦ 41.2k●13●301●653 accept rate: 19% edited 03 Jul '16, 09:06 OK. So, the portal is for viewing purposes only? Like using Google maps? You'll need to ask the Government of Brunei what they think their web portal is for. It's clear that you're not allowed to use it in any way (even just to "check things") prior to adding data to OpenStreetMap, just like you can't with Google. @raito according to the T&Cs you are likely not even allowed to look at it... I'm not sure I understand why you say we can't even look at it. They put the portal up and made it accessible by anyone. They also let reporters write about it in the papers, talking about how it is going to be useful for government agencies, private sectors, and the general public. That is so contradictory. Oh, don't get the wrong idea that I view Google maps to add data to OSM.
I was saying that this portal is only for ordinary personal use, like the ordinary personal use of Google maps. @raito government administrations screwing up the legal side of things tends to be the norm, not the exception. The T&Cs are quite clear; you will need to approach the survey department and point out the contradiction. From the newspaper articles it is not really clear if the information is actually intended to be open data, or at least something in that direction. CONDITIONS OF USE (Link below map) By continuing to use this site, you are bound by the provisions of Section 5 of the Official Secrets Act. The Information shall not be taken out from Brunei Darussalam without the Surveyor General's prior approval. answered 04 Jul '16, 13:05 jot 496●7●10 accept rate: 9% Ahmmm, why are you reposting what I already linked to in my answer yesterday?
https://help.openstreetmap.org/questions/50579/can-i-add-this-data-to-osm?sort=active
I am trying to read a file into an array and set the min and max values. At this point in my program, I am trying to just read it into an array and then display that array. I still have more to do in the program, but I am just trying to make sure that I am at least reading the array correctly. This is what I have so far; any guidance would be helpful!

#include <iostream>
#include <fstream>
#include <string>
#include <cmath>
#include <iomanip>

using namespace std;

int main()
{
    // variables
    string filename;
    ifstream input;
    double angle, coeff;

    // prompt user for file
    cout << "Enter input file name: ";
    cin >> filename;

    // Open file and read the first data point.
    input.open(filename.c_str());

    // will the file open? send an error if it doesn't open
    if (!input)
    {
        cout << "Error opening input file\n";
    }
    else
    {
        // reading data entries
        input >> angle >> coeff;
    }

    for (int i = 0; i < angle; i++)
    {
        for (int j = 0; j < coeff; j++)
        {
            cin >> input[i][j];
            cout << input[i][j];
        }
    }

    system("pause");
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/274795/reading-a-file-into-an-array
iApeiron @ccc Here is the project: Savethemblobs_app What improvements would you make? Please feel free to fork. iApeiron Clearly, you are a Python master! iApeiron ? iApeiron @ccc Thanks. I still get the traceback error "AttributeError: 'tuple' object has no attribute 'remove'" iApeiron @JonB Thanks. I really like this dialogue box! If I can get the responses to act as args to control my second script, then I have it made! iApeiron ! iApeiron @omz I've got this working based on the ex13 script from sys import argv script2 = argv first_name = raw_input("What is your first name? ") middle_name = raw_input("What is your middle name? ") last_name = raw_input("What is your last name? ") print "Your full name is %s %s %s." % (first_name, middle_name, last_name) But I have not got runpy to cooperate. And this doesn't use the 'raw_input' which I need. Do you have a solution? Thank you for your help iApeiron @omz Thank you. I think I have a start now, thanks to coming across this ex13-raw_input.py iApeiron Thank you. This is a start. But I also need some way to append user input as arguments for script2. Essentially, a GUI wrapper for a pre-existing commandline .py iApeiron Can anybody point me in the right direction? I'm hoping to find a basic template or example. I need a script1.py to run "script2.py arg1 arg2 arg3" I'm new to Pythonista, and not sure where to start. Thank you!
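Since the thread never arrives at a working runpy call, here is one generic way script1.py can run script2.py with arguments: set sys.argv before calling runpy.run_path(). This sketch is not from the posts above; the script name and argument values are placeholders, and it writes a tiny stand-in script so the example is self-contained:

```python
import os
import runpy
import sys
import tempfile

# Write a tiny stand-in for "script2.py" (a hypothetical name from the thread).
code = "import sys\nresult = sys.argv[1:]\nprint(result)\n"
path = os.path.join(tempfile.mkdtemp(), "script2.py")
with open(path, "w") as f:
    f.write(code)

# Simulate: python script2.py arg1 arg2 arg3
sys.argv = [path, "arg1", "arg2", "arg3"]

# run_path executes the file as __main__ and returns its module globals,
# so script1.py can also inspect what script2.py computed.
globs = runpy.run_path(path, run_name="__main__")
print(globs["result"])  # ['arg1', 'arg2', 'arg3']
```

The values fed into sys.argv could just as well come from a dialog or raw_input prompts, which is what the posts above are building toward.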
https://forum.omz-software.com/user/iapeiron/posts
NAME
shm_open, shm_unlink - shared memory object operations

LIBRARY
Standard C Library (libc, -lc)

SYNOPSIS
#include <sys/types.h>
#include <sys/mman.h>

int shm_open(const char *path, int flags, mode_t mode);
int shm_unlink(const char *path);

DESCRIPTION
The shm_open() system call opens (or optionally creates) a POSIX shared memory object named path. The flags argument contains a subset of the flags used by open(2). An access mode of either O_RDONLY or O_RDWR must be included in flags. The optional flags O_CREAT, O_EXCL, and O_TRUNC may also be specified. If O_CREAT is specified, then a new shared memory object named path will be created if it does not exist; the object is created with mode mode, subject to the process' umask value. If both O_CREAT and O_EXCL are specified and an object named path already exists, the call fails.

As a FreeBSD extension, the constant SHM_ANON may be used for the path argument to shm_open(). In this case, an anonymous, unnamed shared memory object is created. Since the object has no name, it cannot be removed via a subsequent call to shm_unlink(). Instead, the shared memory object will be garbage collected when the last reference to the shared memory object is removed. The shared memory object may be shared with other processes by sharing the file descriptor via fork(2) or sendmsg(2). Attempting to open an anonymous shared memory object with O_RDONLY will fail with EINVAL. All other flags are ignored.

The path argument must begin with a slash (‘/’) character. In FreeBSD, read(2) and write(2) on a shared memory object will fail with EOPNOTSUPP, and neither shared memory objects nor their contents persist across reboots.

ERRORS
The following errors are defined for shm_open():

[EINVAL] A flag other than O_RDONLY, O_RDWR, O_CREAT, O_EXCL, or O_TRUNC was included in flags.
[EMFILE] The process has already reached its limit for open file descriptors.
[ENFILE] The system file table is full.
[EINVAL] O_RDONLY was specified while creating an anonymous shared memory object via SHM_ANON.
[EFAULT] The path argument points outside the process' allocated address space.
[ENAMETOOLONG] The entire pathname exceeded 1023 characters.
[EINVAL] The path does not begin with a slash (‘/’) character.
[ENOENT] O_CREAT is specified and the named shared memory object does not exist.
[EEXIST] O_CREAT and O_EXCL are specified and the named shared memory object does exist.
[EACCES] The required permissions (for reading or reading and writing) are denied.

The following errors are defined for shm_unlink():

[EFAULT] The path argument points outside the process' allocated address space.
[ENAMETOOLONG] The entire pathname exceeded 1023 characters.
[ENOENT] The named shared memory object does not exist.
[EACCES] The required permissions are denied. shm_unlink() requires write permission to the shared memory object.

SEE ALSO
close(2), ftruncate(2), fstat(2), mmap(2), munmap(2)

STANDARDS
The shm_open() and shm_unlink() functions are believed to conform to IEEE Std 1003.1b-1993 (“POSIX.1”).

HISTORY
The shm_open() and shm_unlink() functions first appeared in FreeBSD 4.3. The functions were reimplemented as system calls using shared memory objects directly rather than files in FreeBSD 7.0.

AUTHORS
Garrett A. Wollman 〈wollman@FreeBSD.org〉 (C library support and this manual page)
Matthew Dillon 〈dillon@FreeBSD.org〉 (MAP_NOSYNC)
http://manpages.ubuntu.com/manpages/lucid/man2/shm_open.2freebsd.html
You need what are called 'command line arguments'. Dive Into Python has a great section explaining command line arguments. But in essence you could do something like this:

import sys

if sys.argv:
    print "The first command line argument I received was:", sys.argv[0]

Then we would run our code like this:

python filename.py ThisIsTheArgument

And the result would print the following out:

The first command line argument I received was: ThisIsTheArgument

So I'm sure you can see that instead of writing 'ThisIsTheArgument' you could put a path to a file there instead and then do things to it in your program. Anyway, have a look at the link above, it'll help you out :)

Edited 6 Years Ago by Paul Thompson: n/a

Good answer, but permit me to suggest a minor change to the above example.

#!/usr/bin/env python
# argecho.py
import sys

if sys.argv:
    print "The name of this program is:", sys.argv[0]
    print "The first command line argument I received is:", sys.argv[1]
    print "The second command line argument I received is:", sys.argv[2]

Ok, but I need to do this for an arbitrary number of file names. Is there any way to completely remove the first element in the array and make the file path the first element?

Create an empty list and append what you want from sys.argv into your list.

import sys

# there is a commandline
if len(sys.argv) > 1:
    arglist = []
    # sys.argv[0] is the program filename, slice it off
    for arg in sys.argv[1:]:
        # Slice of sys.argv starting at sys.argv[1] up to and including the end
        arglist.append(arg)
else:
    print "usage %s arg1 arg2 [arg3 ...]" % sys.argv[0]
    sys.exit(1)

# if the arguments were This.txt That.txt Other.log
# arglist should be ['This.txt', 'That.txt', 'Other.log']
print(arglist)
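For completeness (this was not part of the original thread), the standard library's argparse module handles an arbitrary number of file names with less boilerplate; a minimal sketch:

```python
import argparse

# Build a parser that accepts one or more positional file paths.
parser = argparse.ArgumentParser(description="Process some files.")
parser.add_argument("files", nargs="+", help="one or more file paths")

# Passing a list here simulates the command line; call parse_args()
# with no arguments to read sys.argv[1:] for real.
args = parser.parse_args(["This.txt", "That.txt", "Other.log"])
print(args.files)  # ['This.txt', 'That.txt', 'Other.log']
```

argparse also generates the usage message automatically, so the manual `usage %s ...` print in the snippet above is not needed.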
https://www.daniweb.com/programming/software-development/threads/259422/specify-file-path-from-command-line
CC-MAIN-2016-50
refinedweb
317
77.33
hash.h

/* +++Date last modified: 05-Jul-1997 */

#ifndef HASH__H
#define HASH__H

#include <stddef.h>           /* For size_t */

/*
** A hash table consists of an array of these buckets. Each bucket
** holds a copy of the key, a pointer to the data associated with the
** key, and a pointer to the next bucket that collided with this one,
** if there was one.
*/
typedef struct bucket {
    char *key;
    void *data;
    struct bucket *next;
} bucket;

/*
** This is what you actually declare. It contains a pointer to the
** table of buckets, and the size of that table.
*/
typedef struct hash_table {
    bucket **table;
    size_t size;
} hash_table;

/*
** This is used to construct the table. If it doesn't succeed, it sets
** the table's size to 0, and the pointer to the table to NULL.
*/
hash_table *construct_table(hash_table *table, size_t size);

/*
** Inserts a pointer to 'data' in the table, with a copy of 'key' as its
** key. Note that this makes a copy of the key, but NOT of the
** associated data.
*/
void *insert(char *key, void *data, struct hash_table *table);

/*
** Returns a pointer to the data associated with a key. If the key has
** not been inserted in the table, returns NULL.
*/
void *lookup(char *key, struct hash_table *table);

/*
** Deletes an entry from the table. Returns a pointer to the data that
** was associated with the key so the calling code can dispose of it
** properly.
*/
void *del(char *key, struct hash_table *table);

/*
** Goes through a hash table and calls the function passed to it
** for each node that has been inserted. The function is passed
** a pointer to the key, and a pointer to the data associated
** with it.
*/
void enumerate(struct hash_table *table, void (*func)(char *, void *));

/*
** Frees a hash table. For each node, it calls the function whose
** address it was passed, with a pointer to the data associated with
** the node, so the calling code can free the data.
*/
void free_table(hash_table *table, void (*func)(void *));

#endif /* HASH__H */
http://read.pudn.com/downloads/sourcecode/math/1609/hash.h__.htm
Description:
------------
I built ibm_db2 for i5OS on AIX. However, the make command produced two declaration errors.

- build error 1
.../ibm_db2.c: In function '_php_db2_connect_helper':
/home/ushida5035/temp/ibm_db2-1.9.2/ibm_db2-1.9.2/ibm_db2.c:2109: error: 'conn_handle' has no member named 'c_i5_allow_commit'

- build error 2
.../ibm_db2.c: In function '_ibm_db_chaining_flag':
/home/ushida5035/temp/ibm_db2-1.9.2/ibm_db2-1.9.2/ibm_db2.c:6761: error: 'SQL_ATTR_CHAINING_BEGIN' undeclared (first use in this function)

I added the following two changes and solved this. I hope this information is helpful.

Reproduce code:
---------------
For build error 1, I added the following to the "ibm_db2.c" file at line 114:

long c_i5_allow_commit;

For build error 2, I added the following to the "php_ibm_db2.h" file at line 302:

#ifdef PASE /* i5/OS ease of use turn off/on */
#ifndef SQL_ATTR_CHAINING_BEGIN
#define SQL_ATTR_CHAINING_BEGIN 2464
#define SQL_ATTR_CHAINING_END 2465
#define SQL_IS_POINTER -4
#endif
#endif /* PASE */

I corrected the summary. The issue has been fixed in ibm_db2-1.9.3.
https://bugs.php.net/bug.php?id=59973
) #Unit testing & web2py Officially, [web2py]() [recommends]() [using]() [doctests]() [to]() [test]() [your]() [controllers](). Doctest, however, [is not always the ideal way]() to test your code. Doctests are especially inept at handling database-driven controllers, where it's important to return the database to a known state before running each test. After a [long discussion]() on the mailing list this article was created to provide a clear explanation of how to do unit testing in web2py using Python's `unittest` module. A thorough introduction to the `unittest` module can be found [here](). ## How to write unit tests for web2py projects Let's look at a sample unit test script, then break it down to understand what it's doing. The purpose of this article is to demonstrate how to use Python's `unittest` module with web2py projects. Unlike other such examples on the Internet, this one shows how to test controllers that interact with a database. ### Example test suite: test.py import unittest from gluon.globals import Request execfile("applications/api/controllers/10.py", globals()) db(db.game.id>0).delete() # Clear the database db.commit() class TestListActiveGames(unittest.TestCase): def setUp(self): request = Request() # Use a clean Request object def testListActiveGames(self): # Set variables for the test function request.post_vars["game_id"] = 1 request.post_vars["username"] = "spiffytech" resp = list_active_games() db.commit() self.assertEquals(0, len(resp["games"])) suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(TestListActiveGames)) unittest.TextTestRunner(verbosity=2).run(suite) ### How to run the test script Before we continue, you should know how to execute this script: `python web2py.py -S api -M -R applications/api/tests/test.py` Fill in the name of your own application after `-S`, and the location of your test script. 
We use web2py.py to call our script because it sets up the operating environment for us; it brings in our database and gives us all of the variables that are normally passed into the controller, like `request`. ### How it works Let's break down the above example: import unittest from gluon.globals import Request # So we can reset the request for each test The first line, predictably, imports the `unittest` module. The second line imports web2py's Request object. We want this to be available so we can use a fresh, clean, unmodified Request object in every test. execfile("applications/api/controllers/10.py", globals()) Just like in the web2py shell, unit test scripts don't automatically have access to your controllers. This line executes your controller file, bringing all of the function declarations into the local namespace. Passing `globals()` to the `execfile()` command lets your controllers see your database. db(db.game.id>0).delete() # Clear the database db.commit() Unit testing with a database is only useful if the database looks the same when your tests run. These lines empty the database. **In your unit tests, you must run `db.commit()` in order for any `db.update()`, `db.insert()`, or `db.delete()` commands to take effect**. web2py automatically runs `db.commit()` when a controller's function finishes, which is why you don't usually have to do it yourself. Not so in external scripts. You must also run `db.commit()` after calling any controller function that changes the database. There is no harm in calling `db.commit()` after all controller functions, just to be safe. class TestListActiveGames(unittest.TestCase): def setUp(self): Unit test suites are composed of classes whose names start with "Test". Each class has its own `setUp()` function which is run before each test. You can use the `setUp()` function to set up any variables or conditions you need for every test in the class.
request = Request() # Use a clean Request object It's important to clean up your mess between tests. In our simple example, the only thing in the operating environment we're changing is the global `request` object each controller function sees and works with. def testListActiveGames(self): # Set variables for the test function The `unittest` module will run any function whose name starts with 'test'. Here, we've given our test function a name that describes what it's testing. request.post_vars["game_id"] = 1 request.post_vars["username"] = "spiffytech" These lines set up the variables needed by the function we're testing. `post_vars` is a dictionary that, in your controller, contains the values a user's browsers sent via POST. The controller function we're testing expects to see POST values, so we set them up. resp = list_active_games() db.commit() self.assertEquals(0, len(resp["games"])) Now we actually test something! `list_active_games()` is a function in my controller. The function returns a dict of values, just like most web2py controller functions. I've captured the dict in a variable named `resp`, short for "response". It doesn't matter what you name the variable, as long as the name is meaningful. The second line commits any changes to the database made by `list_active_games()`. The third line represents the heart of unit testing: making sure the output from a function is what we expect it to be. Since our test class, `TestListActiveGames`, is derived from `unittest.TestCase` the `self` object has a number of functions for testing values. Here, we use the basic `assertEquals()` function which, just like it sounds, checks that two values match. Unlike Python's regular `assert()` function, `assertEquals()` (and other `unittest` assert functions) prints useful information to the command line when assertions fail. 
suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(TestListActiveGames)) unittest.TextTestRunner(verbosity=2).run(suite) These last few lines get the unit tests started. We define a group (or "suite") of unit tests, then add to the suite all of our test classes. Here, we only have one test class, but if we had more, you'd simply repeat the second line with each class's name. If you've used the `unittest` module before and wonder why we're not simply calling `unittest.main()`, see the _Background_ section below. # One step further: Using a special testing database The big problem with the above example is that it works on the database you're using to develop your application. Anything your tests do to the database (including the big fat "delete all" near the top of the script) will affect your development site. To fix this, we need to tell web2py to create a copy of our database that we can safely use for testing. It's pretty simple: _Append this code to the bottom of db.py_ # Create a test database that's laid out just like the "real" database import copy test_db = DAL('sqlite://testing.sqlite') # Name and location of the test DB file for tablename in db.tables: # Copy tables! table_copy = [copy.copy(f) for f in db[tablename]] test_db.define_table(tablename, *table_copy) _Modify test.py to use the test DB_ ... from gluon.globals import Request db = test_db # Rename the test database so that functions will use it instead of the real database ... # Background: for people who already use the `unittest` module ## How `unittest` normally works Unit tests are normally stored in a standalone Python script. The script imports the `unittest` module, some tests are defined, and a couple lines at the bottom of the file run the tests when the script is executed.
Here's an example: import unittest import myprogram # Import the code you want these unit tests to test class TestStuff(unittest.TestCase): def setUp(self): self.something = 5 def testSomeFunction(self): result = myprogram.somefunction(self.something) self.assertEquals(result, 10) result = myprogram.somefunction(result) self.assertEquals(result, 20) # Run the unit tests if the script is being executed from the command line if __name__ == "__main__": unittest.main() This script would be called from the command line like so: `python my_tests.py` ## Why this doesn't work with web2py > Explicit is better than implicit. > Flat is better than nested. > Namespaces are one honking great idea -- let's do more of those! > _- Excerpted from the Zen of Python_ web2py forgoes some staples of Pythonic programming philosophy in favor of being easy to teach and easy to use. Some people think this results in "magic". Rather than treating each .py file as a module which is imported (a la Django), web2py sets up a ready-to-use environment behind the scenes before giving control over to the web developer. The developer sees their database and built-in web2py functions magically available in the controller, and rejoices. This is convenient when developing web applications, but causes problems for external scripts. The normal command to run unit tests, `unittest.main()`, gets confused because it's run in the scope of web2py.py, rather than in the scope of a standalone script. `unittest.main()` also gets confused because web2py.py passes all of the command line arguments to test.py, and the `unittest` module doesn't know what to do with web2py.py's command line flags. We have to do a few things differently to get unit testing working with web2py: 1. Set up the web2py operating environment 2. Bring in our controllers 3. Change the way test suites are executed Steps 1 and 2 are mandatory, and step 3 is a byproduct of the way I chose to solve 1 and 2.
[AlterEgo 213]() shows a different way to set up the environment for unit tests than the way I describe. In AlterEgo 213, the test script handles the setup of the whole web2py environment. However, this clutters up the code with lots of stuff that you can delegate to web2py.py instead. The tradeoff is that you lose the ability to specify what test to run from the command line. *Note that AlterEgo 213 is incomplete. Several changes must be made to it in order for your controllers to see your database. This complicates the code more than I cared for.* © 2008-2010 by Massimo Di Pierro - All rights reserved - Powered by web2py - design derived from a theme by the earlybird The content of this book is released under the Artistic License 2.0 - Modified content cannot be reproduced.
http://web2py.com/AlterEgo/default/edit/260
CSGraph stands for Compressed Sparse Graph, which focuses on fast graph algorithms based on sparse matrix representations. To begin with, let us understand what a sparse graph is and how it helps in graph representations. A graph is just a collection of nodes, which have links between them. Graphs can represent nearly anything − social network connections, where each node is a person and is connected to acquaintances; images, where each node is a pixel and is connected to neighbouring pixels; points in a high-dimensional distribution, where each node is connected to its nearest neighbours; and practically anything else you can imagine. One very efficient way to represent graph data is in a sparse matrix: let us call it G. The matrix G is of size N x N, and G[i, j] gives the value of the connection between node ‘i' and node ‘j’. A sparse graph contains mostly zeros − that is, most nodes have only a few connections. This property turns out to be true in most cases of interest. The creation of the sparse graph submodule was motivated by several algorithms used in scikit-learn that included the following −

Isomap − A manifold learning algorithm, which requires finding the shortest paths in a graph.

Hierarchical clustering − A clustering algorithm based on a minimum spanning tree.

Spectral Decomposition − A projection algorithm based on sparse graph laplacians.

As a concrete example, imagine that we would like to represent the following undirected graph −

This graph has three nodes, where node 0 and 1 are connected by an edge of weight 2, and nodes 0 and 2 are connected by an edge of weight 1. We can construct the dense, masked and sparse representations as shown in the following example.

import numpy as np
from scipy.sparse import csr_matrix

# dense representation
G_dense = np.array([
   [0, 2, 1],
   [2, 0, 0],
   [1, 0, 0]
])

# masked representation
G_masked = np.ma.masked_values(G_dense, 0)

# sparse representation
G_sparse = csr_matrix(G_dense)
print G_sparse.data

The above program will generate the following output.

array([2, 1, 2, 1])

This is identical to the previous graph, except nodes 0 and 2 are connected by an edge of zero weight.
In this case, the dense representation above leads to ambiguities − how can non-edges be represented, if zero is a meaningful value. In this case, either a masked or a sparse representation must be used to eliminate the ambiguity. Let us consider the following example. from scipy.sparse.csgraph import csgraph_from_dense G2_data = np.array ([ [np.inf, 2, 0 ], [2, np.inf, np.inf], [0, np.inf, np.inf] ]) G2_sparse = csgraph_from_dense(G2_data, null_value=np.inf) print G2_sparse.data The above program will generate the following output. array([ 2., 0., 2., 0.])
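To illustrate the "fast graph algorithms" the module exists for (this example is not from the original page), here is a sketch running Dijkstra's shortest-path algorithm on the same three-node graph from above:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# Same three-node graph as above: edge 0-1 has weight 2, edge 0-2 has weight 1.
G_dense = np.array([
    [0, 2, 1],
    [2, 0, 0],
    [1, 0, 0]
])
G_sparse = csr_matrix(G_dense)

# Shortest-path distances from node 0 to every node.
dist = dijkstra(G_sparse, directed=False, indices=0)
print(dist)  # [0. 2. 1.]
```

Note that node 1 reaches node 2 only through node 0, so its distance to node 2 would be 2 + 1 = 3.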
https://www.tutorialspoint.com/python_data_science/python_graph_data.htm
Animate allowed to call draw()? I'm still getting into some of the more advanced features of Pythonista... and I was wondering what is legal/illegal in the ui.animate() function. How does it work, exactly? Under the hood, is it creating an Objective-C block to call in some other thread from the UI and main threads? If so, is it legal to call draw() from within a method on a widget which is called from animate()? So if I have a setup like:

class MyWidget(ui.View):
    ...
    def animatedDrawAction(self, sender):
        def _draw():
            # ...set some internal state while animated...
            self.draw()  # redraw to reflect that new state
        ui.animate(_draw, 1.0)

is that allowed? Or is calling draw() going to do bad things if it isn't within the main thread? Should I instead call set_needs_display() within the animation and the UI thread will hopefully update as the animation runs? Or am I completely missing the point here? @shinyformica, Scripter also uses update() under the hood, and was created to avoid writing the same kind of boilerplate animation-controlling logic there over and over again. @shinyformica, as to your example, Scripter contains a pulse effect, and supports cancelling a running animation if you want to restart it (but alas, no simple "restart" option for now). @shinyformica Not sure if that helps, a little dirty script to perform animation in an independent thread for each control. You can even tap both with two fingers at the same moment.
Sorry if it's useless.

import ui
import threading
import time

class my_thread(threading.Thread):
    def __init__(self, ctrl):
        threading.Thread.__init__(self)
        self.ctrl = ctrl
        self.delay = 0.05
        self.delta = 0.0
    def run(self):
        self.on = True
        while self.delay > 0:
            time.sleep(self.delay)
            if self.on:
                self.ctrl.background_color = (1, 0, 0)
            else:
                self.ctrl.background_color = 'white'
            self.on = not self.on
            self.delay = self.delay + self.delta  # same or slower
            if self.delay >= 1:
                self.delay = 0
                self.ctrl.background_color = 'white'

class my_ctrl(ui.View):
    def __init__(self, frame=None, name=None):
        self.frame = frame
        self.name = name
        self.ctrl = ui.Label(frame=(0, 0, self.width, self.height))
        self.ctrl.background_color = 'lightgray'
        self.ctrl.text = 'tap here'
        self.add_subview(self.ctrl)
        self.server_thread = my_thread(self.ctrl)
        self.server_thread.lock = threading.Lock()
    def touch_began(self, touch):
        self.server_thread.lock.acquire()
        if not self.server_thread.is_alive():
            self.server_thread.start()
        else:
            self.server_thread.delay = 0.05
            self.server_thread.delta = 0.0
        self.server_thread.lock.release()
    def touch_ended(self, touch):
        self.server_thread.lock.acquire()
        self.server_thread.delta = 0.05
        self.server_thread.lock.release()

v = ui.View()
ctrl1 = my_ctrl(frame=(10, 10, 100, 30), name='ctrl1')
v.add_subview(ctrl1)
ctrl2 = my_ctrl(frame=(10, 190, 100, 30), name='ctrl2')
v.add_subview(ctrl2)
v.frame = (0, 0, 300, 400)
v.present('sheet')

Thanks for the pointers for Scripter, I have that code open for reference :) @mikael. @cvp this is cool, though for the moment I'm just having the control's update() with update_interval set to non-zero do what I need... which is working well enough. In the above code example, I notice you are never calling set_needs_display() in either the thread changing the color or the main code. Does that mean that under the hood, the actual modification of certain attributes automatically causes the control to get redrawn? Is there a list of which attrs will cause that?
Or is it basically any attribute which might affect appearance? (color, tint, font, etc.) @shinyformica the doc says View.set_needs_display() Marks the view as needing to be redrawn. This usually only makes sense for custom View subclasses that implement a draw method. I never use it if I modify ui elements in another thread Good point @cvp... neither do I, I just expect changing an attribute which affects appearance to cause the control to update. My own custom Views call self.set_needs_display() in whatever property setters or methods change appearance. (and look at you, being careful with your thread locks... even in a write-only/read-only case! :)) You only need to call it if you implement draw. Other display attributes get updated automatically. @jonb totally right, I was only really thinking about my own custom view, I was aware that changing attributes on regular controls caused them to update. For my purposes, I was looking for the "right" way to have an animated custom view react to user interaction. a) calling draw() directly is never done, since it has no meaning outside a scheduled drawing context b) calling set_needs_display() is allowed from anywhere, since the actual drawing will be scheduled and executed on the UI thread c) there is no "other thread" which can do drawing/updating of UI elements so, basically however I go about it, just make sure my actual draw() is as efficient as possible so the UI never gets bogged down. This is python, of course, so the GIL is still in play, and all threads have to finish their work as quickly as possible, or yield control, in order to not cause trouble. Actually, I should ask: under the hood, there is only one interpreter instance, right? The python code on the UI thread and the python code on the main thread are both being executed by the same interpreter process? There's no magical communication happening between multiple python interpreters which side-steps the GIL as you can with multiprocessing?
There is one interpreter for python 2.7 and one for 3.6, but just one process.

Depending on what you are doing there are ways to keep draw quick: drawing an image may be faster than stroking hundreds of paths. You can render static things to an ImageContext, then later draw that inside draw. For instance omz's sketch example does that every touch_ended.

For framebuffer-type access, see the recent thread which dealt with both some real-time audio generation and IOSurfaceWrapper, enabling some low-level image pixel manipulation and display.
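The single-interpreter point can be illustrated with plain Python, independent of Pythonista's APIs: threads created with threading mutate the very same objects, which is only possible because they share one interpreter process (unlike multiprocessing workers, which operate on copies). This is an illustrative sketch, not code from the thread above:

```python
import threading

# Module-level state shared by all threads: possible only because the
# threads all run inside the same interpreter process.
counter = {"value": 0}
lock = threading.Lock()

def bump(n):
    for _ in range(n):
        with lock:                 # the GIL serializes bytecodes, but
            counter["value"] += 1  # += is not atomic, so lock anyway

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 4000: every thread mutated the same dict
```

Had the workers been separate processes, each would have incremented its own copy of counter and the parent would still see 0.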
https://forum.omz-software.com/topic/5231/animate-allowed-to-call-draw/19
TypeScript Anonymous Functions

Anonymous Functions:
- The TypeScript language has the concept of anonymous functions or methods.
- An anonymous function does not have a specific/particular name.
- Anonymous functions are declared as unnamed functions.
- An anonymous function must be assigned to a variable at its declaration time.
- Anonymous functions can return void or any other type.

Example:

    /* An anonymous function declaration which implicitly has void as its
       return type and one string-type parameter. This unnamed function is
       assigned to a local variable. */
    var calculatorType = function (name: string) {
        // print output
        document.write("Calculator type: " + name + " <br/>");
    };

    /* Multiple anonymous function declarations; each function explicitly has
       void as its return type, two number-type parameters and a different
       name. A function name and its parameter list are collectively called
       the function signature. These functions are assigned to local
       variables. */
    var add = function (v1: number, v2: number): void {
        document.write("Addition: " + (v1 + v2) + " <br/>");
    };
    var substract = function (v1: number, v2: number): void {
        document.write("Substraction: " + (v1 - v2) + " <br/>");
    };
    var multiply = function (v1: number, v2: number): void {
        document.write("Multiplication: " + (v1 * v2) + " <br/>");
    };

    /* An anonymous function which has number as its return type, assigned
       to a local variable. */
    var divide = function (v1: number, v2: number): number {
        return (v1 / v2);
    };

    /* The functions above are called, each through its variable name.
       Notice that two numbers, e.g. add(10, 2), are passed to each
       function; these are called arguments and must match the types of the
       parameters. If they do not match, the TypeScript compiler generates a
       compile-time error. */
    calculatorType("Normal calculator");
    add(10, 2);
    substract(10, 2);
    multiply(10, 2);
    document.write("Division: " + divide(10, 2));
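For context, current TypeScript more often writes such anonymous functions as arrow functions. The following is an illustrative sketch, not part of the original tutorial (it uses plain values and console.log instead of document.write so it also runs outside a browser):

```typescript
// Arrow functions are anonymous too; assigning them to const bindings
// mirrors the var-based declarations in the tutorial above.
const add = (v1: number, v2: number): number => v1 + v2;
const divide = (v1: number, v2: number): number => v1 / v2;

// Anonymous functions are also commonly passed inline as callbacks:
const doubled = [add(10, 2), divide(10, 2)].map((n: number) => n * 2);

console.log(doubled[0], doubled[1]); // 24 10
```

The arrow form keeps the "must be assigned (or passed) to be used" property the tutorial describes, while inferring the return type when the annotation is omitted.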
https://tutorialstown.com/typescript-anonymous-functions/
This is exactly what I said in my previous post would be the critical problem with the closure proposal. So what do we get instead if existing APIs can't be changed? This is where it starts to become terrifying. Yes ladies and gentlemen, we get the methods added to Collections accessible for a static import. I mean, since when did Java become a functional programming language?

Neal presents the example as follows, with the initial Java code being:

    Droid seekDroid(Map<String, Droid> map) {
        for (Map.Entry<String, Droid> entry : map.entrySet()) {
            String droidName = entry.getKey();
            Droid droid = entry.getValue();
            if (droid.isSought()) {
                System.out.printf("'%s' is the droid we seek.%n", droidName);
                return droid;
            }
        }
    }

This is fairly standard stuff, made slightly better by the Java 5 for loop. Now to "optimize" this with the proposed closures we have a new use of the for keyword specifically for looping:

    Droid seekDroid(Map<String, Droid> map) {
        for eachEntry(String droidName, Droid droid : map) {
            if (droid.isSought()) {
                System.out.printf("'%s' is the droid we seek.%n", droidName);
                return droid;
            }
        }
    }

I mean, how is this better? There is hardly any difference in terms of LOC or code verbosity from the previous example. Not only that, whoever thought closures were specially for looping has clearly lost the point. If you have to modify the language specifically to support one use case of closures, that should be a big ugly red flag waving. Clearly supporting backwards compatibility is going to be a huge stumbling block here, and I fear what we will end up with is a botch job.

Now, just for comparison's sake, contrast the above example to the equivalent Groovy code:

    def seekDroid(map) {
        def droid = map.find { droid -> droid.value?.isSought() }
        if (droid) printf("'%s' is the droid we seek.%n", droid.value.name)
        return droid
    }

Enough said really. Languages like Groovy and Ruby have closures at their core and they are not bolted on as an afterthought.
If closures are not embraced this way they will become more of an additional burden for the programmer to learn than something core to the language. Needless to say the horror story doesn't end here, however; let's take a look at the method that provides this feature:

    public interface Block2<K, V, throws E> {
        void invoke(K k, V v) throws E;
    }

    public static <K, V, throws E> void for eachEntry(Map<K, V> map,
                                                      Block2<K, V, E> block) throws E {
        for (Map.Entry<K, V> entry : map.entrySet()) {
            block.invoke(entry.getKey(), entry.getValue());
        }
    }

What can I say? That's about as clear as mud. Generics are bad enough as it is, but that is about as readable as a newspaper delivered in the rain. Implementing this method in Groovy would go something like this:

    static eachEntry(Map map, Closure closure) {
        for (entry in map.entrySet()) {
            closure.call(entry.key, entry.value)
        }
    }

In my opinion, Sun needs to carefully consider what they are doing here and whether it will cause more harm than good. It would be of more value for them to embrace dynamic languages like Ruby (which they are doing with JRuby) and Groovy than to bolt on something that has the bad smell about it that the current proposal does. They should be focusing on more important issues in Java, such as decent desktop integration with Swing (which still fails to look and behave natively), a shared VM on Windows and other such well-documented problems that never seem to get solved. My 2c.

15 comments:

I tend to agree; however I *am* interested in potential JVM changes that make it easier to compile closures in JRuby and Groovy. I think evolving the Java language is unnecessary... don't buy into the C# myth that every new release has to be completely freaking different from the previous one. Java is fine for what it's good for... let's not try to make it something it's not.

"existing APIs won't be able to be changed to accommodate closures." Not true, he mentioned adding methods to Collections to support looping etc.
I already have my own Collections class that has 'all', 'any', and other such algorithms, in Java 5.0. Java might have done alright so far without closures, but it might lose people to more sophisticated languages.

Comparing Java syntax to Ruby and Python is a little unfair, as those are dynamically typed languages (read: can't be refactored). Try comparing it to Haskell instead, a much stronger typed language than even Java is.

What's wrong with static methods? If you want to be able to swap out your static 'Collections.all' implementation, you can, and guess what, closures can help with this too!

    interface All {
        boolean all(Iterable iterable, Test test);
    }

    All all = (Iterable iterable, Test test){ Collections.all(iterable, test); };
    All all2 = some other implementation.

If existing APIs can't be retrofitted to support closures, then autoboxing can do it (as I just demonstrated with All all=something). I think the syntax for the custom loop looks awful, but that's not the fault of closures. That looks like a bit of a fringe idea, maybe Neal is putting a pet thing out there to see whether many people would like it.

Personally, without tail recursion optimisation, I think Iterable is the best that we'll get. Pure functional iteration (recursion for loops) isn't really doable in Java until there is a guarantee that it will be optimised to iteration. Given Neal's example, I'd rather compose functions: return Collections.find(map, isSought);

@Charles Now this I agree with. Hooks into the JVM - YES! Features to the Java language - no way.

@Ricky You hit the nail on the head in your post really. Java is strongly typed and hence maybe it is not suited to closures at all? Closures are being added to C# because they are adding dynamic typing as well. Without this I fear the syntax and user experience will be horrific.

What's wrong with static methods? Well in Java they aren't polymorphic for one, which means you have to use all that horrific generics crap to code them.
And I would hope that we have left the world of functional global methods decades ago.

I agree that the problem of how to use closures with collections is key to whether they should be added to Java. I wrote about extension methods a while back - - and I believe a solution along those lines is an essential complement to closures.

"You hit the nail on the head in your post really. Java is strongly typed and hence maybe it is not suited to closures at all?"

I'd love to take that as a compliment, but I can't. Haskell supports closures, and is much more strongly-typed than Java. Haskell's syntax is certainly 'different'. In some cases it looks a lot cleaner than Java, intuitively, in others it doesn't. Some of the syntax proposed for closures is quite attractive; some isn't. I hope the simplest cases remain attractive, e.g.:

    invokeLater{frame.setVisible(true);}

"What's wrong with static methods? Well in Java they aren't polymorphic for one which means you have to use all that horrific generics crap to code them."

I'm not sure why you would need generics to write a static method; I think I must be missing something here. Of course static methods aren't polymorphic, that's fairly obvious. That shows their simplicity. If you only ever want one implementation of, say, Math.abs, then Math.abs(5) is pretty much the simplest way of doing it. Defining an interface for it and providing a factory is certainly possible, but in that case it gains you nothing, and, most importantly, you still end up with something static, e.g., MathFactory.createBogStandardAbsImplementation().

If you provide something simple, such as Math.abs, and client code wants to be able to use alternative implementations, then they can wrap it themselves, or possibly the language can help with that with a form of autoboxing (perhaps this is what C# delegates are; I should look sometime).
Perhaps I could do something like:

    Function[Double,Double] abs=Math.abs;

rather than:

    Function[Double,Double] abs=new Function[Double,Double]() {
        public Double run(Double input) {
            return Math.abs(input);
        }
    };

In other words, there's nothing wrong with static methods, they are just very simple. If you need substitutability, then you have to go the extra half-mile, but that really isn't a problem. Don't be afraid of static, it is the simplest thing that could possibly work, in many situations.

Ricky, yes there is nothing wrong with static in certain cases, but how can you compare the Math class with its simple methods to a collections library? With the Math class, not having polymorphism is a good thing because you always want these methods to behave the same given how fundamental they are. Unfortunately a collections library is very different. It makes absolutely no sense that collections methods are static at all, and that has nothing to do with being afraid of static. With collections we have several implementations provided outside the JDK collections, such as the commons-collections library, the way Hibernate uses Persistent versions to do lazy loading and so on. These collections implementations differ in implementation, but are transparent to the user. This is the essence of OO.

I mean, how can you provide a collections feature that uses closures without it being polymorphic? I'm sorry, but Java is an OO language; if you're using static methods in this way you haven't "got" OO.

This:

    static eachEntry(Map map, Closure closure) {
        for(entry in map.entrySet()) {
            closure.call(entry.key, entry.value)
        }
    }

can be written more simply as:

    static eachEntry(map, closure) {
        map.each{key, value -> closure(key, value)}
    }

:)

A static method doesn't prevent substitutability, as you can always wrap (interfacify - look, new jargon!).
Suppose that Collections.only (a method that filters out everything that fails a test) was not static. You would need something like:

    CollectionsUtility.createTheUsualCollectionsImplementation().only(iterable,closure)

Of course, that could go down in size:

    Collections.stdOnly().only(iterable,closure)

I don't see what you gain with that, except that there would be an Only/CanOnly/Onlyable interface too (something declaring an 'only' method), which would serve as an automatic starting point when you wanted to write your own. Otherwise you would be polluting the actual collection interfaces with methods that don't vary between implementations. Of course, they could vary between implementations, and in that case, interfacifying 'only' would be useful, but that doesn't mean that it should be disallowed to have a static implementation of only somewhere.

"These collections implementations differ in implementation, but are transparent to the user. This is the essence of OO."

Not really. Encapsulation has been around since before objects. OO is about grouping related capabilities.

"I mean, how can you provide a collections feature that uses closures without it being polymorphic? I'm sorry, but Java is an OO language, if you're using static methods in this way you haven't "got" OO."

I almost assume this question is not a technical one. I could implement a Collections.only as a static method, fairly simply. Java is an OO language, but not everything needs to be an object. What you're talking about suggests that a single method such as Collections.only needs to be its own object (if clients aren't using it, and you ALWAYS need substitutability).

At some point the non-static breaks down and you need to get at a real implementation. Suppose I have a method that uses a static method; that is a hard coupling. Interfacify it, provide it through IoC (someContainer.getImplementationOfOnly()), and you don't have the hard coupling to the implementation anymore. Brilliant! Or is it?
If the implementation is the only conceivable/existing implementation, then you haven't gained anything. You still have the hard coupling, it is just spread out, until someone comes up with a useful alternative implementation, which is when the IoC approach becomes useful.

I'm not sure if it is even worth responding to this last comment, but here we go.

"A static method doesn't prevent substitutability, as you can always wrap (interfacify - look, new jargon!)."

Yes you could do this; in fact it is called the factory pattern. However, we're not talking about something a user implements. We're talking about core APIs here. This is not an appropriate or elegant solution for core APIs, which should always be interface and object based.

"Java is an OO language, but not everything needs to be an object. What you're talking about suggests that a single method such as Collections.only needs to be its own object."

Absolutely, but core APIs do.

Errr.. this just clarifies my point that you clearly haven't got it Ricky. You can't practise agile on core APIs. Waving the magic "refactor" wand doesn't work when dealing with JDK APIs, otherwise the Calendar classes would have died a horrible death years ago. Once a JDK API has been introduced, the whole backward compatibility conundrum comes into play. We're discussing the implementation and provision of new JDK methods to support closures. My point is that these cannot be static, as we're not living in "agile la la land" where core APIs can be refactored whenever you choose. Core APIs like java.util.* have to be interface based and they have to be objects, simple as that. Otherwise, how are framework developers supposed to provide alternative implementations? The factory pattern is not a solution. In fact we've already seen that with JNDI being the failure that it is and the development of the @Resource annotation. Dependency Injection (as opposed to IoC) is the way forward, and this relies on interfaces and polymorphism.
I'm not understanding how you don't comprehend this simple statement. The current proposal in Neal's blog proposes adding static methods to core APIs. My argument is not that a user can do this, but that they should not be implemented this way in core APIs. Otherwise there is absolutely no point in adding closures. I realised you are tiring of this being on your blog, so I posted a 'reply' on mine: Where does static really fit?

"Dependency Injection (as opposed to IoC)"

I wondered about that. I've not read a whole lot about either term, but I believe I use dependency injection, where I want substitutability. I looked at the wikipedia entry for both terms, and it seems to suggest that DI is a subset of IoC (contradicting you). Also, the IoC page says "This article or section is in need of attention from an expert on the subject." - maybe you can help.

Graeme: "You hit the nail on the head in your post really. Java is strongly typed and hence maybe it is not suited to closures at all?"

Many statically typed languages offer closures. There is nothing incompatible between closures and static typing.

"Closures are being added to C# because they are adding dynamic typing as well. Without this I fear the syntax and user experience will be horrific."

C# is not adding dynamic typing, they're adding static type inference. Totally different animal. Please check your facts next time, otherwise you come off as just another ignorant Java programmer.

Thanks for the clarification regarding C# slava. Admittedly, C# is not my strong point :-)
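The "interfacify" pattern the commenters keep circling — a static algorithm made substitutable by taking a one-method interface as its "closure" — can be made concrete. This is an illustrative Java 5-era sketch; the Test interface and only method are invented names for this example, not JDK or blog APIs:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Interfacify {
    // The "closure" stand-in: a one-method interface that callers
    // implement (here, with an anonymous class, as was idiomatic pre-Java 8).
    public interface Test<T> {
        boolean test(T value);
    }

    // The static algorithm, analogous to the hypothetical Collections.only
    // discussed above: keep everything that passes the test.
    public static <T> List<T> only(Iterable<T> source, Test<T> test) {
        List<T> result = new ArrayList<T>();
        for (T item : source) {
            if (test.test(item)) {
                result.add(item);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> evens = only(Arrays.asList(1, 2, 3, 4),
                new Test<Integer>() {
                    public boolean test(Integer value) {
                        return value % 2 == 0;
                    }
                });
        System.out.println(evens); // prints [2, 4]
    }
}
```

The static method stays simple, yet anyone wanting an alternative implementation can declare their own type with the same shape and inject it — which is the whole substitutability argument in the thread, in about twenty lines.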
http://graemerocher.blogspot.com/2006/10/closures-and-java-horror-story-begins.html
greg jiang (Greenhorn, since May 08)

who can help me in a example

I have got a clue. If I use <servlet-mapping> to map my class into a /servlet/myclass model and use the alias in the browser, the result is right, but the book did say this. I still wanna know if this is a difference between tomcat 3.0 and tomcat 3.2.3 (the one JBuilder 6 uses)?

18 years ago - Web Component Certification (OCEJWCD)

who can help me in a example

The example is in chapter 2.7 of Core Servlets/JSP. I think it is right but I could not get the result. Maybe it is a problem in configuring Tomcat. I use JBuilder 6 Enterprise Edition, so it uses Tomcat as the web server. The web.xml file is the <default webapp>/deployment descriptors/web.xml. I hope someone who is familiar with the configuration of JBuilder or Tomcat could help me. Thanks a lot.

The procedure init:

    package coreservlets;

    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;

    /** Example using servlet initialization. Here, the message
     *  to print and the number of times the message should be
     *  repeated is taken from the init parameters.
     */
    public class ShowMessage extends HttpServlet {
        private String message;
        private String defaultMessage = "No message.";
        private int repeats = 1;

        public void init(ServletConfig config) throws ServletException {
            super.init(config);
            message = config.getInitParameter("message");
            if (message == null) {
                message = defaultMessage;
            }
            try {
                String repeatString = config.getInitParameter("repeats");
                repeats = Integer.parseInt(repeatString);
            } catch (NumberFormatException nfe) {
                // keep the default value of repeats
            }
        }

        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            String title = "The ShowMessage Servlet";
            out.println("<HTML><BODY>\n" +
                        "<H1 ALIGN=CENTER>" + title + "</H1>");
            for (int i = 0; i < repeats; i++) {
                out.println(message + "<BR>");
            }
            out.println("</BODY></HTML>");
        }
    }

web.xml:

    <web-app>
        <servlet>
            <servlet-name>ShowMsg</servlet-name>
            <servlet-class>coreservlets.ShowMessage</servlet-class>
            <init-param>
                <param-name>message</param-name>
                <param-value>Shibboleth</param-value>
            </init-param>
            <init-param>
                <param-name>repeats</param-name>
                <param-value>5</param-value>
            </init-param>
        </servlet>
    </web-app>

18 years ago - Web Component Certification (OCEJWCD)

who can help me in a example

Thanks Carl. I just use ServletConfig.getInitParameter in a servlet's init(ServletConfig config) to get an init value of the servlet. I configured web.xml and added <servlet-name>, <servlet-class> and so on, just like Core Servlets/JSP said, but I could not get the param like the book said. I do not know what to do.

18 years ago - Web Component Certification (OCEJWCD)

who can help me in a example

I practice the sample in Core Servlets/JSP using init(config) to get params from the web server, but although I have set web.xml, I could not get the param through config.getInitParameter in the servlet. Maybe this example is in Core and More Servlets/JSP. Who has done this successfully? I use JBuilder 6, so the web server is Tomcat. Who can help me? Thanks.

18 years ago - Web Component Certification (OCEJWCD)

Passed SCWCD - Thanks JavaRanch

Congratulations Durga. By the way, could Professional JSP be downloaded online? Where? Thanks.

18 years ago - Web Component Certification (OCEJWCD)
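For what it's worth, the <servlet-mapping> fix the first post hints at might look like the following sketch (the /ShowMsg URL pattern is an invented example, not from the book). When a servlet is reached through the generic /servlet/ invoker instead of a declared name, the init-params registered under the ShowMsg declaration are not applied, which would explain getInitParameter returning nothing:

```xml
<web-app>
    <servlet>
        <servlet-name>ShowMsg</servlet-name>
        <servlet-class>coreservlets.ShowMessage</servlet-class>
        <init-param>
            <param-name>message</param-name>
            <param-value>Shibboleth</param-value>
        </init-param>
    </servlet>
    <!-- Requests to /ShowMsg are routed to the servlet declared above.
         Init-params are only picked up when the servlet is addressed
         through its declared name, not via the /servlet/ invoker. -->
    <servlet-mapping>
        <servlet-name>ShowMsg</servlet-name>
        <url-pattern>/ShowMsg</url-pattern>
    </servlet-mapping>
</web-app>
```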
https://coderanch.com/u/29709/greg-jiang
Created attachment 53121 [details]
Test-case extracted from the generic blake2b kernel code

Gcc-12 seems to generate a huge number of stack spills on this blake2b test-case, to the point where it overflows the allowable kernel stack on 32-bit x86.

This crypto thing has two 128-byte buffers, so a stack frame a bit larger than 256 is expected when the dataset doesn't fit in the register set. Just as an example, on this code, clang-14.0.0 generates a stack frame that is 296 bytes. In contrast, gcc-12.1.1 generates a stack frame that is almost an order of magnitude(!) larger, at 2620 bytes.

The trivial Makefile I used for this test-case is

    # The kernel cannot just randomly use FP/MMX/AVX
    CFLAGS := -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
    CFLAGS += -m32
    CFLAGS += -O2

    test:
    	gcc $(CFLAGS) -Wall -S blake2b.c
    	grep "sub.*%[er]sp" blake2b.s

to easily test different flags and the end result, but as can be seen from above, it really doesn't need any special flags except the ones that disable MMX/AVX code generation. And the generated code looks perfectly regular, except for the fact that it uses almost 3kB of stack space.

Note that "-m32" is required to trigger this - the 64-bit case does much better, presumably because it has more registers and needs fewer spills. It gets worse with some added debug flags we use in the kernel, but not that kind of "order of magnitude" worse. Using -O1 or -Os makes no real difference.

This is presumably due to some newly triggered optimization in gcc-12, but I can't even begin to guess at what we'd need to disable (or enable) to avoid this horrendous stack growth. Some very aggressive instruction scheduling thing that spreads out all the calculations and always wants to spill-and-reload the subexpressions that it CSE'd? I dunno. Pls advise.

The excessive stack literally causes build failures due to us using -Werror-frame-larger-than= to make sure stack use remains sanely bounded. The kernel stack is a rather limited resource.
Side note: it might be best to clarify that this is a regression specific to gcc-12. Gcc 11.3 doesn't have the problem, and generates code for this same test-case with a stack frame of only 428 bytes. That's still bigger than clang, but not "crazy bigger".

thumb1 (which has 16 registers but really only 8 are GPRs) does not have this issue in GCC 12, so I suspect a target specific change caused this.

Created attachment 53123 [details]
Mindless revert that fixes things for me

So hey, since you guys use git now, I thought I might as well just bisect this. Now, I have no idea what the best and most efficient way is to generate only "cc1", so my bisection run was this unholy mess of "just run configure, and then run 'make -j128' for long enough that 'host-x86_64-pc-linux-gnu/gcc/cc1' gets generated, and test that". I'm not proud of that hacky thing, but since gcc documentation is written in sanskrit, and mere mortals can't figure it out, it's the best I could do.

And the problem bisects down to

    8ea4a34bd0b0a46277b5e077c89cbd86dfb09c48 is the first bad commit
    commit 8ea4a34bd0b0a46277b5e077c89cbd86dfb09c48
    Author: Roger Sayle <roger@nextmovesoftware.com>
    Date:   Sat Mar 5 08:50:45 2022 +0000

        PR 104732: Simplify/fix DI mode logic expansion/splitting on -m32.

so yes, this seems to be very much specific to the i386 target. And yes, I also verified that reverting that commit on the current master branch solves it for me.

Again: this was just a completely mindless bisection, with a "revert to verify" on top of the current trunk, which for me happened to be commit cd02f15f1ae ("xtensa: Improve constant synthesis for both integer and floating-point"). I'm attaching the revert patch I used just so that you can see exactly what I did. I probably shouldn't have actually removed the testsuite entry, but again: ENTIRELY MINDLESS BISECTION RESULT.
(In reply to Linus Torvalds from comment #4)
> I'm not proud of that hacky thing, but since gcc documentation is written
> in sanskrit, and mere mortals can't figure it out, it's the best I could do.

And by 'sanskrit' I mean 'texi'. I understand that it's how GNU projects are supposed to work, but markdown (or rst) is just *so* much more legible and you really can read it like text. Anyway, that's my excuse for not knowing how to "just generate cc1" for a saner git bisect run. What I did worked, but was just incredibly ugly. There must be some better way gcc developers have when they want to bisect cc1 behavior.

Based on that bisect commit, it is also possible to repro this issue in earlier GCCs (11, 10, seems fine on <= 9) purely by taking away the -mno-sseX, which triggers the same splitting as now on gcc-12.

Investigating. Adding -mno-stv the stack size reduces from 2612 to 428 (and on godbolt the number of assembler lines reduces from 6952 to 6203).

(In reply to Roger Sayle from comment #7)
> Investigating. Adding -mno-stv the stack size reduces from 2612 to 428 (and
> on godbolt the number of assembler lines reduces from 6952 to 6203).

Thanks. Using -mno-stv may well be a good workaround for the kernel for now, except I don't have a clue what it means. I will google it to figure out whether it's appropriate for our use.

Looks like STV is "scalar to vector" and it should have been disabled automatically by the -mno-avx flag anyway. And the excessive stack usage was perhaps due to GCC preparing all those stack slots for integer -> vector movement that then never happens exactly because the kernel uses -mno-avx? So if I understand this right, the kernel can indeed just add -mno-stv to work around this?

(In reply to Roger Sayle from comment #7)
> Investigating. Adding -mno-stv the stack size reduces from 2612 to 428 (and
> on godbolt the number of assembler lines reduces from 6952 to 6203).

So now that I actually tried that, '-mno-stv' does nothing for me. I still see a frame size of 2620. I wonder what the difference in our setups is. I tested with plain gcc-12.1.1 from Fedora 36, maybe you tested with current trunk or something? Anyway, it seems -mno-stv isn't the solution at least for the released version ;(

Confirmed it is r12-7502-g8ea4a34bd0b0a462 on our bisect seed. Perhaps for !TARGET_STV || !TARGET_SSE2 we could keep the optabs, but split right away during expansion? Anyway, I think we need to understand what makes it spill that much more, and unfortunately the testcase is too large to find that out easily; I think we should investigate commenting out some rounds.

(In reply to Jakub Jelinek from comment #11)
> Anyway, I think we need to understand what makes it spill that much more,
> and unfortunately the testcase is too large to find that out easily, I think
> we should investigate commenting out some rounds.

I think you can comment out all the rounds except for one, and still see the effects of this. At least when I try it on godbolt with just ROUND(0) in place and rounds 1-12 deleted, for gcc-11.3 I get a stack frame of 388. And with gcc-12.1 I get a stack frame of 516. Obviously that's not nearly as huge, but it's still a fairly significant expansion and is presumably due to the same basic issue. I used godbolt just because I no longer have 11.3 installed to compare against, so it was the easiest way to verify that the stack expansion is still there.

Just to clarify: that's still with -O2 -m32 -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx

I'm sure it depends on the flags. The code is still not exactly trivial with just one round in place. It's a cryptographic hash, after all. But when compiling for 64-bit mode it all looks fairly straightforward, it really is that DImode stuff that makes it really ungainly when using -m32.

Something simple like this -- -- already exhibits the effect.

Furthermore, and this also applies to the full BLAKE2b compression function, if you replace all the xors in the rounds by adds, the stack size goes back to normal. On the other hand, replacing the adds by xors, making the whole round function bitwise ops, does not have any effect on stack size.

(In reply to Samuel Neves from comment #13)
> Something simple like this -- -- already
> exhibits the effect.

Yup. That's a much better test-case. I think you should attach it to the gcc bugzilla, I don't know how long godbolt remembers those links.

But godbolt here is good for another thing: select clang-14 as the compiler, and you realize how even gcc-11 is actually doing a really really bad job. And I'm not saying that to shame anybody: I think it's more a sign that maybe the issue is actually much deeper, and that the problem already exists in gcc-11, but some small perturbation then just makes an already existing problem much worse in gcc-12. So that commit I bisected to is probably almost entirely unrelated, and it is just the thing that turns up the bad behavior to 11.

So we feed DImode rotates into RA, which constrains register allocation enough to require spills (all 4 DImode vals are live across the kernel, not even -fschedule-insns can do anything here). I wonder if it ever makes sense to not split wide ops before reload. Though, ix86_rot{l,r}di3_doubleword define_insn_and_split patterns were split only after reload both before and after Roger's change, so somehow whether we emit it as SImode from the beginning or only split it before reload affects the RA decisions.

    unsigned long long foo (unsigned long long x, int y, unsigned long long z)
    {
      x ^= z;
      return (x << 24) | (x >> (-24 & 63));
    }

is too simplified; the difference with that is just that we used to emit setting of the DImode pseudo to 0 before setting its halves with xor, while now we don't, so it must be something else.
I believe as post-reload splitters the doubleword rotates have been introduced already in PR17886. Rewriting those into pre-reload splitters from post-reload splitters would be certainly possible, I will try that, the question is whether it would cure this and what effects it would have on other code. So, I've tried: --- gcc/config/i386/i386.md.jj 2022-06-13 10:53:26.739290704 +0200 +++ gcc/config/i386/i386.md 2022-06-14 11:09:24.467024047 +0200 @@ -13734,14 +13734,13 @@ ;; shift instructions and a scratch register. (define_insn_and_split "ix86_rotl<dwi>3_doubleword" - [(set (match_operand:<DWI> 0 "register_operand" "=r") - (rotate:<DWI> (match_operand:<DWI> 1 "register_operand" "0") - (match_operand:QI 2 "<shift_immediate_operand>" "<S>"))) - (clobber (reg:CC FLAGS_REG)) - (clobber (match_scratch:DWIH 3 "=&r"))] - "" + [(set (match_operand:<DWI> 0 "register_operand") + (rotate:<DWI> (match_operand:<DWI> 1 "register_operand") + (match_operand:QI 2 "<shift_immediate_operand>"))) + (clobber (reg:CC FLAGS_REG))] + "ix86_pre_reload_split ()" "#" - "reload_completed" + "&& 1" [(set (match_dup 3) (match_dup 4)) (parallel [(set (match_dup 4) @@ -13764,6 +13763,7 @@ (match_dup 6)))) 0))) (clobber (reg:CC FLAGS_REG))])] { + operands[3] = gen_reg_rtx (<MODE>mode); operands[6] = GEN_INT (GET_MODE_BITSIZE (<MODE>mode) - 1); operands[7] = GEN_INT (GET_MODE_BITSIZE (<MODE>mode)); @@ -13771,14 +13771,13 @@ }) (define_insn_and_split "ix86_rotr<dwi>3_doubleword" - [(set (match_operand:<DWI> 0 "register_operand" "=r") - (rotatert:<DWI> (match_operand:<DWI> 1 "register_operand" "0") - (match_operand:QI 2 "<shift_immediate_operand>" "<S>"))) - (clobber (reg:CC FLAGS_REG)) - (clobber (match_scratch:DWIH 3 "=&r"))] - "" + [(set (match_operand:<DWI> 0 "register_operand") + (rotatert:<DWI> (match_operand:<DWI> 1 "register_operand") + (match_operand:QI 2 "<shift_immediate_operand>"))) + (clobber (reg:CC FLAGS_REG))] + "ix86_pre_reload_split ()" "#" - "reload_completed" + "&& 1" [(set 
(match_dup 3) (match_dup 4)) (parallel [(set (match_dup 4) @@ -13801,6 +13800,7 @@ (match_dup 6)))) 0))) (clobber (reg:CC FLAGS_REG))])] { + operands[3] = gen_reg_rtx (<MODE>mode); operands[6] = GEN_INT (GET_MODE_BITSIZE (<MODE>mode) - 1); operands[7] = GEN_INT (GET_MODE_BITSIZE (<MODE>mode));

On the #c0 test with -O2 -m32 -mno-mmx -mno-sse it makes some difference, but not as much as one would hope for. Numbers from gcc 11.3.1 20220614, 11.3.1 20220614 with the patch, 13.0.0 20220610, and 13.0.0 20220614 with the patch:

sub on %esp     428     2556    2620    2556
fn size in B    21657   23186   28413   23534
.s lines        6199    3942    7260    4198

So, trunk patched with the above patch results in significantly fewer instructions, but larger ones (more of them use 32-bit immediates, mostly in the form of a whatever(%esp) memory source operand). And the stack usage is high. I think the patch is still a good idea, it gives the RA more options, but we should investigate why it consumes so much more stack and results in larger code. Of course, size comparisons of -O2 code aren't the most important; for -O2 it is more important how fast the code is. When comparing -Os -m32 -mno-mmx -mno-sse, the numbers are:

sub on %esp     412     2564    2620    2564
fn size in B    27535   20508   35036   20416
.s lines        5816    3590    7251    3544

So in the -Os case, the patched functions are both smaller and use fewer instructions (significantly so), but compared to gcc 11 there is still significantly higher stack usage.

I checked the other target architectures that are supported by the kernel to see if anything else is affected. Apparently only sparc32 has a similar issue, with a frame size of 2280 bytes using gcc-10 or higher, compared to 600 to 800 bytes for gcc-4 through gcc-9.

The master branch has been updated by Roger Sayle <sayle@gcc.gnu.org>:

commit r13-1239-g3b8794302b52a819ca3ea78238e9b5025d1c56dd
Author: Roger Sayle <roger@nextmovesoftware.com>
Date:   Fri Jun 24 07:15:08 2022 +0100

    PR target/105930: Split *xordi3_doubleword after reload on x86.
This patch addresses PR target/105930, which is an ia32 stack frame size regression in high-register-pressure, XOR-rich cryptography functions reported by Linus Torvalds. The underlying problem is that once the limited number of registers on the x86 are exhausted, the register allocator has to decide which to spill, where some eviction choices lead to much poorer code, but these consequences are difficult to predict in advance. The patch below, which splits xordi3_doubleword and iordi3_doubleword after reload (instead of before), significantly reduces the amount of spill code and stack frame size, in what might appear to be an arbitrary choice. My explanation of this behaviour is that the mixing of pre-reload split SImode instructions and post-reload split DImode instructions is confusing some of the heuristics used by reload. One might think that splitting early gives the register allocator more freedom to use available registers, but in practice the constraint that double word values occupy consecutive registers (when ultimately used as a DImode value) is the greater constraint. Instead, I believe in this case, the pseudo registers used in memory addressing appear to be double counted for split SImode instructions compared to unsplit DImode instructions. For the reduced test case in comment #13, this leads to %eax being used to hold the long-lived argument pointer "v", blocking the use of the ax:dx pair for processing double word values. The important lines are at the very top of the assembly output:

2022-06-24  Roger Sayle  <roger@nextmovesoftware.com>
            Uroš Bizjak  <ubizjak@gmail.com>

gcc/ChangeLog
	PR target/105930
	* config/i386/i386.md (*<any_or>di3_doubleword): Split after
	reload.  Use rtx_equal_p to avoid creating memory-to-memory moves,
	and emit NOTE_INSN_DELETED if operand[2] is zero (i.e. with -O0).
(In reply to CVS Commits from comment #20)
> One might think that splitting early gives the register allocator more
> freedom to use available registers, but in practice the constraint that
> double word values occupy consecutive registers (when ultimately used as a
> DImode value) is the greater constraint.

Whee. Why does gcc have that constraint, btw? I tried to look at the clang code generation once more, and I don't *think* clang has the same constraint, and maybe that is why it does so much better? Yes, x86 itself inherently has a couple of forced register pairings (notably %edx:%eax for 64-bit multiplication and division), and obviously the whole calling convention requires well-defined pairings, but in the general case it seems to be a mistake to keep DImode values as DImode values and force them to be consecutive registers when used. Maybe I misunderstand. But now that this comes up I have this dim memory of actually having had a discussion like this before on bugzilla, where gcc generated horrible DImode code.

I just checked the current git tip, and this does seem to fix the original case too, with the old horrid 2620 bytes of stack frame now being a *much* improved 404 bytes! So your patch - or other changes - does fix it for me, unless I did something wrong in my testing (which is possible). Thanks. I'm not sure what the gcc policy on closing the bug is (and I don't even know if I am allowed), so I'm not marking this closed, but it seems to be fixed as far as I am concerned, and I hope it gets released as a dot-release for the gcc-12 series.

(In reply to Linus Torvalds from comment #21)
> Whee.
>
> Why does gcc have that constraint, btw? I tried to look at the clang code
> generation once more, and I don't *think* clang has the same constraint, and
> maybe that is why it does so much better?

Registers in RTL have just a single register number and mode (ok, it has some extra info, but not a set of registers).
When it is a pseudo register, that doesn't constrain anything, it is just (reg:DI 175). But when it is a hard register, it still has just a single register number, so there is no way to express a non-consecutive set of registers through that, so (reg:DI 4) needs to be a di:si pair etc. If the wider registers are narrowed before register allocation, it is just a pair like (reg:SI 123) (reg:SI 256) and it can be allocated anywhere. If we wanted the RA to allocate non-consecutive registers, we'd need to represent that differently (say as a concatenation of SImode registers), but then it wouldn't be accepted by the constraints and predicates of most of the define_insn_and_split patterns.

I realize that you want to do a lot of the early CSE etc operations at that higher level, but by the time you are actually allocating registers and thinking about spilling them, why is it still a DImode thing? And this now brings back my memory of the earlier similar discussion - it wasn't about DImode code generation, it was about bitfield code generation being horrendous, where gcc was keeping the whole "this is a bitfield" information around for a long time and as a result generating truly horrendous code. When it looked like it instead should just have turned it into a load and shift early, and then doing all the sane optimizations at that level (ie rewriting simple bitfield code to just do loads and shifts generated *much* better code than using bitfields). But this is just my personal curiosity at this point - it looks like Roger Sayle's patch has fixed the immediate problem, so the big issue is solved. And maybe the fact that clang is doing so much better is due to something else entirely - it just _looks_ like it might be this artificial constraint by gcc that makes it do bad register and spill choices.
(In reply to Linus Torvalds from comment #23) > > And this now brings back my memory of the earlier similar discussion - it > wasn't about DImode code generation, it was about bitfield code generation > being horrendous, Searching around, it's this one from 2011: not really related to this issue, apart from the superficially similar issue with oddly bad code generation. (In reply to Linus Torvalds from comment #23) > ? This is what is being discussed here. Some possibilities are lower these multi-word operations during expansion from GIMPLE to RTL (after all, the generic code usually does that without anything special needed on the backend side unless one declares the backend can do that better), one counter-argument to that is the x86 STV pass which uses vector operations for 2 word operations when possible and it won't really work when it is lowered during expansion. Another is splitting those before register allocation, which is what some patterns did and what other patterns didn't. Or it can be split after register allocation. My understanding was that Roger tried to change some patterns from splitting after RA to before RA and it didn't improve this testcase, so in the end changed some other patterns from splitting before RA to after RA. The master branch has been updated by Roger Sayle <sayle@gcc.gnu.org>: commit r13-1362-g00193676a5a3e7e50e1fa6646bb5abb5a7b2acbb Author: Roger Sayle <roger@nextmovesoftware.com> Date: Thu Jun 30 11:00:03 2022 +0100 Use xchg for DImode double word rotate by 32 bits with -m32 on x86. This patch was motivated by the investigation of Linus Torvalds' spill heavy cryptography kernels in PR 105930. The <any_rotate>di3 expander handles all rotations by an immediate constant for 1..63 bits with the exception of 32 bits, which FAILs and is then split by the middle-end. 
This patch makes these 32-bit doubleword rotations consistent with the other DImode rotations during reload, which results in reduced register pressure, fewer instructions and the use of x86's xchg instruction when appropriate. In theory, xchg can be handled by register renaming, but even on micro-architectures where it's implemented by 3 uops (no worse than a three instruction shuffle), avoiding nominating a "temporary" register reduces user-visible register pressure (and has obvious code size benefits). The effects are best shown with the new testcase:

unsigned long long bar();
unsigned long long foo()
{
  unsigned long long x = bar();
  return (x>>32) | (x<<32);
}

for which GCC with -m32 -O2 currently generates:

	subl	$12, %esp
	call	bar
	addl	$12, %esp
	movl	%eax, %ecx
	movl	%edx, %eax
	movl	%ecx, %edx
	ret

but with this patch now generates:

	subl	$12, %esp
	call	bar
	addl	$12, %esp
	xchgl	%edx, %eax
	ret

With this patch, the number of lines of assembly language generated for the blake2b kernel (from the attachment to PR105930) decreases from 5626 to 5404. Although there's an impressive reduction in instruction count, there's no change/reduction in stack frame size.

2022-06-30  Roger Sayle  <roger@nextmovesoftware.com>
            Uroš Bizjak  <ubizjak@gmail.com>

gcc/ChangeLog
	* config/i386/i386.md (swap_mode): Rename from *swap<mode> to
	provide gen_swapsi.
	(<any_rotate>di3): Handle !TARGET_64BIT rotations by 32 bits via
	new gen_<insn>32di2_doubleword below.
	(<any_rotate>32di2_doubleword): New define_insn_and_split that
	splits after reload as either a pair of move instructions or an
	xchgl (using gen_swapsi).

gcc/testsuite/ChangeLog
	* gcc.target/i386/xchg-3.c: New test case.

This should now be fixed on both mainline and the GCC 12 release branch.

(In reply to Roger Sayle from comment #27)
> This should now be fixed on both mainline and the GCC 12 release branch.

Thanks everybody.
Looks like the xchg optimization isn't in the gcc-12 release branch, but the stack use looks reasonable from my (very limited) testing.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105930
Introduction: React Native doesn't provide any inbuilt component to make a blur view, but we can use the react-native-blur library provided by the react-native-community (@react-native-community/blur). It is easy to integrate and works seamlessly on both Android and iOS. In this post, I will show you how to use this blur library with an example on both Android and iOS.

How to install react-native-community/blur:

This library can be installed via npm or yarn:

yarn add @react-native-community/blur

or

npm install --save @react-native-community/blur

You don't have to link this library manually for React Native 0.60 and higher versions. For older React Native, you can link it as below:

react-native link @react-native-community/blur

For iOS, install this library from CocoaPods:

npx pod-install

That's it. You can now import and use it as below:

import { BlurView } from "@react-native-community/blur";

Properties of BlurView:

Following are the important properties of BlurView. You can check this GitHub page to learn more about the other properties:

Type of blur:

The type of blur is assigned by the blurType property. It can be any of the below options:

- xlight : extra light blur
- light : light blur
- dark : dark blur
- extraDark : extra dark blur. Available only for tvOS
- regular : regular blur. Available only for iOS 10+ and tvOS
- prominent : prominent blur. Available only for iOS 10+ and tvOS

Amount of blur:

The amount of blur is defined by the blurAmount property. It is a number, 10 by default. You can set it to a value in the range of 0 to 100. For Android, the maximum is 32; anything above will be changed to 32.

iOS : reducedTransparencyFallbackColor :

This is a color property. It defines the color to use if Reduce Transparency is enabled in iOS.

Android : blurRadius :

By default, it matches the blurAmount value. It is a number and you can assign a value between 0 and 25. On Android, this property manually adjusts the blur radius.
Android : downsampleFactor :

It is a number and takes a value between 0 and 25. It scales down the image before blurring on Android.

Android : overlayColor :

It is a color property to set one custom overlay color on Android.

Example program :

In this example program, we will add one blur view and one circular image view on top. Adding a blur view is similar to other views: we can add a style and place it wherever we want. Below is the complete program:

import React from 'react';
import {BlurView} from '@react-native-community/blur';
import {StyleSheet, View, Image} from 'react-native';

export default function HomeScreen() {
  return (
    <View style={styles.container}>
      <Image
        style={styles.absolute}
        source={{
          uri: '',
        }}
      />
      {/* blurType and blurAmount values here are illustrative */}
      <BlurView style={styles.absolute} blurType="light" blurAmount={10} />
      <View style={styles.roundImageBackground}>
        <Image
          style={styles.roundImage}
          source={{
            uri: '',
          }}
        />
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    justifyContent: 'center',
    alignItems: 'center',
    flex: 1,
  },
  absolute: {
    position: 'absolute',
    top: 0,
    left: 0,
    bottom: 0,
    right: 0,
  },
  roundImage: {
    height: 200,
    width: 200,
  },
  roundImageBackground: {
    backgroundColor: 'white',
    height: 300,
    width: 300,
    borderRadius: 150,
    alignItems: 'center',
    justifyContent: 'center',
  },
});

We don't have to place one view inside the BlurView; we can place it below to make it appear on top. On Android and iOS, the result looks similar.
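The Android-specific limits described above (blurAmount capped at 32, blurRadius limited to 0–25) can be modeled in plain JavaScript. This is only an illustration of the documented limits, not code from the library:

```javascript
// Clamp helpers mirroring the documented Android limits for
// @react-native-community/blur (illustrative only).
function androidBlurAmount(amount) {
  // Values above 32 are changed to 32 on Android.
  return Math.min(Math.max(amount, 0), 32);
}

function androidBlurRadius(radius) {
  // blurRadius accepts values between 0 and 25 on Android.
  return Math.min(Math.max(radius, 0), 25);
}

console.log(androidBlurAmount(100)); // 32
console.log(androidBlurRadius(10));  // 10
```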
https://www.codevscolor.com/react-native-blur-view
Only QPushButton doesn't call repaint?

Hi guys, I have experienced very strange behavior in my mini Qt app. I put an empty QComboBox in my dialog, connected to a QPushButton and a QRadioButton. Both the QPushButton and the QRadioButton are connected to a function that simply calls addItem on the QComboBox. After the QRadioButton is clicked, the combo box gets a new item and it appears right away. If I click the QPushButton, though, the new item is added as well but it only appears after I move focus away from the dialog and back. So it looks like there is a repaint issue associated only with the QPushButton.

PS: In an older version I did not experience this behavior. My current setup is:
macOS: 10.15.4
Xcode: 11.5.1
Qt: 5.13.2
Qt Creator: 4.12.1

Code below:

#include "mainwindow.h"
#include "ui_mainwindow.h"

MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent)
    , ui(new Ui::MainWindow)
{
    ui->setupUi(this);
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::on_pushButton_clicked()
{
    ui->comboBox->addItem("insert 1");
}

void MainWindow::on_radioButton_toggled(bool checked)
{
    ui->comboBox->addItem("insert 1");
}

Hi, I have seen some bug reports about macOS and repainting. Please try

void MainWindow::on_pushButton_clicked()
{
    ui->comboBox->addItem("insert 1");
    ui->comboBox->update();
}

and see if it then works every time.
@mrjj said in Only QPushButton doesn't call repaint?:

ui->comboBox->update();

It didn't help at all. Still the same behavior.

@Tony123 OK, and the bug I was thinking of seems to be fixed in that version. Did you update to Catalina recently?

@mrjj Currently I am using Catalina 10.15.4, but I tested the mini-app on 10.15.5 as well and the same issue appears.

Could you test with a newer Qt? The bug report says Qt 5.15, but you are on Qt 5.13, and this issue is really odd, so I wonder if it's a compatibility issue.

@mrjj Thanks for this information. I tried to move to a newer version of Qt (5.14.2, 5.15) but with no luck on Mac. I also created an issue related to this which is not resolved yet.
https://forum.qt.io/topic/115342/only-qpushbutton-don-t-call-repaint
Python Assertion Helpers inspired by Shouldly

Project description

Requirements

- forbiddenfruit
- a version of Python with which forbidden fruit will work (must implement the CTypes/CPython Python API)
- Python 2.7 or 3.3 (it may work with other versions, such as other 3.x versions, but it has not been tested with these versions)

Assertions

See ASSERTIONS.rst

Example

>>> import should_be.all
>>> class Cheese(object):
...     crackers = 3
...
>>> swiss = Cheese()
>>> swiss.crackers.should_be(4)
AssertionError: swiss.crackers should have been 4, but was 3

Note

Because of the way the Python REPL shows stack traces, if the 'should_be' assertion is typed in a line on the REPL, '(unknown)' will show instead of 'swiss.crackers'. This is not an issue when the 'should_be' statement is in a file instead.

Installation

The easy way

$ sudo pip install

The slightly-less-easy way

$ git clone
$ cd should_be
$ ./setup.py build
$ sudo ./setup.py install

Extending

Writing your own assertions is fairly easy. There are two core parts of ShouldBe: BaseMixin and should_follow. All assertions should be placed in classes that inherit from BaseMixin. BaseMixin provides the basic utilities for extending built-in objects with your assertions. The class which holds your assertions should have a class variable called target_class. This is the class on which your assertions will be run. By default, this is set to object. If you wish to have your assertions run on object, there are a few additional considerations to make (see warning below). Then, assertions should be defined as instance methods. Each method should call self.should_follow one or more times. Think of should_follow as assertTrue on steroids. It has the following signature: should_follow(self, assertion, msg=None, **kwargs). Obviously, assertion is an expression which, when False, causes should_follow to raise an AssertionError. So far, pretty normal. msg is where things get interesting.
msg should be a new-style Python format string which contains only named substitutions. By default, should_follow will pass the txt and self keys to the format method, in addition to any keyword arguments passed to should_follow. self is, obviously, the current object. txt is the code that represents the current object. For instance, if we wrote (3).should_be(4), txt would be '(3)'. If we wrote cheese.variety.should_be('cheddar'), txt would be 'cheese.variety'. Once all of your assertions are written, you can simply write MyAssertionMixin.mix() to load your assertions. A setuptools hook is on the way for autoloading custom assertion mixins with import should_be.all.

Warning

When you extend object, you also need to create the proper mixins for NoneType, since None doesn't have instance methods per se (since self gets set to None, the Python interpreter complains). Thankfully, this is fairly easy. Simply create a class which inherits from NoneTypeMixin, and set the class variable source_class to the name of your object assertions class. You can then simply run MyNoneTypeMixin.mix(), and your methods will be automatically retrieved and converted from your object mixin class.

Note

Assertions for ABCs (such as Sequence) will be automatically mixed in to 'registered' classes that do not inherit methods from the ABCs normally (such as list, etc.) when the mix() method is called (this will also check for classes that are registered to subclasses of the ABCs).
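The should_follow contract described above is easy to prototype without forbiddenfruit. The sketch below is a stand-alone re-implementation of the message-formatting behaviour only; the class name and the explicitly passed txt label are my own (the real BaseMixin recovers txt from the caller's source code and patches built-in types):

```python
# Stand-alone sketch of the should_follow pattern (not the real package).
class BaseMixinSketch:
    # txt would normally be recovered from the caller's source text;
    # here it is passed in explicitly to keep the sketch self-contained.
    def __init__(self, value, txt="(unknown)"):
        self.value = value
        self.txt = txt

    def should_follow(self, assertion, msg=None, **kwargs):
        # Raise AssertionError with a formatted message when the
        # assertion expression is falsy, as described above.
        if not assertion:
            msg = msg or "{txt} failed an assertion (value was {self.value})"
            raise AssertionError(msg.format(txt=self.txt, self=self, **kwargs))

    def should_be(self, expected):
        self.should_follow(
            self.value == expected,
            "{txt} should have been {expected}, but was {self.value}",
            expected=expected,
        )


crackers = BaseMixinSketch(3, txt="swiss.crackers")
try:
    crackers.should_be(4)
except AssertionError as e:
    print(e)   # swiss.crackers should have been 4, but was 3
```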
https://pypi.org/project/shouldbe/
Hi! I'm new to all this and I have a question about my TowerPro SG-5010 servo. With this code I'm able to move a micro servo by rotating a pot. But when I swap the micro servo for my TowerPro (a lot bigger) it won't work. Is this a power-related problem? What do I need to do to make use of my SG-5010? Thank you very much!

#include <Servo.h>

Servo myservo;
int sensorPin = 0;
int sensorValue = 0;

void setup() {
  myservo.attach(9);
}

void loop() {
  sensorValue = analogRead(sensorPin);
  int place = map(sensorValue, 0, 1023, 180, 0);
  myservo.write(place);
}
https://forum.arduino.cc/t/noob-servo-question-towerpro-sg-5010-wont-move/62502
On Sun, 20 Aug 2006 16:31:01 -0400 Steven Huwig <address@hidden> wrote: #> > #> Is there anyone here who knows Python and can advise me on #> > #> whether to install this patch? #> > #> > Apparently not :( #> #> I tried the patch and it didn't seem to affect anything negatively. #> I have always assumed that there was some reason that the mode #> forced a new namespace, but perhaps that is not the case. I actually thought so as well... but was unable to figure out what the reason could be. And I often play with the Python interpreter after executing files, so having to type "emacs." every so often was quite annoying. #> > Therefore I do not think it is a good idea to give up developing #> > our version. #> #> I do use python-mode from CVS Emacs exclusively when coding Python. #> It works well enough as a basic mode, though the interactive portions #> are primitive when compared to some other Emacs modes. Yes, I agree. For writing code, the mode is very good. I personally use the interactive features quite a bit, and they indeed could use some extra attention. #> >. #> #> I am interested in testing and contributing fixes to python-mode as #> well. Great! #> > Would saying "GNU python-mode" make sense? #> #> How about "Emacs 22 python-mode?" I am not sure how unambiguous that would be... it will be pretty clear until the release takes place, but probably not so much afterwards. And the release is (probably) going to happen sooner or later ;) -- Best wishes, Slawomir Nowaczyk ( address@hidden ) A phoneless cord: For people who like peace and quiet.
https://lists.gnu.org/archive/html/emacs-devel/2006-08/msg00705.html
The presentation of this document has been augmented to identify changes from a previous version. Three kinds of changes are highlighted: new, added text; changed text; and deleted text. See also translations. This document is also available in non-normative formats.

Copyright © 2007 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.

This document is one of a set of eight documents that have progressed to Recommendation together (XQuery 1.0, XQueryX 1.0, XSLT 2.0, Data Model, Functions and Operators, Formal Semantics, Serialization, XPath 2.0). This is a Recommendation of the W3C. It has been developed by the W3C XML Query Working Group, which is part of the XML Activity. This document has been reviewed by W3C Members, by software developers, and by other W3C groups and interested parties, and is endorsed by the Director as a W3C Recommendation. It is a stable document and may be used as reference material or cited from another document. W3C's role in making the Recommendation is to draw attention to the specification and to promote its widespread deployment. This enhances the functionality and interoperability of the Web.

This document incorporates minor changes made against the Proposed Recommendation of 21 November 2006; please see the public disposition of comments for details. Changes to this document since the Proposed Recommendation are detailed in the J Revision Log.

Please report errors in this document; include the string "[XQuery]" in the subject line of your report, whether made in Bugzilla or in email. Each Bugzilla entry and email message should contain only one error report.
Archives of the comments and responses.

4.5 URI Literals
3.7 Constructors
3.7.1 Direct Element Constructors
3.7.1.1 Attributes
3.7.1.2 Namespace Declaration Attributes
3.7.1.3 Content
3.7.1.4 Boundary Whitespace
3.7.2 Other Direct Constructors
3.7.3 Computed Constructors
3.7.3.1 Computed Element Constructors
3.7.3.2 Computed Attribute Constructors
3.7.3.3 Document Node Constructors
3.7.3.4 Text Node Constructors
3.7.3.5 Computed Processing Instruction Constructors
3.7.3.6 Computed Comment Constructors
3.7.4 In-scope Namespaces of a Constructed Element
3.8 FLWOR Expressions
3.8.1 For and Let Clauses
3.8.2 Where Clause
3.8.3 Order By and Return Clauses
3.8.4 Example
3.9 Ordered and Unordered Expressions
3.10 Conditional Expressions
3.11 Quantified Expressions
3.12 Expressions on SequenceTypes
3.12.1 Instance Of
3.12.2 Typeswitch
3.12.3 Cast
3.12.4 Castable
3.12.5 Constructor Functions
3.12.6 Treat
5.2.1 Schema Import Feature
5.2.2 Schema Validation Feature
5.2.3 Static Typing Feature
5.2.3.1 Static Typing Extensions
5.2.4 Full Axis Feature
5.2.5 Module Feature
5.2.6 Serialization Feature
5.3 Data Model Conformance
A XQuery Grammar
G.2.1 Interoperability Considerations
G.2.2 Applications Using this Media Type
G.2.3 File Extensions
G.2.4 Intended Usage
G.2.5 Author/Change Controller
G.3 Encoding Considerations
G.4 Recognizing XQuery Files
G.5 Charset Default Rules
G.6 Security Considerations
H Glossary (Non-Normative)
I Example Applications (Non-Normative)
I.1 Joins
I.2 Grouping
I.3 Queries on Sequence
I.4 Recursive Transformations
I.5 Selecting Distinct Combinations
J Revision Log (Non-Normative)

The Query Working Group has identified a requirement for both a non-XML]. [Definition: XQueryQuery expressions. [XQuery 1.0 and XPath 2.0 Formal Semantics] defines the static semantics of XQuery and also contains a formal but non-normative description of the dynamic semantics that may be useful for implementors and others who require a formal definition.
The type system of XQuery is based on [XML Schema]. The built-in function library and the operators supported by XQuery]..) Note: This specification contains no assumptions or requirements regarding the character set encoding of strings of [Unicode] characters.. Certain namespace prefixes are predeclared by XQuery and bound to fixed namespace URIs. These namespace prefixes are as follows: xml = xs = xsi = fn = local = (see 4.15 Function Declaration.) In addition to the prefixes in the above list, this document uses the prefix err to represent the namespace URI (see 2.3.2 Identifying and Reporting Errors). This namespace prefix is not predeclared and its use in this document is not normative. Element nodes have a property called in-scope namespaces. [Definition:.] Note: In [XPath 1.0], the in-scope namespaces of an element node are represented by a collection of namespace nodes arranged on a namespace axis, which is optional and deprecated in [XPath 2.0]. XQuery does not support the namespace axis and does not represent namespace bindings in the form of nodes. However, where other specifications such as [XSLT 2.0 and XQuery 1.0 Serialization] refer to namespace nodes, these nodes may be synthesized from the in-scope namespaces of an element node by interpreting each namespace binding as a namespace node. [Definition:. [Definition: The expression context for a given expression consists of all the information that can affect the result of the expression.] This information is organized into two categories called the static context and the dynamic context. ]. The individual components of the static context are summarized below. Rules governing the scope and initialization of these components can be found in C.1 Static Context Components. [Definition: XPath 1.0 compatibility mode. This component must be set by all host languages that include XPath 2.0 as a subset, indicating whether rules for compatibility with XPath 1.0 are in effect. 
XQuery sets the value of this component. Some namespaces are predefined; additional namespaces can be added to the statically known namespaces by namespace declarations in a Prolog and by namespace declaration attributes in direct element constructors. . If the Schema Import Feature is supported, in-scope schema types also include all type definitions found in imported schemas. ] [Definition:. ]:.] [Definition: In-scope variables. This is a set of (expanded QName, type) pairs. It defines the set of variables that are available for reference within an expression. The expanded QName is the name of the variable, and the type is the static type of the variable.] Variable declarations in a Prolog are added to in-scope variables. An expression that binds a variable (such as a let, for, some, or every expression) extends the in-scope variables of its subexpressions with the new bound variable and its type. Within a function declaration, the in-scope variables are extended by the names and types of the function parameters. The static type of a variable may be either declared in a query or (if the Static Typing Feature is enabled) inferred by static type inference rules as described in [XQuery 1.0 and XPath 2.0 Formal Semantics]. [Definition: Context item static type. This component defines the static type of the context item within the scope of a given expression.] . The function signatures include the signatures of constructor functions, which are discussed in 3.12.5 Constructor Functions. [Definition: Statically known collations. This is an implementation-defined set of (URI, collation) pairs. It defines the names of the collations that are available for use in processing queries and expressions.] [Definition: A collation is a specification of the manner in which strings and URIs are compared and, by extension, ordered. For a more complete definition of collation, see [XQuery 1.0 and XPath 2.0 Functions and Operators].] [Definition:.] [Definition:.] 
[Definition: Ordering mode. Ordering mode, which has the value ordered or unordered, affects the ordering of the result sequence returned by certain path expressions, union, intersect, and except expressions, and FLWOR expressions that have no order by clause.] Details are provided in the descriptions of these expressions. [Definition: Default order for empty sequences. This component controls the processing of empty sequences and NaN values as ordering keys in an order by clause in a FLWOR expression, as described in 3.8.3 Order By and Return Clauses.] Its value may be greatest or least. [Definition: Boundary-space policy. This component controls the processing of boundary whitespace by direct element constructors, as described in 3.7.1.4 Boundary Whitespace.] Its value may be preserve or strip. [Definition: Copy-namespaces mode. This component controls the namespace bindings that are assigned when an existing element node is copied by an element constructor, as described in 3.7.1 Direct Element Constructors. Its value consists of two parts: preserve or no-preserve, and inherit or no-inherit.] :.] [Definition:.] [Definition: [XML Schema] for the range of legal values of a timezone.] [Definition:.] The set of available documents is not limited to the set of statically known documents, and it may be empty.. ,. [Definition: Default collection. This is the sequence of nodes that would result from calling the fn:collection function with no arguments.] The value of default collection may be initialized by the implementation. XQuery is defined in terms of the data model and the expression context. XDM instance. This process occurs outside the domain of XQuery,QueryQuery. The in-scope schema definitions in the static context query is parsed into an internal representation called the operation tree (step SQ1 in Figure 1). A parse error is raised as a static error [err:XPST0003]. The static context is initialized by the implementation (step SQ2). 
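Several of the static-context components just described (boundary-space policy, ordering mode, default order for empty sequences, copy-namespaces mode) correspond directly to prolog declarations. A small illustrative prolog fragment, with arbitrarily chosen values and a hypothetical namespace URI:

```xquery
declare boundary-space preserve;               (: boundary-space policy :)
declare ordering unordered;                    (: ordering mode :)
declare default order empty greatest;          (: default order for empty sequences :)
declare copy-namespaces no-preserve, inherit;  (: copy-namespaces mode :)
declare namespace ex = "http://example.org/ns"; (: adds to the statically known namespaces :)

<ex:result>{ 1 + 1 }</ex:result>
```

Each declaration overrides the corresponding implementation-provided default for the scope of the module.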
The static context is then changed and augmented based on information in the prolog (step SQ3). If the Schema Import Feature is supported, the in-scope schema definitions are populated with information from imported schemas. If the Module Feature is supported, the static context is extended with function declarations and variable declarations from imported modules. Note: The data model permits an element node to have fewer in-scope namespaces than its parent. Correct serialization of such an element node would require "undeclaration" of namespaces, which is a feature of [XML Names 1.1]. An implementation that does not support [XML Names 1.1] is permitted to serialize such an element without "undeclaration" of namespaces, which effectively causes the element to inherit the in-scope namespaces of its parent. In order for XQuery to be well defined, the input XDM instance, the static context, and the dynamic context must be mutually consistent. For each variable declared as external: If the variable declaration includes a declared type, the external environment must provide a value for the variable that matches the declared type, using the matching rules in 2.5.4 SequenceType Matching. If the variable declaration does not include a declared type, the external environment must provide a type and a matching value, using the same matching rules. For each function declared as external: the function implementation must either return a value that matches the declared result type, using the matching rules in 2.5.4 SequenceType Matching, or raise an implementation-defined error. For a given query, define a participating ISSD as the in-scope schema definitions of a module that is used in evaluating the query. If two participating ISSDs contain a definition for the same schema type, element name, or attribute name, the definitions must be equivalent in both ISSDs.
Furthermore, if two participating ISSDs each contain a definition of a schema type T, the set of types derived by extension from T must be equivalent in both ISSDs. Also, if two participating ISSDs each contain a definition of an element name E, the substitution group headed by E must be equivalent in both ISSDs. In the statically known namespaces, the prefix xml must not be bound to any namespace URI other than http://www.w3.org/XML/1998/namespace, and no prefix other than xml may be bound to this namespace URI. As described in 2.2.3 Expression Processing, XQuery errors are identified by QNames of the form err:XXYYnnnn, where: err denotes the namespace for XPath and XQuery errors. This binding of the namespace prefix err is used for convenience in this document, and is not normative. XX denotes the language in which the error is defined, using the following encoding: XP denotes an error defined by XPath. Such an error may also occur in XQuery, since XQuery includes XPath as a subset. XQ denotes an error defined by XQuery. However, the contents of this namespace may be extended to include additional error definitions. The method by which an XQuery processor reports errors is implementation-defined. $N[@x castable as xs:date][xs:date(@x) gt xs:date("2000-01-01")] To avoid unexpected errors caused by expression rewrite, tests that are designed to prevent dynamic errors should be expressed using conditional or typeswitch expressions. Conditional and typeswitch expressions can be used for this purpose. [Definition: Document order is a total ordering among nodes, even if this order is implementation-dependent.] Within a tree, document order satisfies the following constraints: The root node is the first node. Every node occurs before all of its children and descendants. Attribute nodes immediately follow the element node with which they are associated. During atomization of a node, its typed value is returned (err:FOTY0012 is raised if the node has no typed value).
Atomization is used in processing the following types of expressions:
- Arithmetic expressions
- Comparison expressions
- Function calls and returns
- Cast expressions
- Constructor expressions for various kinds of nodes
- order by clauses in FLWOR expressions

Under certain circumstances (listed below), it is necessary to find the effective boolean value of a value. [Definition: The effective boolean value of a value is defined as the result of applying the fn:boolean function to the value, as defined in [XQuery 1.0 and XPath 2.0 Functions and Operators].] Note: The effective boolean value of a sequence that contains at least one node and at least one atomic value may be nondeterministic in regions of a query where ordering mode is unordered. The effective boolean value of a sequence is computed implicitly during processing of the following types of expressions:
- Logical expressions (and, or)
- The fn:not function
- The where clause of a FLWOR expression
- Certain types of predicates, such as a[b]
- Conditional expressions (if)
- Quantified expressions (some, every)

Note: The definition of effective boolean value is not used when casting a value to the type xs:boolean, for example in a cast expression or when passing a value to a function whose expected parameter is of type xs:boolean. In certain places in the XQuery grammar, a statically known valid URI is required. These places are denoted by the grammatical symbol URILiteral. For example, URILiterals are used to specify namespaces and collations, both of which must be statically known. Syntactically, a URILiteral is identical to a StringLiteral: a sequence of zero or more characters enclosed in single or double quotes. However, an implementation MAY raise a static error [err:XQST0046] if the value of a URILiteral is of nonzero length and is not in the lexical space of xs:anyURI.
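As an illustration of my own (not from the specification text), the condition of an if expression is one place where the effective boolean value is computed implicitly: a non-empty node sequence yields true, the empty sequence false.

```xquery
(: fn:boolean is applied implicitly to the test expression :)
if (//book[price gt 30])
then "expensive books exist"
else "none found"
```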
As in a string literal, any predefined entity reference (such as &amp;), character reference (such as &#8226;), or EscapeQuot or EscapeApos (for example, "") is replaced by its appropriate expansion. Certain characters, notably the ampersand, can only be represented using a predefined entity reference or a character reference.
http://www.w3.org/TR/2007/REC-xquery-20070123/diff-from-20061121.html
Enumerator Goodness Even With Legacy Code

Our codebase has a lot of fairly old Ruby code that isn't quite up to snuff on current best practices. One area where this is particularly true is how we deal with blocks. It's not uncommon to see code like this in our codebase:

def find_each(row_id, start: nil)
  current = start
  loop do
    res = client.get_chunk(row_id, start: current)
    break if res.empty?
    yield res
    current = res.last
  end
end

This is a pretty common thing to see in Ruby. This code could be doing something like walking a Cassandra row. Yielding records this way is really important so we don't blow up either our databases or our workers, as it gives Ruby time to GC the results between yields. Unfortunately, this still isn't great. Our jobs are more or less generic, so it makes more sense to instead pass around different streams. One approach you could take is to duck-type on some method (say :find_each), but that can still be pretty limiting and difficult to maintain as our application rapidly grows. Also, what if we only want the first item of a stream or something? With the above approach we'd have to do something along the lines of:

def find_first(row_id)
  find_each(row_id) do |slice|
    return slice.first
  end
end

This is a borderline GOTO statement if you ask me. A better approach would be to use an Enumerator. In case you're not familiar with using enumerators, I think it's easiest to just walk through an example:

def find_each(row_id, start: nil)
  Enumerator.new do |yielder|
    current = start
    loop do
      res = client.get_chunk(row_id, start: current)
      break if res.empty?
      yielder.yield res
      current = res.last
    end
  end
end

Enumerator.new takes a block that yields a yielder. This is just an object that you can call yield on (if you're curious, Ruby uses Fibers under the hood to switch execution contexts). Just like above, you can simply pass it the thing you want to yield. Why do we do this?
This makes our stream much more flexible: it allows us to pass it around as an object and tack things on (and make it lazy, which I won't cover here but you should play around with). Here's what our find_first looks like now:

def find_first(row_id)
  find_each(row_id).first
end

Can't get much simpler than that. However, we still have one more slight problem. Our API, which worked with blocks before, is now broken:

find_each(row_id) { |slice| puts slice } #=> Doesn't work! :(

Digging through the Rails codebase I found a few examples of the use of to_enum (or enum_for; they are aliased). This method is cool because it lets you write your methods with just yield but then allows you to wrap them in enumerators very easily. Implementing with it looks like:

def find_each(row_id, start: nil)
  return to_enum(:find_each, row_id, start: start) unless block_given?
  current = start
  loop do
    res = client.get_chunk(row_id, start: current)
    break if res.empty?
    yield res
    current = res.last
  end
end

This first checks whether a block is given to the method. If so, continue and use it. If not, create an Enumerator using this method. This is equivalent to wrapping the rest of the body in the Enumerator.new {...} syntax we used previously. The first argument to to_enum is the method name, and the rest are arguments that will get passed to that method. The end result of this approach is a solution that has all of the conveniences of both the Enumerator and the direct yield approach:

def find_first(row_id)
  find_each(row_id).first
end

find_each(row_id) { |slice| puts slice } #=> will print each slice

to_enum can also be used to help me with our legacy code; I don't have to go around touching a bunch of old code to have enumerators if I don't want to. I can simply use to_enum with the first implementation:

to_enum(:find_each, row_id).first

Enumerators are one of the most powerful Ruby tools out there. They can help you write code that is not only elegant and flexible but also highly performant (e.g.
batching requests to reduce bandwidth). Moreover, you can do all of this transparently without the consumer of your Enumerator having to know the details of how the records are fetched. Ultimately it's well worth the investment to take the time to master them.
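To make the pattern concrete, here is a self-contained sketch of the final to_enum approach. The get_chunk stub and its data are mine, not from the post (they stand in for the real client call), and start is a positional argument here for brevity:

```ruby
# Stub data source standing in for client.get_chunk (hypothetical):
# each call returns the chunk that follows the given cursor.
CHUNKS = { nil => [1, 2], 2 => [3, 4], 4 => [] }

def get_chunk(start)
  CHUNKS.fetch(start, [])
end

def find_each(start = nil)
  # No block? Wrap this very method in an Enumerator, as the post describes.
  return to_enum(:find_each, start) unless block_given?
  current = start
  loop do
    res = get_chunk(current)
    break if res.empty?
    yield res
    current = res.last
  end
end

# Block form, as before:
collected = []
find_each { |slice| collected.concat(slice) }

# Enumerator form: .first stops after the first yielded slice,
# with no early-return trick needed.
first_slice = find_each.first
```

Both call styles share one method body, which is the whole appeal of the to_enum trick.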
https://jbodah.github.io/blog/2016/05/09/enumerator-goodness-even-legacy-code/
Re: [basex-talk] catalog.xml - xsd - urn On 21.10.2019 22:05, SW-Service wrote: Somewhat unrelated to your initial question (and it was probably someone else's idea to put a minor version number in a namespace), but this is considered bad practice. All processing applications, for Re: [basex-talk] catalog.xml - xsd - urn tl;dr – Don't bother that the namespace is a URN. – Don't confuse namespaces with schema locations. – Apparently BaseX cannot use a catalog resolver for resolving schema locations. – Use other more or less portable ways for accessing the schemas, for ex. store them in a database or put the [basex-talk] catalog.xml - xsd - urn Good day, Where is the catalog.xml file stored? I want to validate XML files against an XSD, but the XSD is referenced via a URN. Thank you very much.
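For readers unfamiliar with XML catalogs: an OASIS XML catalog entry that maps a URN to a local schema file looks roughly like this. The URN and file path are illustrative, and note that, per the answer above, BaseX apparently cannot use such a resolver for schema locations:

```xml
<!-- catalog.xml: map the schema's URN to a local copy of the XSD -->
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <uri name="urn:example:invoice:v1.0" uri="schemas/invoice-v1.0.xsd"/>
</catalog>
```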
https://www.mail-archive.com/search?l=basex-talk%40mailman.uni-konstanz.de&q=subject:%22%5C%5Bbasex%5C-talk%5C%5D+catalog.xml+%5C-+xsd+%5C-+urn%22&o=newest
Stencyl (HaXe) Extension

This week I've started using Colyseus. I'm going to try to create a HaXe extension for Stencyl so that I can publish multiplayer games with Colyseus. Previously I've created a couple of multiplayer extensions for Stencyl but failed to get anything near realtime. Maybe Colyseus can help?!? At first glance the Colyseus stuff looks great. I love the fact that I can run the server on my own infrastructure. It will be a long road ahead and I've decided to start this Showcase thread to let you all follow my progress. Maybe you can help me along the way?!? First I started by porting the example nyancat to a Stencyl game (using a bitmap for the cat). Flash publication failed! I've tried to put the Flash (swf) on the same server and experimented with crossdomain.xml, but apparently websockets do not care about that. Working clients:
- HTML5 (tested only Chrome)
- Windows native
- Mac OSX native
- iOS (iPad mini)
- iOS Simulator (iPhone 5s)
- Android (Samsung S3 Note)
- Linux

Since I like developer journals myself, I'm writing one for the Stencyl Colyseus Extension journey. The main idea of the Stencyl Extension is a general-purpose server with accompanying Stencyl blocks. Not sure if that is the right approach, but that is the path I'm taking now. Goals:
- Application ID system
- Lobby system
- Turn-based system
- System where one client acts as a server for the other clients and itself
- Chat system

First, I want something persistent. Currently the thought process is that there is a LobbyRoom. That can be used for matchmaking, but I hope to use it for public storage of the application IDs.

Persistence

The journey leads to investigations on how to store application IDs. This can be done using a simple text file, since I'm aiming for a single server (at this time) and hopefully don't need a (shared) database server.
Test LocalPresence: adjusted rooms/01-chat-room.ts

import { LocalPresence } from "colyseus";

After maxClients = 4;:

private presence: LocalPresence = new LocalPresence();

Added to onMessage:

var msg = this.presence.hget("MdotTest", "Field");
console.log("Last Message: ", msg);
this.presence.hset("MdotTest", "Field", data.message);

This works really well. When leaving the room and joining again it still knew the last message. However, when I kill the server and start it up again the data is lost. I couldn't find any writing to files. In RedisPresence there is use of promisify, but nowhere is there anything with files (at least as far as I can see). How do we get persistence across server reboots?! This is what I hacked together, and I'm sure there is something else that I should be doing, so please point m.e. to the better thing!!!

import { readFileSync, writeFileSync } from "fs";

Added to onJoin:

var jsonfile = JSON.parse(readFileSync('temp.txt', 'utf8'));
var item = jsonfile.hash;
//console.log('item', item.MdotTest.Field);
this.presence.hset("MdotTest", "Field", item);

Added to onDispose:

writeFileSync("temp.txt", JSON.stringify(this.presence));

Now at least the data is persistent even across a server boot/crash/shutdown (not when it crashes while writing ...).

hey @mdotedot, not exactly sure what you're planning to build, but to persist the data across server reboots you can use the RedisPresence alternative. The LocalPresence really doesn't persist any data. Cheers!

Thanks Endel! It appears that RedisPresence needs a Redis server! When using RedisPresence without a running Redis server you get this:

Could not connect to Redis at 127.0.0.1:6379: Connection refused

For running a server this quick start is available: I hate to run different things on the server, since I want deployment by the extension users to be as easy as it can be. Maybe I will revisit this approach when I need persistence.
I've changed the approach to ApplicationIDs and 'just' generate some IDs for the extension users. In the chat I've been told that one of my room-mechanism approaches will fail. We will see where this thing is going to crash ... I'm not promising that I can create a general-purpose server for all kinds of clients. If an approach doesn't work, extension users still have the ability to change the server themselves. Although they probably lack the skills, they can hire you guys to make the server for them :D There will be overhead in the proxy mechanism, but we will see what can be done. Rather than saying that running logic on the client is impossible, I still want to try it, using the Colyseus server as a proxy to the active client. For now I've created a turn-based mechanism on the server that I need to write some clients for. The example 02-state-handler.ts appears to be working correctly with the additions below, but I guess that I need to investigate different types of turn-based games and their server-side logic. These are the steps taken to rotate player activity (modify 02-state-handler.ts). After: something = "This ...
nextPlayer() {
  var doNext = false;
  for (var m in this.players) {
    if (doNext) {
      this.players[m].ActivePlayer = m;
      doNext = false;
    } else {
      if (this.players[m].ActivePlayer != "") {
        this.players[m].ActivePlayer = "";
        doNext = true;
      }
    }
  }
  // check next available player
  if (doNext) {
    // if we need to shift but there are no players after : first player
    for (var m in this.players) {
      if (doNext) {
        this.players[m].ActivePlayer = m;
        doNext = false;
      }
    } // go through all players
  } // check first available player
} // ActivePlayer shift

// Changed createPlayer
createPlayer(id: string, room: Room) {
  this.players[id] = new Player();
  if (room.clients.length == 1) { // first player in room
    this.players[id].ActivePlayer = id;
  }
} // createPlayer

In removePlayer we need to check whether the leaving player is the active player and rotate when necessary:

if (this.players[id].ActivePlayer != "") this.nextPlayer();

movePlayer includes a check on activity. After movePlayer(id: string, movement: any) {:

if (this.players[id].ActivePlayer != id) return; // ignore move when not active

And before the end of the function:

this.nextPlayer(); // after move

Adjusted class Player with the ActivePlayer:

export class Player {
  x = Math.floor(Math.random() * 400);
  y = Math.floor(Math.random() * 400);
  ActivePlayer = "";
}

Adjusted onJoin:

this.state.createPlayer(client.sessionId, this);

When running this example the players can only move when it is their turn. It is a basic example with no validation on the move or whether time expired, etc.

This week I've tried to make a few Stencyl Extension blocks:
- get Player (ID / Name)
- get Room (ID / Name)
- assign Name [..] to [room/player] with id []
- List of Rooms
- List of Players

It took me a while to find out that client.id != player.id. It appears that player is instantiated with room.SessionId. Then it took me a lot longer to get a list of rooms (of the same type). getAvailableRooms seems to show a list of rooms that you can join, not the ones that are full or something like that.
I (also) want rooms with a large number of possible clients, so the client is now always joining the same room, whereas I want the client to be able to create a new room. Apparently we need to use the MatchMaker to get a list of rooms. But I fail to get it to work. It is a server component, but how do I access this from the client when the client isn't in a room? I've also looked at the example on MatchMaker, but it registered the rooms differently than what I was seeing in the 02-state-change example. Source: At last I've hacked something together that appears to be working for now.

globals.ts:

//
// Global data for the server
//
export abstract class Globals {
  private static theRooms = [];

  public static set(index: string, value: string) {
    this.theRooms[index] = value;
  }
  // get, remove as well when room gets disposed
}

This static class can hold 'server-wide' globals as long as you import them in your room code, like:

import { Globals } from "../globals"; // global data from the server

// onInit:
Globals.set("" + this.roomId, "state_handler");

This method allows me to get the list of rooms from other client sessions, and even for other rooms when I want... I'm sure that there is another way to get this done using the MatchMaker, but I couldn't find an example of it.

For a couple of days now I've been making a mess of things :( I have a feeling that I'm doing this the wrong way. It all started with the 02-state-handler.ts. What I conclude is that you have a client that has an ID, and that is constant no matter what room is created or joined. However, each joining or creation of a room is done with a session ID. SessionID != Client.ID. I want the game(s) to be able to get a list of players in the room.

export class RoomData {
  playernames = "";
  roomnames = "";
  roomids = "";
  sessionids = "";
}

export class Player {
  ID = "";
  Name = "";
  x = Math.floor(Math.random() * 400);
  y = Math.floor(Math.random() * 400);
  width = 132;
  height = 132;
}

Currently I use the RoomData class to alter the state of the server.
These RoomData members are being broadcast to the clients. It feels kind of hacky to set these variables to have the room state synchronized. The way that I'm currently setting these values is in client (room.send) and server onMessage communications. This is the current client perspective: New Room: Second Room: Joining from the other room (looks like it works). And second player joining the third (this fails: player seems to be in two rooms ??!?)

Server-side data:

roomName with id: RoomName:3tAeV4aU7 the changeID is pQZlJiSNb the ID without Name: 3tAeV4aU7 the name is Room424
roomName with id: RoomName:MZOaBj7hu the changeID is pQZlJiSNb the ID without Name: MZOaBj7hu the name is Room6959
roomName with id: RoomName:pQZlJiSNb the changeID is pQZlJiSNb the ID without Name: pQZlJiSNb the name is Room7704
roomnames: 3tAeV4aU7=Room424:MZOaBj7hu=Room6959:pQZlJiSNb=Room7704
playername with id: PlayerName:iwIpOcjZs the changeID is pQZlJiSNb the id without name: iwIpOcjZs the name is Player8707
playername with id: PlayerName:j6a-rk8dc the changeID is pQZlJiSNb the id without name: j6a-rk8dc the name is Player4301
playername with id: PlayerName:EsP17dY-m the changeID is pQZlJiSNb the id without name: EsP17dY-m the name is Player1374
playernames: iwIpOcjZs=Player8707:j6a-rk8dc=Player4301:EsP17dY-m=Player1374

As it is fairly hacky stuff I'm hoping that someone has dealt with this before and can shed some light .... When I don't use Client.ID and only show SessionIDs it worked much better, but how can I get a client (name) from a session? The Player class data does not appear to be part of a state. What should I do to get a player list with names?!
Since I couldn't get HaXe Json to work (it doesn't know the format) I used reflection to get the information I need:

// Get the Player ID/Name
var thePlayers:Dynamic = Reflect.field(state, "players");
var sessionIDs:Array<Dynamic> = Std.string(thePlayers).split(": {");
for (sessionID in sessionIDs) {
  var sessionid:String = "";
  if (sessionID.indexOf("{") == 0) sessionid = sessionID.split("{")[1]; // for the first player
  if (sessionID.indexOf("},") > 0) sessionid = sessionID.split("},")[1]; // for all the rest
  if (sessionid == null) sessionid = sessionID;
  var onePlayer:Dynamic = Reflect.field(thePlayers, StringTools.trim(sessionid));
  var ID:String = Reflect.field(onePlayer, "ID");
  if (ID != null && ID.length > 0) trace("playerlist One Player . ID : " + ID);
  var Name:String = Reflect.field(onePlayer, "Name");
  if (Name != null && Name.length > 0) trace("playerlist OnePlayer . Name : " + Name);
} // for all sessionIDs in the room

Now it can get the ID and Name from the Player class! This seems a lot better than the previous approach. I can also now use the availableRooms construction, since I use a method in requestJoin where an option is given when joining a room as opposed to creating a new one... I might need to use the RoomData for the name of the room. But maybe someone else has some opinions about it?!

Client debugging ... Leaving both the current and the joining room with just one room.leave() call .... HTML5 works well; Windows is doing well most of the time, but Mac and Android are having trouble leaving and joining. I suspect that the Windows problems are caused by the same issue, but I couldn't reproduce that reliably. It is the same HaXe client code, due to being a HaXe extension. There are only differences in player-name gathering on HTML versus native. Didn't test iOS/Linux. What I think is happening is that room.leave() is done twice on Mac/Android?!? What can be the cause of this?!
HTML5 debug:

Colyseus.hx:156: JoinRoom to joinID: BWNT-kGd0
Calling Room Leave with room.id: NRL_w4txR
Colyseus.hx:421: leavecounter: 1
room.onLeave RoomID: NRL_w4txR Session ID: XqE4dF-s6
Colyseus.hx:275: (Client: )p6_HgFPtE join room to JoinID : BWNT-kGd0
Good: Colyseus.hx:304: joinRoom : joinCounter: 2
Joined Room BWNT-kGd0 sessionID : VXeeGm4oI

Debug Mac:

Colyseus.hx:156: JoinRoom to joinID: z40eavIgr
Calling Room Leave with room.id: Ge_RICPir
Colyseus.hx:421: leavecounter: 1
room.onLeave RoomID: Ge_RICPir Session ID: f2c1YeU3i
Colyseus.hx:275: (Client: )9EoXglrY9 join room to JoinID : z40eavIgr
Colyseus.hx:421: leavecounter: 2
room.onLeave RoomID: z40eavIgr Session ID: null

Never reached joinRoom!! And it does TWO leave rooms !!! The output on the server side is the same (of course not the IDs and names ...)

HTML5:

Request join : this.roomId: NRL_w4txR options.Type: undefined PlayerName: APlayer3642 OldMax: 1
Number of clients in room: 1 the room id is : NRL_w4txR
Request join : this.roomId: NRL_w4txR options.Type: undefined PlayerName: APlayer7523 OldMax: 100
Request join : this.roomId: NRL_w4txR options.Type: undefined PlayerName: APlayer7523 OldMax: 100
StateHandlerRoom created!
{ PlayerName: 'APlayer7523', RoomName: 'Room5672', requestId: 2, clientId: '3VTAK8HPh', lobby: { rooms: [] } }
Request join : this.roomId: BWNT-kGd0 options.Type: undefined PlayerName: APlayer7523 OldMax: 1
Number of clients in room: 1 the room id is : BWNT-kGd0
Number of clients in room: 0 the room id is : NRL_w4txR
Dispose StateHandlerRoom : remove room.id : NRL_w4txR
Request join : this.roomId: BWNT-kGd0 options.Type: JOIN PlayerName: APlayer3642 OldMax: 100
Number of clients in room: 2 the room id is : BWNT-kGd0
Number of clients in room: 1 the room id is : BWNT-kGd0

MAC:

Request join : this.roomId: Ge_RICPir options.Type: undefined PlayerName: APlayer7255 OldMax: 1
Number of clients in room: 1 the room id is : Ge_RICPir
Request join : this.roomId: Ge_RICPir options.Type: undefined PlayerName: Player5710 OldMax: 100
Request join : this.roomId: Ge_RICPir options.Type: undefined PlayerName: Player5710 OldMax: 100
StateHandlerRoom created!
{ PlayerName: 'Player5710', RoomName: 'Room1021', requestId: 4, clientId: 'vJGYZCiJ5', lobby: { rooms: [] } }
Request join : this.roomId: z40eavIgr options.Type: undefined PlayerName: Player5710 OldMax: 1
Number of clients in room: 1 the room id is : z40eavIgr
Number of clients in room: 0 the room id is : Ge_RICPir
Dispose StateHandlerRoom : remove room.id : Ge_RICPir
Request join : this.roomId: z40eavIgr options.Type: JOIN PlayerName: APlayer7255 OldMax: 100
Number of clients in room: 2 the room id is : z40eavIgr
Number of clients in room: 1 the room id is : z40eavIgr

I even used a timer to call join after the leave so that it does it 5 or 10 seconds later. As soon as I do the join, the leave call is executed !!!

Colyseus : Leaving room

For a couple of weeks now I've been debugging the leaving-room situation. It is kind of driving m.e. nuts. Today I started over, since I was not sure whether my player information or the leaving-room situation was ruining stuff. All seems well with HTML5.
But the CPP builds (Windows and Android) are giving me problems. Most of the time the Windows build performs well, unless I try to do a Windows debug build; then it crashes immediately. Running the Android version gives me problems almost immediately. (I estimate that it is doing leave room twice, so the new room is left immediately.)

trace("input.readBytes catch : " + e);
if (Std.is(e, Error)) trace("isError"); // still OK
/* Output:
input.readBytes catch : Custom(EOF)
isError
*/
// Below line crashes: debug build
//if( (e:Error).match(Error.Custom(Error.Blocked)) ) trace("ErrorCustom"); // Null Pointer Exception
// And since it is used in the needClose detection it fails
needClose = !(e == 'Blocking' || (Std.is(e, Error) && ( (e:Error).match(Error.Custom(Error.Blocked)) || (e:Error).match(Error.Blocked)) ));

So I cannot debug using Visual Studio... But I don't care if the Windows debugger works, as long as the issue can be resolved without it ...

This is the code that works:

public static function joinRoom(ID:String) {
  if (isInit) {
    var theOptions:Map<String, Dynamic> = new Map();
    theOptions.set("Type", "JOIN");
    theOptions.set("PlayerName", "TESTPLAYER");
    //if (room != null) room.leave();
    room = client.join("" + ID, theOptions);
  }
} // joinRoom

public static function createRoom(RoomType:String) {
  if (isInit) {
    var theOptions:Map<String, Dynamic> = new Map();
    theOptions.set("Type", "NEW");
    theOptions.set("PlayerName", "TESTPLAYER");
    //if (room != null) room.leave();
    room = client.join(RoomType, theOptions);
  }
}

But when I uncomment the room.leave so that it actually leaves the current room, Android gives me problems, and sometimes Windows crashes or gives the same problem (0 clients in the room and then the room will be auto-disposed). Any suggestions are appreciated! By the way: if I don't use room.leave and use the Colyseus monitor to dispose the rooms, it works fine(!)
Also, I tried using a timer after the room.leave to allow the engine to dispose of the room, but it looks like leave is called when I initiate another room. I also tried using an array of rooms where I leave room[roomCounter], but this acts like it is also leaving the initiated room.

Colyseus leaving room, part 3

There hasn't been any client-side communication made yet, and I have worked for over a month on this room-leaving situation. I really hope this isn't the way that all the other stuff is going to turn out :( Sending 'LEAVE' to the server and letting the server disconnect the client is also not reliable!

Modified SocketSys.hx to avoid the crash on Windows:

// Do not use (e:Error).match(Error.Custom(Error.Blocked)) ||
needClose = !(e == 'Blocking' || (Std.is(e, Error) && ( (e:Error).match(Error.Blocked)) ));

New direction: plan for failure! Since I cannot do a reliable leave room, and since there can be other situations that lead to errors, I decided that the client/game needs to react to the errors. This led to changes to the Colyseus HaXe code, since some errors (websocket and connection) aren't passed to Room.onError.

Connection.hx:

// This does not throw onError on room .. this message cannot be detected
this.ws.onerror = function(message) {
  this.onError(message); // this isn't caught in Room.onError !!!!
  // Stencyl notification (import com.stencyl.behavior.Script.*;)
  if (getGameAttribute("ColyseusErrorMessage") == null) setGameAttribute("ColyseusErrorMessage", "");
  setGameAttribute("ColyseusErrorMessage", "" + getGameAttribute("ColyseusErrorMessage") + " : " + message); // M.E.
}

Unfortunately the Mac OSX build still crashes sometimes:

[My Game] 2019-02-25 00:36:22.9, finalFileSize) failed on line 797: Bad file descriptor
[My Game] 2019-02-25 00:36:22.9, fileLength) failed on line 868: Bad file descriptor
DEBUG [Thread-17] stencyl.sw.util.net.SocketServer: _disconnected: Socket[addr=/127.0.0.1,port=49807,localport=18525]

As does the Android build, for which I am unable to get logs. But it happens less than before. There are also situations where rooms are left with 0 clients in them, even when the application/game is killed, and even in the Colyseus monitor. Now it is time to work on the room information, like a player list. Hopefully this does not take much time and doesn't interfere with the current workaround. Still, if anyone has any tips, experiences or possible things I can try relating to the (e:Error).match(Error.Custom(Error.Blocked)) || or the Mac OSX "Bad file descriptor", I'd very much like to know!!

hey @mdotedot, are you going to open-source your Stencyl extension? I'd like to check the errors you're having, calling .leave() really shouldn't crash the application. Cheers!

If all goes well the Stencyl Extension will become available to the Stencyl community. It would be awesome if you could help me with the crash code. Maybe you know of an alternative way to do the match(Error.Custom(Error.Blocked)) line.

@mdotedot would you mind sharing your extension on GitHub? if you don't wanna make it public for now, you can invite me privately first maybe.
Cheers

Steps to reproduce:

Followed the instructions to install NPM. Download and run: \users\Administrator\Downloads\node-v10.15.1-x64.msi

set PATH=c:\"Program Files"\nodejs;%HAXE_PATH%;%NEKOPATH%;%PATH%
node -v
npm install typescript

Download the colyseus examples (Clone/Download ZIP), extract in C:\:

cd \colyseus-examples-master
npm install
npm run bundle-colyseus-client
npm start

Now for the HaXe part. Download haxe-3.4.4-win65.exe and let it create the default c:\HaxeToolkit. Setting the DOS environment:

set HAXE_PATH=c:\HaxeToolkit\haxe
set NEKOPATH=c:\HaxeToolkit\neko
set PATH=%HAXE_PATH%;%NEKOPATH%;%PATH%
haxelib setup c:\HaxeToolkit\haxe\lib
haxelib --global update haxelib
haxelib install openfl
haxelib run openfl setup
haxelib run lime setup
lime create HelloWorld
cd HelloWorld
lime test html5
lime setup windows

Download Visual Studio (vs_community__956414624.1551197993.exe). Choose: Desktop development with C++ AND click C++/CLI support.

lime test windows

The installed 4.0.8 hxcpp apparently does not contain the run script, so set it back to the original hxcpp:

haxelib set hxcpp hxcpp

When you get 'Error : Could not process asset libraries', do:

openfl rebuild tools

COLYSEUS

Download the ZIP from GitHub and extract the archive to c:\HaxeToolkit so that project.xml is in c:\HaxeToolkit\colyseus-hx-master. Modify the Main.hx to make sure that the localhost line is active and the "ws://colyseus-examples.herokuapp.com" line is commented out:

this.client = new Client("ws://localhost:2567");

Build the NyanCat example:

c:> cd \HaxeToolkit\colyseus-hx-master\example\NyanCat
C:\HaxeToolkit\colyseus-hx-master\example\NyanCat>lime build project.xml html5

Error: Could not find haxelib "haxe-ws", does it need to be installed?
Install haxe-ws 1.0.5:

haxelib install haxe-ws
lime test html5

It should now produce the NyanCat window where you can move around with the cursor keys.

lime build project.xml windows
lime test windows

LEAVING ROOM crash:

Now we change the Main.hx to include leaving the room:

this.client = new Client("ws://localhost:2567");
//this.client = new Client("ws://colyseus-examples.herokuapp.com");
this.room = this.client.join("state_handler");
var timer = new haxe.Timer(5000);
timer.run = function() {
    trace(" Leave Room ");
    this.room.leave();
    var timer2 = new haxe.Timer(2000);
    timer2.run = function() {
        timer2.stop();
        trace(" Create new room ... ");
        this.room = this.client.join("state_handler");
    };
};

Build and lime test the windows version. When you see CLIENT: ERROR you should stop the program and start it again. Repeat using lime test windows. After a while (approximately < 10 minutes) it will crash.

After two weeks of further debugging I've now come up with a workaround so that it does not crash. These are the modifications that I needed to make to avoid null pointer crashes:

SocketSys.hx (haxe/net/impl/SocketSys.hx):

// M.E. the match error.custom(error.blocked) caused null pointer crash
/*
needClose = !(e == 'Blocking' || (Std.is(e, Error) && (
    (e:Error).match(Error.Custom(Error.Blocked)) ||
    (e:Error).match(Error.Blocked)) ));
*/
if ((""+e).indexOf("Block") < 0 && (""+e).indexOf("Custom(EOF)") < 0) {
    trace('closing socket because of $e');
    needClose = true;
}

And further down, I don't close the socket, which caused blocking and crashing things on Android:

if (needClose) {
    // close(); // M.E. This will cause Android problems
}

Room.hx: avoid a null pointer exception:

private function patch(binaryPatch: Bytes) {
    // apply patch
    // M.E. Check for null on binaryPatch
    if (binaryPatch == null || this == null || this._previousState == null) {
        trace("BINARY PATCH IS NULL or this._previousState == null !");
    } else {
        this._previousState = FossilDelta.Apply(this._previousState, binaryPatch);
        // trigger state callbacks
        this.set(MsgPack.decode(this._previousState));
        this.onStateChange(this.state);
    }
}

FossilDelta.hx, in Apply, to avoid a null pointer exception:

var total = 0;
// M.E. Check for null on src
if (src == null) return haxe.io.Bytes.alloc(0);
if (delta == null) return haxe.io.Bytes.alloc(0);
// M.E. End of modification

And I needed to inform Stencyl about errors in Room.onError, or the extension would still use room functions.

Room.hx connect function:

this.connection.onError = function(e) {
    var message = "Possible causes: room's onAuth() failed or maxClients has been reached.";
    if (getGameAttribute("ColyseusErrorMessage") == null) setGameAttribute("ColyseusErrorMessage", "");
    setGameAttribute("ColyseusErrorMessage", "" + getGameAttribute("ColyseusErrorMessage") + " " + message);
    trace(message);
    this.onError(e);
};

In the application I created an onError routine, and that would be hit sometimes, but then it will do a new client instance and it starts creating rooms again. So we don't close the socket at the haxe.net level, but detect problems in Room and Client at the Colyseus level. Hopefully these changes don't bite me later. I really want to continue working on other methods than the Lobby mechanism!

Mac OSX, iOS Simulator, iPhone 5, Windows and Android all work without crashes when creating and leaving rooms now. I ran a demo game that left and created rooms every 10 seconds, and they kept working for an hour. Android crashed on one device after an hour; it had done over 100 room creations by that time. I need to keep that in mind when doing more stuff later on and see if it reoccurs. (HTML5 never gave problems, as it didn't use haxe.net.)

This week I've made progress on a simple room where players can be joined and walk around.
I made it so that players would be moved simultaneously, to see if the server could handle it. It didn't quite run well with 2 or more browser sessions at the same time. Also, a Windows publication didn't fare well. Yesterday I found out that the CPU on Windows and memory on HTML5 were draining. So today I made a simple version of my NyanCat project. I had it running on both HTML5 and Windows, and it had the same effect. Then I stripped Colyseus from the NyanCat so that only the image was left with controlling keys, and there was no problem with CPU.

With Colyseus: [screenshot] And the non-Colyseus version: [screenshot]

So the Haxe client is consuming a lot of CPU! At least it wasn't my client implementation, since both have the same problem. I guess I need multiple computers from now on to do tests....

The last days I've worked on refactoring code and CPU investigations.

Connection.hx modifications to avoid CPU consumption:

#if sys
while (!this.isClosed) { // M.E. After 200 reconnections the CPU was gone to 92%
    this.ws.process();
    Sys.sleep(0.005); // M.E. Avoid CPU consumption by introducing a really small sleep
}
#end

public function close() {
    this.isClosed = true; // M.E. added for thread checking
    this.ws.close();
}

Perhaps the this._enqueuedSend mechanism can be altered as well to check for the isClosed state?! But I'm not sure why the enqueued mechanism needs this... Running more than 200 client initialization tests (and client.close-ing them), the CPU consumption kept relatively steady. Now the Windows executable doesn't take much of the CPU. What remains is JavaScript, which consumes a lot of memory. But this couldn't easily be reproduced on the HaxeKit version, so it might be something related to other parts of HaXe and Stencyl. I was able to switch rooms on Android, Windows and HTML5, though. I need to test the other targets with the latest build to make sure I didn't break anything for them.
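The essence of the Connection.hx fix generalizes beyond Haxe: never spin a receive loop flat-out; sleep a few milliseconds per iteration, and check a closed flag so the thread can exit. A sketch of that shape in Python (the ws object and its methods are illustrative stand-ins, not the Colyseus API):

```python
import time

def pump(ws, sleep_s=0.005):
    """Drive a socket's message loop without pegging a core.

    A bare `while: ws.process()` loop burns CPU (the post above measured
    ~92% after ~200 reconnections); a ~5 ms sleep per iteration keeps
    latency low while letting the scheduler breathe. The `closed` flag,
    set by close(), lets the loop terminate cleanly instead of spinning
    forever on a dead connection.
    """
    while not ws.closed:
        ws.process()
        time.sleep(sleep_s)
```

The same two ingredients (a tiny sleep plus a loop-exit flag checked every iteration) are what keep both the CPU usage and the thread lifetime under control.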
Current state:
- Initialize Colyseus (tested 200x re-initializations)
- Create Room (Lobby, State_Handler): also done together with the 200x initializations
- Get room list (getAvailableRooms)
- Join Room
- Leave Room
- Player list
- Send Data
- Receive Data

TODO:
- Investigate JavaScript memory consumption
- Publication to all Stencyl targets
- Ping / latency mechanism
- Broadcast investigations
- onAuth investigations
- Different Room Types: RawData, TurnBased, LockData, SimplePhysics, Chat

The JavaScript memory consumption was a Stencyl bug. It has to do with drawing stuff, which I obviously did in my test application. Filed a PR on the Stencyl Forum for this.

Publications: iPhone Simulator, iPhone (should be fine on iPad as well), Mac OSX, Linux, Windows, Android (tablet + phone), HTML5.

So all is fine to continue working on the RoomTypes.

Hello! Do you use haxe only for client code, or have you created externs for the server as well?

Hello Serjek, Currently I'm working on a Stencyl Extension which uses (modified) HaXe Colyseus Client code. The server is still running on TypeScript. A HaXe server would be awesome to have, but I currently have my hands full with client code! Kind regards from M.E.
https://discuss.colyseus.io/topic/197/stencyl-haxe-extension/4
Releasing Five Products in 2021, Part 1: The Wheel Screener

Insights from the frontlines of a SaaS product release: an advanced cash secured put (CSP) and covered call (CC) options screener.

This post is mirrored on my Full Stack Blog. This "Products of 2021" series will be a total of six posts. The first is the introduction to the series itself. The five product links will be updated throughout 2021 as I release the products to the world. These links will be pinned to the top of each post in the series.

- Introduction and Overview Post
- Product 1: The Wheel Screener (This Post!)
- Product 2: ReduxPlate (Details Coming Soon!)
- Product 3: Mail Your Rep (Details Coming Soon!)
- Product 4: ??? (Undetermined)
- Product 5: Five Grand Challenges for the Next Five Decades: A Novel (Stretch goal to try and reach by the end of 2021)

These product posts will all have the same format for readability. They will always have the same three sections:

1. 'Product Overview', where I describe the product itself.
2. 'Key Takeaways From Launch', where I discuss everything I've learned from before and after launch.
3. 'Next Steps', where I mention what I am planning to develop further for the product.

Greetings!

Hi everyone! You may recognize me from other full stack posts I've published here in The Startup relating to specific software challenges, some of which include:

- "Extending React Standard Types to Allow for Children as a Function": Sorting or filtering child components? You've come to the right place! (medium.com)
- "C# .NET Core and TypeScript: Using Generics and LINQ to Secure and Filter Operations on Your…": Full stack: React with TypeScript frontend, and .NET on the backend! (medium.com)
- "Magento 2 IP Location Detection (GeoIP) and Store Context Control Using the ipstack API": A 20 liner solution to get you started with international stores! (medium.com)
- "Introducing the Full Stack Typing Boilerplate: Once You ORM, You Can't Go Back!" (levelup.gitconnected.com)
Today, I'm proud to say that I'm able to post here with an actual product of mine, and even prouder to say that it's my first ever successful and profitable SaaS product!* If you've liked my other code-based posts, I hope you'll read this one, and that you'll gain some insights into the other side of product development: marketing and the real-world product launch.

*Stay tuned, this post here talks about just the first of potentially five products I want to release in 2021!

Product Overview

The Wheel Screener is a market-wide options screener. Currently, it focuses on both cash secured puts (CSPs) and covered calls (CCs). Each day before the market opens, I retrieve all options contracts across the entire market. I then run calculations on them, filtering out options by certain criteria and scoring the remaining by their estimated return (reward to risk ratio) and their probability of profit, as well as a few other select metrics. Currently, The Wheel Screener has over 100 free subscribers, and 20–30 premium subscribers! The premium subscription is $5/month.

Key Takeaways From Launch

1. Customer Value is Everything — Otherwise, What Else is Your Product For?

You want a profitable SaaS product? Then release your product. Listen to and implement your customers' feedback. Repeat. You don't need super fancy styles. You don't need ultra-clean code. You don't need a giant email list. While these things are important in the long run, the most important thing about your product is the straight-up value it provides to the customer. I mean, imagine designing any product where the value it delivers to its customers isn't the top priority. Then it's at best a hobby project, or at worst a bad product. If customer value isn't the top priority of your product, you should ask yourself: why are you building that product?
If customers are happy and getting value out of your product, they will see things like styles and branding as an added benefit or afterthought at most, and they certainly won't see whatever code or framework you are running behind the scenes! At the beginning of the days I have blocked off to work on The Wheel Screener, I ask myself: What feature or features have been asked about the most? What features will provide the most value? And then I build those. I don't worry too much about using the newest tool, the newest software pattern, or the cleanest possible code in the whole universe to implement it. My concern with the software portion of the project extends only to whether the feature or solution is sustainable and maintainable into the future. Yes, "clean enough" is a thing — and there will always be TODOs no matter what feature you build. That's why they're called TODOs — you can implement them at any later date.

2. Customers and Early Adopters are Awesome

I knowingly released The Wheel Screener well before it was a tried-and-true alpha product — it was really more like a beta release to the public — but I didn't want to fall into the infinite loop of "let me just implement this feature, then I'll release it", which has plagued me far too often with other projects. No, I wanted to get it out there for product validation because, as I'll discuss below, true believers or customers of your product won't mind a few bugs here and there, especially when you are clear that it is a solo project.

To this point's title, the early adopters have been awesome. All of them have been understanding, and many have pointed out sneaky bugs that I was able to fix only with their help! I was truly worried that paying customers might get angry and unsubscribe forever after finding just one bug. But so far, I have yet to see that happen. So, all you subscribing customers out there, you are awesome, and thanks for all the help so far!
I hope I can return the favor to the best of my abilities with some awesome features I have planned!

3. Google Ads are Expensive

Currently, I have a small Google Ad campaign running for The Wheel Screener. I've yet to do an in-depth analysis of its performance; perhaps it will be a blog post to come. Compared to the impressions that Instagram cites, these Google Ads are relatively more expensive (in terms of cost per click or impression). However, I haven't done a side-by-side comparison; perhaps that is also a blog post to come.

4. Marketing is Difficult, Or At Least Not Free

Though I've been hearing it for years, it still seems like proper marketing is a major pain point for indie hackers, solo entrepreneurs, and SaaS folks in general. I think the hard part is finding marketing without diving into the whole advertisement universe; people want to find a way to do it for free, or at least at very low cost. Perhaps there really is no good free marketing out there? I'm not an expert on this, and time will tell. I'm hoping to improve and build my marketing skills in parallel throughout this year, to figure out how to best market products efficiently.

5. Beware of Subreddit Rules!

My success so far with The Wheel Screener has come at a sad cost to my account on Reddit. My warning to future founders, product launchers, and marketers is this: beware of 'self-promotion posts' on various subreddits! You may get permanently banned for self-promotion even if your product has a totally free part, and even if that free part is totally generous! If you have any reference to a payment plan or revenue model, your post may be deleted and you may be banned! I faced this sad fate in two communities I've loved in the past, both /r/algotrading and /r/wallstreetbets, and it doesn't look like either of those rulings will be reversed 😞.
Furthermore, I was looking at using Reddit's ad platform, and it seems like those massive communities like /r/wallstreetbets aren't available for advertising? Does anyone have insight into this? I would love to learn why certain subreddits don't seem to appear in Reddit's ad platform.

6. A Staging or Testing Environment is Essential

This one is rather painful, but is really a requirement for anybody serious about releasing a SaaS product. Alongside the production site of The Wheel Screener, I have a staging site for it: staging.wheelscreener.com. All the backend is in a staging environment as well: Stripe is in its test mode, PayPal is in its Sandbox mode, and even the PostgreSQL database has its own staging version of the database. Last week when I released a flurry of new features, I uncovered numerous bugs that were only revealed in such a "production-like" but "not quite production" environment; I didn't see them on my development machine. I guarantee this is true for any SaaS product that anyone wishes to build, and so a real staging/test site is essential to catch these bugs before they move into the production product.

I do have a Bitbucket Pipelines course which gets into this develop-staging-master chain as part of teaching the greater Bitbucket Pipelines environment, and you can also read my blog post on Bitbucket Pipelines as a brief overview of everything covered in that course. I plan to eventually release a course on how to set up fully automated dynamic staging and master environments, complete with separate databases, API keys, and so on, likely with a .NET backend and a React TypeScript frontend. If you do some careful configuration and scripting, the only work you should need to do as a dev is merge your code to staging or master for the environment to configure and build itself. It is essential to learn how to do this if you are an indie hacker, maker, or solo founder.
It saves an infinite amount of time and can catch an infinite amount of bugs.

Next Steps For This Product

I've got something really big planned for The Wheel Screener, with goals to make it a major player in the (unfortunately?) limited niche of options trading tools. What I have planned specifically may be the first of its kind ever in terms of e-brokerage tools. 😉 In short, it's gonna be awesome! 🚀

While this first post in my "Products of 2021" series has come to an end, I encourage you to bookmark The Wheel Screener's Blog; unfortunately, at the moment, bookmarking and occasionally checking the blog is the only way to find updates there. I'm still working on implementing a real email subscription there. (Remember what I said about customer value vs. building certain features? 😄)

Until next time!

Cheers 🍺

-Chris
https://chrisfrewin.medium.com/releasing-five-products-in-2021-part-1-the-wheel-screener-d137f4411da?source=post_internal_links---------5----------------------------
Here is a listing of C++ interview questions on "Integer Types" along with answers, explanations and/or solutions:

1. The size_t integer type in C++ is?
a) Unsigned integer of at least 64 bits
b) Signed integer of at least 16 bits
c) Unsigned integer of at least 16 bits
d) Signed integer of at least 64 bits
Explanation: The size_t type is used to represent the size of an object. Hence, it's always unsigned. According to the language specification, it is at least 16 bits.

2. What is the output of the following program?

#include <iostream>
using namespace std;
int main()
{
    int x = -1;
    unsigned int y = 2;
    if (x > y) {
        cout << "x is greater";
    } else {
        cout << "y is greater";
    }
}

a) x is greater
b) y is greater
c) implementation defined
d) arbitrary
Explanation: x is promoted to unsigned int on comparison. On conversion, x has all bits set, making it the bigger one.

3. Which of these expressions will return true if the input integer v is a power of two?
a) (v | (v + 1)) == 0;
b) (~v & (v - 1)) == 0;
c) (v | (v - 1)) == 0;
d) (v & (v - 1)) == 0;
Explanation: Power-of-two integers have a single set bit followed by unset bits.

4. What is the value of the following 8-bit integer after all statements are executed?

int x = 1;
x = x << 7;
x = x >> 7;

a) 1
b) -1
c) 127
d) Implementation defined
Explanation: Right shift of a negative signed integer has implementation-defined behaviour.

5. Which of these expressions will make the rightmost set bit zero in an input integer x?
a) x = x | (x-1)
b) x = x & (x-1)
c) x = x | (x+1)
d) x = x & (x+1)
Explanation: None.

6. Which of these expressions will isolate the rightmost set bit?
a) x = x & (~x)
b) x = x ^ (~x)
c) x = x & (-x)
d) x = x ^ (-x)
Explanation: None.

7. 0946, 786427373824, 'x' and 0X2f are _____, _____, _____ and _____ literals respectively.
a) decimal, character, octal, hexadecimal
b) octal, hexadecimal, character, decimal
c) hexadecimal, octal, decimal, character
d) octal, decimal, character, hexadecimal
Explanation: Literal integer constants that begin with 0x or 0X are interpreted as hexadecimal, and the ones that begin with 0 as octal. Character literals are written within ' '.

8. What will be the output of this program?

#include <iostream>
using namespace std;
int main()
{
    int a = 8;
    cout << "ANDing integer 'a' with 'true' :" << a && true;
    return 0;
}

a) ANDing integer 'a' with 'true' :8
b) ANDing integer 'a' with 'true' :0
c) ANDing integer 'a' with 'true' :1
d) None of the mentioned
Explanation: None.

9. What will be the output of this program?

#include <iostream>
using namespace std;
int main()
{
    int i = 3;
    int l = i / -2;
    int k = i % -2;
    cout << l << k;
    return 0;
}

a) compile time error
b) -1 1
c) 1 -1
d) implementation defined
Explanation: The sign of the result of a mod operation on negative numbers is the sign of the dividend.

10. What will be the output of this function?

int main()
{
    register int i = 1;
    int *ptr = &i;
    cout << *ptr;
    return 0;
}

a) 0
b) 1
c) Compiler error may be possible
d) Runtime error may be possible
Explanation: Using & on a register variable may be invalid, since the compiler may store the variable in a register, and finding the address of it is illegal.

Sanfoundry Global Education & Learning Series – C++ Programming Language. Here's the list of Best Reference Books in C++ Programming Language. To practice all features of the C++ programming language, here is a complete set of 1000+ Multiple Choice Questions and Answers on C++.
https://www.sanfoundry.com/interview-questions-answers-c-plus-plus-integer-types/
#include "Cosa/Pins.hh"
#include "Cosa/Event.hh"
#include "Cosa/Driver/NEXA.hh"
#include "Cosa/RTC.hh"
#include "Cosa/Watchdog.hh"

class LED : public NEXA::Receiver::Device {
private:
  uint8_t m_pin;
public:
  LED(uint8_t pin) : NEXA::Receiver::Device(0L), m_pin(pin)
  {
    pinMode(m_pin, OUTPUT);
  }

  virtual void on_event(uint8_t type, uint16_t value)
  {
    digitalWrite(m_pin, value);
  }
};

NEXA::Receiver receiver(Board::EXT0);
LED device(LED_BUILTIN);

void setup()
{
  RTC::begin();
  Watchdog::begin();
  NEXA::code_t cmd;
  receiver.recv(cmd);
  device.set_key(cmd);
  receiver.attach(&device);
  receiver.enable();
}

void loop()
{
  Event event;
  Event::queue.await(&event);
  event.dispatch();
}

#define LED 9 // LED on digital pin 9

I decided to try using Cosa with a couple NRF24L01+ I had lying around and got them communicating fine using the ping-pong example and the client-server example. However, both examples stop receiving when I go out of range and don't resume communications when I get in range again. Is there an easy way to modify any of these examples to provide a range/coverage testing application that tells me when packets are lost and when they reach the destination?

0xc052:147:sample:21.43:21.43:3.831
0xc052:148:sample:21.43:21.43:3.831
0xc052:149:sample:21.43:21.43:3.831
0xc052:150:sample:21.43:21.43:3.831
0xc052:152:sample:21.43:21.43:3.831
Another thing that would be awesome is to have dynamic mesh network support for nRF24L01+. Regarding RF433 I have had the feeling that nRF24 is more reliable and has slightly better range (without testing myself). And since they cost the same ($1.2) on ebay I just never ordered any RF433. Also the nRF24 doesn't need two whip antennas..I'm all for cheap hardware but when you can get a Arduino Pro Mini for $3.75 on ebay it almost seems like a bit of a waste of time to try and make all of Cosa work on a ATtiny.. Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=150299.msg1364109
Hello, I'm trying to approach DLLs. Tried some tutorials and got into 2 problems:

1. Using rundll32 from the command line is not working as expected. I write in cmd: "rundll32 mydll.dll MsgBox -32". It shows the MessageBox with text "15991798", changing every time I run it.

2. Calling a DLL function from another program gives an error and closes: "dllUser.exe has stopped working. A problem caused the program to stop working correctly. Please close the program.". In the console: "Process returned -1073741819 (0xC0000005) execution time : 70.894 s Press any key to continue."

Now here is my DLL function:

extern "C" int DLL_EXPORT MsgBox(int someValue)
{
    //cout << "Value is : " << someValue << endl;
    string intStr;
    stringstream ss;
    ss << someValue;
    intStr = ss.str();
    MessageBox(0, std::string(intStr).c_str(), "DLL Message", MB_OK | MB_ICONINFORMATION);
    return someValue;
}

and this is how I call it from dllUser.exe:

#include <iostream>
#include <windows.h>
using namespace std;

typedef int (*MsgFunction)(int);
HINSTANCE hinstDLL;

int main()
{
    MsgFunction MsgBox(0);
    hinstDLL = LoadLibrary("MyDLL.dll");
    if (hinstDLL != 0)
    {
        MsgBox = (MsgFunction) GetProcAddress(hinstDLL, "MsgFunction");
    }
    if (hinstDLL == 0)
        cout << "NULL MsgBox\n";
    int x = MsgBox(5);
    if (x == 5)
    {
        cout << "Message!!\n";
    }
    string s = "Works!";
    MessageBoxA(NULL, s.c_str(), "Title", MB_OK);
    FreeLibrary(hinstDLL);
    return 0;
}

I may miss something here, so please bring some light into my head :)
https://www.daniweb.com/programming/software-development/threads/471729/please-help-with-dll-2-problems
The AIDL language is loosely based on the Java language. Files specify an interface contract and various data types and constants used in this contract.

Package

Every AIDL file starts with an optional package that corresponds to the package names in various backends. A package declaration looks like this:

package my.package;

Similar to Java, AIDL files must be in a folder structure matching their package. Files with package my.package must be in the folder my/package/.

Types

In AIDL files, there are many places where types can be specified. For an exact list of types that are supported in the AIDL language, see AIDL backends types.

Annotations

Several parts of the AIDL language support annotations. For a list of annotations and where they can be applied, see AIDL annotations.

Imports

To use types defined in other interfaces, you must first add dependencies in the build system. In cc_* and java_* Soong modules, where .aidl files are used directly under srcs in Android platform builds, you can add directories using the field aidl: { include_dirs: ... }. For imports using aidl_interface, see here.

An import looks like this:

import some.package.Foo; // explicit import

When importing a type in the same package, the package can be omitted. Though, omitting the package can lead to ambiguous import errors when types are specified without a package and put in the global namespace (generally all types should be namespaced):

import Foo; // same as my.package.Foo

Defining Types

AIDL files generally define types which are used as an interface.

Interfaces

Here is an example AIDL interface:

interface ITeleport {
    void teleport(Location baz, float speed);
    String getName();
}

An interface defines an object with a series of methods. Methods can either be oneway (oneway void doFoo()) or synchronous. If an interface is defined as oneway (oneway interface ITeleport {...}), then all methods in it are implicitly oneway.
Oneway methods are dispatched asynchronously and cannot return a result. Oneway methods from the same thread to the same binder are also guaranteed to execute serially (though potentially on different threads). For a discussion of how to set up threads, see AIDL backends thread management.

Methods can have zero or more arguments. Arguments to methods can be in, out, or inout. For a discussion of how this affects argument types, see AIDL backends directionality.

Parcelables

For a description of how to create backend-specific parcelables, see AIDL backends custom parcelables.

Android 10 and higher support parcelable declarations directly in AIDL. For example:

package my.package;
import my.package.Boo;

parcelable Baz {
    @utf8InCpp String name = "baz";
    Boo boo;
}

Unions

Android 12 and higher support union declarations. For example:

package my.package;
import my.package.FooSettings;
import my.package.BarSettings;

union Settings {
    FooSettings fooSettings;
    BarSettings barSettings;
    @utf8InCpp String str;
    int number;
}

Enums

Android 11 and higher support enum declarations. For example:

package my.package;

enum Boo {
    A = 1 * 4,
    B = 3,
}

Nested Type Declarations

Android 13 and higher support nested type declarations. For example:

package my.package;
import my.package.Baz;

interface IFoo {
    void doFoo(Baz.Nested nested); // defined in my/package/Baz.aidl
    void doBar(Bar bar);           // defined below

    parcelable Bar { ... }         // nested type definition
}

Constants

Custom AIDL interfaces, parcelables, and unions can also contain integer and string constants, such as:

const @utf8InCpp String HAPPY = ":)";
const String SAD = ":(";
const byte BYTE_ME = 1;
const int ANSWER = 6 * 7;

Constant Expressions

AIDL constants, array sizes, and enumerators can be specified using constant expressions. Expressions can use parentheses to nest operations. Constant expression values can be used with integral or float values.

true and false literals represent boolean values. Values with a . but without a suffix, such as 3.8, are considered to be double values. Float values have the f suffix, such as 2.4f. An integral value with the l or L suffix indicates a 64-bit long value. Otherwise, integral values get the smallest value-preserving signed type between 8-bit (byte), 32-bit (int), and 64-bit (long). So 256 is considered to be an int, but 255 + 1 overflows to be the byte 0.

Hex values, such as 0x3, are first interpreted as the smallest value-preserving unsigned type between 32-bit and 64-bit and then reinterpreted as unsigned values. So, 0xffffffff has the int value -1. Starting in Android 13, the suffix u8 can be added to constants, such as 3u8, to represent a byte value. This suffix is important so that a calculation, such as 0xffu8 * 3, is interpreted as -3 with type byte whereas 0xff * 3 is 765 with type int.

Supported operators have C++ and Java semantics. In order from lowest to highest precedence, binary operators are || && | ^ & == != < > <= >= << >> + - * / %. Unary operators are + - ! ~.
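Putting these rules together, a sketch of how constant expressions combine in a declaration (the parcelable and constant names here are illustrative, not from the platform):

```aidl
parcelable ConstExprDemo {
    // Hex is first read as unsigned, then reinterpreted: NEG_ONE is the int -1.
    const int NEG_ONE = 0xffffffff;
    // Parentheses nest operations; * and % follow C++/Java precedence: value 2.
    const int ANSWER = (6 * 7) % 10;
    // Since Android 13, the u8 suffix marks byte arithmetic: this is the byte -3.
    const byte NEG_THREE = 0xffu8 * 3;
    const String GREETING = ":)";
}
```

Each value shown follows directly from the typing rules above: the hex reinterpretation, the value-preserving integral types, and the byte semantics of the u8 suffix.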
https://source.android.com/docs/core/architecture/aidl/aidl-language?hl=nb
Glue is a project I've been working on to interactively visualize multidimensional datasets in Python. The goal of Glue is to make it trivially easy to identify features and trends in data, to inform followup analysis. This notebook shows an example of using Glue to explore crime statistics collected by the FBI (see this notebook for the scraping code). Because Glue is an interactive tool, I've included a screencast showing the analysis in action. All of the plots in this notebook were made with Glue, and then exported to plotly (see the bottom of this page for details).

from plotly.tools import embed
from IPython.display import VimeoVideo, HTML
from glue import qglue
import pandas as pd

Glue is an application that sits on top of matplotlib, and lets you interactively build standard statistical graphics like scatter plots, histograms, and images. However, all of these plots are "brushable" -- you can select a region on any plot, and that region is used to define a data filter. These filters are automatically displayed across all plots, making it easy to isolate subtle features and put them in the context of the rest of the dataset. Getting dataframes into Glue is pretty easy:

states = pd.read_csv('state_crime.csv')
qglue(states=states)

DataCollection (1 data set)
0: states

This cell will load this dataframe into Glue and bring up the user interface. Here's a screencast showing what the subsequent exploration might look like:

VimeoVideo('97436621', width=700)

Here's one of the simplest views of the dataset you can make: the murder rate (all rates in the dataset are annual rates per 100,000 people) as a function of time, for all states.

embed('ChrisBeaumont', 36)

There is an obvious set of outlier points with high murder rates -- what's going on there? Glue is really great at isolating outliers, and putting them in context. For example, we can select these points to highlight them, and look at another slice of the data -- Murder rate vs state.
```python
embed('ChrisBeaumont', 37)
```

All of these points belong to a single "state" -- Washington, D.C. Now, D.C. is an outlier for one obvious reason -- it's a single urban area, and thus should really be compared to other cities. Still, this murder rate is remarkably high. Furthermore, it has an interesting time dependence -- the 90s were a terrible decade for crime in D.C., when it earned the nickname of the "Murder Capital of the United States." It turns out there is an entire Wikipedia page about crime rates in D.C. The high murder rates were driven by the spread of crack cocaine, combined with an affluent exodus out of the city and into the suburbs. Since the 90s, gentrification and economic projects have pushed murder rates back down.

Glue's basic workflow of brushing and inter-comparing several plots is surprisingly versatile. For example, one way to tease out the crime trends of each state over time is to color the first and last year of data. Here are some plots that do that, to examine the rate of rape in each state.

```python
embed('ChrisBeaumont', 38)
```

The trends for murder and rape are quite a bit different -- notice that, while murder rates have slowly declined over the past 50 years, rates of sexual assault have increased. Interpreting sexual assault statistics, it turns out, is tricky business. Because of rape's social stigma, it is one of the most underreported crimes. Furthermore, that stigma has decreased somewhat over time as society has become more feminist. Thus, the increase in these plots might be driven more by higher rates of reporting rape than by higher rates of rape itself. The National Crime Victimization Survey (which is based on surveys rather than reported crimes) reports that the victimization rate from rape has actually decreased by 85% since the 80s.
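As an aside, a Glue "brush" like the D.C. selection above boils down to a boolean mask over the table. Here is a minimal pandas sketch of that idea, using a toy stand-in for the FBI table (the State/Year/Murder column names are taken from the analysis; the numbers are invented):

```python
import pandas as pd

# Toy stand-in for state_crime.csv; rates are made up for illustration.
states = pd.DataFrame({
    'State':  ['Alabama', 'District of Columbia', 'District of Columbia', 'Vermont'],
    'Year':   [1995, 1991, 1995, 1995],
    'Murder': [11.2, 80.6, 65.0, 1.0],
})

# The "brush": select the high-murder-rate region of the scatter plot.
selection = states['Murder'] > 30

# Ask which states the selected points belong to.
print(states.loc[selection, 'State'].unique())
```

Glue manages exactly this kind of filter for you, and propagates it to every open plot automatically.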
There is a large state-to-state variation in sexual assault rates in the dataset -- South Dakota, for example, shifted from having one of the lowest rates in 1960 to one of the highest rates today. I'm not sure what drove that trend (but it's very troubling).

```python
embed('ChrisBeaumont', 39)
```

```python
embed('ChrisBeaumont', 40)
```

Glue is designed to quickly build up intuition about multidimensional data, so that you can spend more time following up on interesting questions. Glue makes it very easy to identify and isolate subtle and/or irregular features in datasets, by selecting and coloring subregions of plots. However, these features are just clues about the underlying story the data are telling. More precise follow-up analysis is always needed to quantify trends and assemble scientific hypotheses about the data. Glue is not designed to perform this follow-up analysis. In fact, my opinion is that graphical interfaces are often the wrong approach here -- programming languages offer more precision for expressing specific computations, and are better suited to this task. For example, with Pandas we can obtain a precise measurement of the change in, say, the murder rate for each state over the past 50 years:

```python
murder_change = (states.sort_values('Year')
                       .groupby('State').Murder
                       .agg(first='first', last='last'))
murder_change['change'] = murder_change['last'] - murder_change['first']
murder_change = murder_change.sort_values('change', ascending=False)
print('Largest Increases in Murder Rate (change per 100,000)')
murder_change.head(10)
```

    Largest Increases in Murder Rate (change per 100,000)

Glue isn't a replacement for writing code -- it's a tool that quickly gives you clues about what questions are interesting, and worth writing code for. All the plots in this document were created with Glue, and then exported to plotly.
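To see what the first/last aggregation above actually produces, here is the same recipe on a two-state toy frame (invented numbers, same column names):

```python
import pandas as pd

toy = pd.DataFrame({
    'State':  ['A', 'A', 'B', 'B'],
    'Year':   [1960, 2010, 1960, 2010],
    'Murder': [4.0, 6.5, 9.0, 3.0],   # made-up rates
})

# Earliest and latest rate per state, then the change between them.
change = (toy.sort_values('Year')
             .groupby('State').Murder
             .agg(first='first', last='last'))
change['change'] = change['last'] - change['first']
change = change.sort_values('change', ascending=False)
print(change)
```

State A's rate rose by 2.5 and state B's fell by 6.0, so A sorts to the top, just as the states with the largest increases do above.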
Uploading plots from Glue to plotly is a 2-click affair: File->Export->plotly. This will open a new browser window with your uploaded graph, which you can further tweak or share with the world. If you want to upload plots to your own plot.ly account, you can fill in your username and API key under File->Edit Settings.

```python
# This makes everything pretty
def css_styling():
    styles = open('custom.css', 'r').read()
    return HTML(styles)

css_styling()
```
http://nbviewer.jupyter.org/github/ChrisBeaumont/crime/blob/master/glue_plotly_fbi.ipynb
POINT OF INTEREST

THE IDEA

People generally express their opinions about a purchase in the form of reviews on e-commerce websites or in web blog posts. A simple solution for learning people's interests is to extract information about a user's interests from these reviews; then, whenever the user is in a new geographic location, he or she can be notified about nearby interesting places based on his or her review history. A common approach to this problem is to process the reviews for opinion mining/sentiment analysis. Opinion mining can be inefficient for extracting the points of interest of a user: with sentiment analysis we can only identify a review as positive or negative, while the possible interests of the user remain unidentified. A better approach is to use topic modeling algorithms such as Latent Dirichlet Allocation (LDA) and the newly introduced word vector models (Word2vec and GloVe) for the problem of POI generation.

Latent Dirichlet Allocation (LDA)

LDA is a topic modeling technique that describes how the documents in a database are created; this generative model also describes how the words in a document are generated. In simple terms, LDA works like K-Means clustering, where the points in a cluster are words. Each cluster constitutes a topic, and the K clusters are essentially the K unique topics, or POIs. For LDA, I will use the Gensim package in Python. Prepare the dataset as follows:

```python
data = ["When I moved to Charlotte, Eastland Mall was not a bad place. Perhaps not great, but really not all that bad.",
        "I had heard really good things about Bubba's",
        "While I like the Melting Pot, I just can't help but be amazed at the place.",
        "The ambiance of The Capital Grille is excellent."]
```

I have taken these reviews from the Yelp dataset, which consists of reviews related to food. As you can see, each review contains a lot of unnecessary words like I, had, is, etc. These extraneous words contain no information regarding POIs.
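Before any topic modeling, the raw reviews have to be tokenized and cleaned. The post keeps only nouns (which needs a POS tagger); as a simple stand-in for that step, here is a minimal lowercase/punctuation/stopword pipeline in plain Python (the stopword list is a tiny illustrative sample, not a real one):

```python
import string

STOPWORDS = {'i', 'the', 'a', 'of', 'is', 'was', 'not', 'had', 'about',
             'really', 'at', 'but', 'to', 'all', 'that'}  # tiny sample list

def tokenize(review):
    # Lowercase, strip punctuation, split on whitespace, drop stopwords.
    table = str.maketrans('', '', string.punctuation)
    words = review.lower().translate(table).split()
    return [w for w in words if w not in STOPWORDS]

data = [tokenize("The ambiance of The Capital Grille is excellent."),
        tokenize("I had heard really good things about Bubba's")]
print(data)
```

After this step each review is a list of content-bearing tokens, which is the shape the Gensim code below expects.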
For this purpose we extract all the nouns from the reviews and train the models on the dataset with this modification. The next step is to construct a document-term matrix, which will be used to create a bag-of-words model. This can be done with the following lines:

```python
from gensim import corpora, models

dictionary = corpora.Dictionary(data)   # data is now a list of token lists
corpus = [dictionary.doc2bow(lines) for lines in data]
```

Finally, to train the LDA model, use the following command:

```python
ldamodel = models.ldamodel.LdaModel(corpus, num_topics=3, id2word=dictionary, passes=20)
```

Here, num_topics is the number of POIs and passes is the number of times we want to iterate through the dataset while training the LDA model. The results of the LDA can be seen with:

```python
print(ldamodel.print_topics(num_topics=2, num_words=6))
```

    0.032*chicken + 0.031*place + 0.027*love + 0.018*taco + 0.017*meat + 0.015*salad
    0.043*pizza + 0.039*place + 0.032*burger + 0.029*location + 0.024*order + 0.021*time

With the help of LDA we can obtain the clusters, but since it is an unsupervised technique, the label corresponding to each cluster (the POI) has to be assigned manually. Thus, for the above results we can assign non-veg as the first POI and fast food as the second POI. Similarly, we can generate as many POIs as we like, as long as LDA is able to generate well-defined clusters. The results can also be illustrated with the help of word clouds, which give weight to the terms that form a particular topic: a larger weight is shown by a larger word size.

PROBLEM WITH LDA

With the help of LDA we can generate POIs as topics, but these topics or clusters are not disparate from each other. By this I mean that some words are present in both of the clusters, causing ambiguity. Words like place, service, salad, etc. are present in both topics with noticeable weight. For now the number of topics is taken to be 2, and we can see the problem even with such a small value. Let's try the word vector technique for POI generation.
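One last note on the bag-of-words step before switching techniques: there is no magic in doc2bow. It just maps each word to an integer id and counts occurrences per document. A pure-Python equivalent (no gensim required), on toy documents:

```python
from collections import Counter

docs = [['pizza', 'place', 'pizza'], ['burger', 'place']]

# Assign each word an integer id, as corpora.Dictionary does.
vocab = {}
for doc in docs:
    for word in doc:
        vocab.setdefault(word, len(vocab))

# doc2bow: each document becomes a list of (word_id, count) pairs.
corpus = [sorted(Counter(vocab[w] for w in doc).items()) for doc in docs]
print(corpus)
```

The first document becomes [(0, 2), (1, 1)]: word 0 ('pizza') appears twice and word 1 ('place') once.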
WORD VECTOR

Word2vec models represent the distribution of word vectors in a low-dimensional space, as given in Efficient Estimation of Word Representations in Vector Space. Word2vec is a shallow neural network based model that tries to learn lower-dimensional vector representations from the given vocabulary. In simple terms, word vector models represent each word with a vector, and we train a neural network on a dataset of sentences (here, reviews) so that it can learn the contextual relationships among words. Words like pizza, burger and crust are contextually closer than beer, wine and brews. With the help of a word vector model it is possible to update the vectors of all these six words in such a way that pizza, burger and crust lie in one cluster and the rest lie in the other. The first cluster can be given fast food as its POI, and the second cluster clearly lies in the drinks category. Prepare the dataset as:

```python
data = [['I', 'had', 'heard', 'really', 'good', 'things', 'about', "Bubba's"],
        ['While', 'I', 'like', 'the', 'Melting', 'Pot,', 'I', 'just', 'cant', 'help', 'but', 'be', 'amazed', 'at', 'the', 'place.'],
        ['The', 'ambiance', 'of', 'The', 'Capital', 'Grille', 'is', 'excellent']]
```

The Word2vec model is also present in the Gensim module. Use the following command for training a Word2vec model:

```python
model = gensim.models.Word2Vec(data, min_count=1, size=200, workers=6)
```

The size parameter is the dimension of the word vectors, and workers is the number of worker threads to use; for the extra workers to help, the machine should have multiple cores or hyperthreading. You can check the similarity between two words using the similarity method provided in the Gensim module. Internally, the similarity method computes the cosine similarity between two word vectors.

```python
model.similarity('pizza', 'burger')   # 0.73723527
model.similarity('wine', 'beers')     # 0.75213573
model.similarity('pizza', 'wine')     # 0.03872634
```

As you can see, the word vectors of pizza and burger, and of wine and beers, are closer than those of pizza and wine.
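What similarity computes can be spelled out in a few lines: the cosine of the angle between the two word vectors. Here it is in plain Python, on invented 3-d vectors (real Word2vec vectors would be 200-d, and these numbers are purely illustrative):

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

pizza  = [0.9, 0.1, 0.0]   # made-up vectors, for illustration only
burger = [0.8, 0.2, 0.0]
wine   = [0.0, 0.1, 0.9]

print(cosine(pizza, burger))  # high: contextually similar
print(cosine(pizza, wine))    # near zero: unrelated
```

Vectors pointing in nearly the same direction score close to 1, orthogonal vectors score close to 0, which is exactly the pattern in the model.similarity output above.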
Thus Word2vec is able to capture the underlying context among words. Since each word is now represented by a vector, it is possible to visualize the word vectors with the help of scatter plots. For this we need to reduce the dimension of all the vectors with the help of PCA. I will set the number of dimensions to 3 so that it is possible to plot the word vectors. The 3D plot shows the cluster formation by the vectors generated by Word2vec. We can manually label these categories as Snacks, Drinks and Services. The three categories shown in the figure above represent the three POIs from the Yelp restaurant dataset. The results for the Amazon review dataset in the movie category are given in the next figure, where the manually labelled categories are Sci-fi, Drama and Romance. Similarly, the results for the book dataset are given in the following figure, where the categories are Drama, Sci-fi, and History and Mythology.

CONCLUSION

In this post we have seen the problem of Point of Interest generation based on user reviews. We compared the results of LDA with Word2vec and saw that the results of the word vector models are better in quality.

WHAT'S NEXT?

Here, I have not shown the problems related to word vector models. While experimenting with these models I came across an issue related to the update of the vectors. I will include the details related to that problem in a future post.
https://deeplearn.school.blog/2017/01/02/point-of-interest/
Name

API — Functions

Synopsis

```c
#include <cyg/io/virtio.h>

cyg_uint32 cyg_vio_avail(cyg_vio_driver *driver, int queue);
cyg_bool   cyg_vio_queue_ready(cyg_vio_driver *driver, int queue);
cyg_bool   cyg_vio_submit(cyg_vio_driver *driver, cyg_vio_tfr *tfr);
void       cyg_vio_poll(cyg_vio_driver *driver);
void       cyg_vio_driver_init(cyg_vio_driver *driver);
void       hal_vio_init(void);
```

Description

This API is intended to be used by client drivers to access the VirtIO device and provide the functionality expected of a driver of the given class.

Functions cyg_vio_avail() and cyg_vio_queue_ready() test the state of the queue. The first returns the number of buffer descriptors available for transfer; it can be used to check that there is enough resource to start a transfer before submitting it. The second is used to check that a queue has completed initialization.

The function cyg_vio_submit() submits a transfer to the VirtIO device. All fields in the transfer should be initialized before submission. If the transfer is successfully queued, this function returns CYG_VIO_DONE. If the submission fails, a non-zero error code is returned.

The function cyg_vio_poll() polls a given driver for completed transfers. If a transfer is complete, then its callback function is called. Calling this poll routine is the only way in which transfer completions are recognized. It is the responsibility of the client driver to arrange to call it. This may be done from a thread context if that exists, or may be done from a DSR if the device interrupt has been enabled. When a callback is called, the only fields in the transfer that will have been updated are the head and actual fields; so the transfer may be immediately resubmitted to the driver from the callback with no changes if the same transfer is to be repeated.

The function cyg_vio_driver_init() is called to initialize the common parts of a VirtIO driver.
This function will perform startup negotiation with the hypervisor device and initialize all valid queues. On return, the driver will be ready for submission of transfers. If this function is called for a driver that has already been initialized, it will return immediately, so it may be called from multiple locations safely. The function hal_vio_init() is not supplied by the VirtIO package but is expected to be defined by the variant or platform HAL. The VirtIO package calls this function from a constructor during initialization. This function is responsible for detecting any VirtIO devices, installing base address and interrupt vector values and calling cyg_vio_driver_init() for each. Detection may involve searching a memory area for valid VirtIO devices or scanning a PCI bus. If the VirtIO devices are at known fixed addresses then this function should just call cyg_vio_driver_init() for each device to be initialized.
https://doc.ecoscentric.com/ref/virtio-api.html
Python Machine Learning: A Step-by-Step Guide to Scikit-Learn and TensorFlow (Includes a Python Programming Crash Course)

Copyright © 2019. All rights reserved. No part of this book may be reproduced or used in any manner (electronic or mechanical, including photocopying, recording, physical and online storage) without written permission of the publisher or the author, except for the use of quotations in a book review. All efforts have been made to provide complete and useful information, but no implicit or explicit warranties are provided about the information included. The content will not replace the consultation of a licensed professional.

Introduction

Congratulations on buying Python Machine Learning, and thank you for doing so. The following chapters will discuss the things that you need to know to take machine learning and use it in your business or on your next project. And when you combine the different ideas that come with machine learning, and some of its algorithms, with the Python coding language and the libraries that come with it, you will find that it is possible to get some complicated tasks done with ease.
This guidebook is going to start out with a good introduction to machine learning to help us understand what it is about, some of the things that you can do with machine learning as a beginner, the reasons to use or even learn about machine learning, and how machine learning and artificial intelligence are the same and how they are different. This gives us a general introduction to what this process is about and how we will be able to use it as we progress through this guidebook. From there, we are going to explore a bit about the Python language with a crash course in coding in this language. For those who have never had the chance to learn Python, or who want to jump right into machine learning without studying a whole new language along the way, this part is the one for you. We are going to look at topics like what Python is and how to download it on different operating systems, how to write conditional statements, how to raise and manage your own exceptions, OOP and functions, and some of the other basic parts of a Python code. In the third section of this guidebook, we are going to take a look at our very first Python library and what you can do with it when it comes to machine learning. We will explore a bit about Scikit-Learn and what this library can do, before diving into some of the supervised and unsupervised machine learning that you are able to do with this kind of library. To finish out this guidebook, we are going to explore our second Python library, TensorFlow. There are some neat things that you can do with machine learning when it comes to TensorFlow that you are not able to do with any other coding library, so it is definitely a section you will want to check out. Inside, we are going to explore the TensorFlow library, how to work with high-level and low-level APIs, and how to handle estimators.
There is so much that you can do when we talk about machine learning, and the Python coding language makes it that much easier for everyone to get started. When you are ready to learn more about Python machine learning and how to get started with some of your own projects today, make sure to check out this guidebook to help you out!

Part 1: An Introduction to Machine Learning

Chapter 1: What is Machine Learning

The first topic that we need to take a look at in this guidebook is machine learning. This is basically a process where you are trying to teach a computer or another machine how to use its own experiences with a particular user, and some of the things it has seen in the past, to perform even better in the future. There are a lot of examples of how this can work, such as voice recognition devices and search engines. As we go through this guidebook, you will find that there are a lot of different methods and algorithms that you can use with machine learning in order to get the machine to learn, but the one you choose really depends on the kind of results you want to get and the project that you decide to work with. Machine learning is going to be a method of data analysis that is able to automate the process of building analytical models. It is also a branch of artificial intelligence that is based on the idea that a system is able to learn from the data it is presented, identify the patterns that are there, and even make its own decisions without a lot of intervention from humans in the process. Because of all the new computing technologies that are out there, machine learning as we know it today is not really the same as the machine learning of the past. It was born out of pattern recognition and the idea that a computer is able to learn to perform a task without being programmed for that task specifically. Researchers who were interested in what we are able to do with artificial intelligence also wanted to see if their machines were able to learn from the data they were fed.
Researchers who were interested in some of the things that we are able to do with artificial intelligence wanted to also see if their machines were able to learn from data that it was fed. The iterative aspect that comes with this machine learning should be seen as an important programming tool because as we expose any of the models we create from this learning to new data, the model is then able to adapt on their own and independently. The machine is going to be able to learn what has happened to it in the past, and the examples it was given, in order to make accurate and reliable predictions in the future. In recent years, there has been a resurgence in the amount of interest that is out there with machine learning thanks to a few different factors. In particular, some things like Bayesian analysis and data mining are growing in popularity as well, and in the process, machine learning is going to be used more now than ever before. before. All of these things mean that it is now easier and faster in order to automatically produce models with machine learning. And these models are now able to analyze bigger and more complex data, while also delivering results faster and results that are more accurate, even when this is done on a very large scale. And because all of this is able to come together and build models that are more precise, and organization is going to set itself up for identifying profitable opportunities better than before, while also avoiding more of those unknown risks ahead of time. This all comes together to help a company to become more competitive in the market. There are a few things that need to come together in order to make sure that the system you use in machine learning is actually good. Some of these will include: 1. Ensemble modeling 2. Scalability 3. Iterative and automation processes 4. Algorithms, a good combination of basic and advanced ones 5. Data preparation capabilities. 
The neat thing about working with machine learning is that almost every industry is able to use it. And it is still relatively new when it comes to the world of technology, so even the amazing things that have been done with it so far are just the beginning, and it is believed that this kind of technology is going to be able to do even more. Machine learning is likely to grow quite a bit as time goes on. Right now, a lot of companies are using it in order to figure out what the data they are receiving is telling them, to figure out how they are able to make better business decisions over time rather than having to make the decisions on their own, and to find some of the patterns that are hidden in the data that a human would not be able to go through. But this is just the start of what we are able to do when it comes to machine learning. There are a ton of other applications, and what we are able to do with this right now is just the beginning. As more people and developers start to work with machine learning and add in some of the Python language with it, it is likely that more and more applications are going to become available as well. Most of the industries that are out there that are already working with large amounts of data are going to be able to recognize the kind of value that they would get from using the technology that comes with machine learning. By being able to actually get through this data and glean some good insights from it, and being able to do this close to real time, the company is able to work in a more efficient manner and gain a big advantage over others in the same industry. And this is the beauty of working with machine learning. We are able to do things now, with the help of machine learning, that may have seemed impossible in the past.
Businesses that are handling more data than ever before are finding the value of working with machine learning to help them get their work done. They can get through this information faster than would be possible with a person looking through it on their own, and it can give them that competitive edge over others. There are a lot of different companies that will be able to benefit from a program that can run on machine learning. Some of the different industries that are already using this kind of technology include financial services, government, health care, retail, oil and gas, transportation, and more. Machine learning is a form of artificial intelligence that is going to allow a computer to learn, similar to what we see with the human mind as well. With a minimal amount of supervision from a person, the machine will be able to automate a lot of tasks, find the information that you want, and get to insights and predictions that you may not be able to find with other methods on your own. And this guidebook is going to spend some time looking at how you are able to do this type of machine learning with the help of the Python coding language, so you can start some of your own projects in no time.

Chapter 2: The Different Types of Machine Learning

Now that we have had a chance to learn a bit about machine learning and how it can work well for your needs, it is time to take a step back and focus on some of the different ways that you can work with machine learning. There is more than just one algorithm for machine learning out there, and you are able to choose based on what kind of project you choose to work with. But these algorithms can be sorted into three main categories to help us understand how they work a bit better, and when we are likely to use them for our needs. The three main types of machine learning that you are able to use include supervised machine learning, unsupervised machine learning, and reinforcement machine learning.
Each of these will work in a slightly different way in order to make sure that the computer or other machine knows how to learn and collect the information that is needed. So, let's take some time to explore the different types of machine learning and how they work!

Supervised Machine Learning

The first type of machine learning that we are going to explore is supervised machine learning. This type of learning happens when you choose an algorithm that learns the correct response based on the input that the user gives to it. There are several ways that supervised machine learning can do this. It can look at examples and other targeted responses that you provide to the computer. You could include values or strings of labels to help the program learn the right way to behave. This is a simple process to work with, and an example to look at is a teacher teaching their students a new topic: the teacher will show the class examples of the situation. The students then memorize these examples, because the examples provide general rules about the topic. Then, when they see these examples, or things that are similar, they know how to respond; and if an example is shown that isn't similar to what the class was shown, they know how to respond to that as well. As you go through some of the work that comes with machine learning, you will run into a variety of algorithms that fit under the umbrella of supervised machine learning. Some of the most common types that you will use, though, are known as random forests, decision trees, regression algorithms, and KNN.
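To make the supervised idea concrete before moving on: KNN, the last algorithm in that list, can be written in a few lines of plain Python. It labels a new point by majority vote among its k nearest labeled neighbors. The 2-d points below are invented purely for illustration:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train is a list of ((x, y), label) pairs; vote among the k nearest.
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 0), 'a'),
         ((5, 5), 'b'), ((5, 6), 'b'), ((6, 5), 'b')]

print(knn_predict(train, (0.5, 0.5)))  # lands in the 'a' cluster
print(knn_predict(train, (5.5, 5.5)))  # lands in the 'b' cluster
```

The labeled examples play the role of the teacher's worked examples: new inputs are answered by comparing them to what was seen before.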
Supervised machine learning works in some cases, but it is not going to work as well for some of the other problems that come into play. This is where you will be able to look at unsupervised machine learning and see where this is able to fill in some of the blanks. Unsupervised learning is the type that will happen when your algorithm is able to learn either from mistakes or examples without having an associated response that goes with it. What this means is that with these algorithms, they will be in charge of figuring out and analyzing the data patterns based on the input that you give it. Now, there will also be a few different types of algorithms that can work well with unsupervised machine learning. Whichever algorithm you choose to go with, it is able to take that data and restructure it so that all the data will fall into classes. This makes it much easier for you to look over that information later. Unsupervised machine learning is often the one that you will use because it can set up the computer to do most of the work without requiring a human being there and writing out all the instructions for the computer. A good example of this is if your company wants to read through a ton of data in order to make predictions about that information. It can also be used in most search engines to give accurate results. Unsupervised machine learning is used in a lot of the different programs and projects that you want to use that come with machine learning because of all the power and more that is behind it. Some of the techniques that you can enjoy with unsupervised machine learning, and that you are most likely to use with this kind of learning include neural networks, clustering algorithms, and the Markov algorithm. Reinforcement Machine Learning And the third type of machine learning that we need to focus on is known as reinforcement machine learning. 
This one is going to work in a manner that seems similar to an unsupervised machine, but instead, it focuses on the idea of true and false to help it learn how to behave. This one works a little differently than the other two options, but that is going to make it perfect for some of the projects that you want to explore. So, whenever you decide to work with reinforcement machine learning, you are working with an option that is like trial and error. Think about when you are working with a younger child. When they do some action that you don't approve of, you will start by telling them to stop, or you may put them in time out or take some other action to let them know that what they did is not fine. But if that same child does something that you see as good, you will praise them and give them a ton of positive reinforcement. Through these steps, the child is learning what is acceptable behavior and what isn't. To keep it simple, this is what reinforcement machine learning is going to be like. It works on the idea of trial and error, and it requires that the application use an algorithm that helps it make decisions. It is a good one to go with any time that you are working with an algorithm that should make these decisions without mistakes and with a good outcome. Of course, it is going to take some time for your program to learn what it should do. But you can add this to the specific code that you are writing so that your computer program learns how you want it to behave. The different algorithms that you will use with reinforcement learning are not going to be as prevalent as with the other types of learning, and there are not as many of them as we talked about above. But you can still use a few different algorithms that fit under the umbrella of reinforcement learning, including SARSA and Q-learning.
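The trial-and-error loop behind Q-learning can be sketched with a one-state toy problem: two actions, one of which pays a reward. Repeated updates (the "praise" signal) teach the agent to prefer the rewarding action. This is a bare-bones illustration under invented assumptions, not a full Q-learning implementation (there is no next state or discount factor here):

```python
import random

random.seed(0)

q = {'left': 0.0, 'right': 0.0}   # value estimate for each action
alpha = 0.5                        # learning rate

def reward(action):
    # Hypothetical environment: 'right' pays off, 'left' does not.
    return 1.0 if action == 'right' else 0.0

for _ in range(100):
    action = random.choice(['left', 'right'])          # explore at random
    q[action] += alpha * (reward(action) - q[action])  # move estimate toward the reward

print(max(q, key=q.get))  # the learned best action
```

After enough trials, the value estimate for 'right' climbs toward 1 while 'left' stays at 0, so the agent "knows" which action earns the praise.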
These are the three main types of machine learning that you are able to work with. The idea behind them is that you will be able to do a lot of different projects based on what your end result should be. Learning the different algorithms and working with them, as we will as we progress through this guidebook, can make it easier to get some of the results that you want.

Chapter 3: How Does Machine Learning Compare to AI

One thing that we need to spend some time working on and understanding before we move on is the difference between artificial intelligence and machine learning. Machine learning is going to do a lot of different tasks when we look at the field of data science, and it also fits into the category of artificial intelligence at the same time. But we have to understand that data science is a pretty broad term, and there are going to be many concepts that fit into it. One of these concepts that fits under the umbrella of data science is machine learning, but we will also see other terms that include big data, data mining, and artificial intelligence. Data science is a newer field that is growing more as people find more uses for computers and use these more often. Another thing that you can focus on when you bring up data science is the field of statistics, which is often put together with machine learning. You can work with a focus on classical statistics, even at the higher levels, so that the data set will always stay consistent throughout. Of course, the different methods that you use to make this happen will depend on the type of data that is put in and how complex the information that you are using gets as well. This brings up the question of the differences that show up between machine learning and artificial intelligence and why they are not the same thing.
There are a lot of similarities that come with these two options, but the major differences are what sets them apart, and any programmer who wants to work with machine learning has to understand some of the differences that show up. Let’s take some time here to explore the different parts of artificial intelligence and machine learning so we can see how these are the same and how they are different.

What is artificial intelligence?

The first thing we are going to take a look at is artificial intelligence, or AI. This is a term that was first brought about by a computer scientist named John McCarthy in the 1950s. AI was first described as a method that you would use for manufactured devices to learn how to copy the capabilities of humans in regard to mental tasks. However, the term has changed a bit in modern times, but you will find that the basic idea is the same. When you implement AI, you are enabling machines, such as computers, to operate and think just like the human brain can. This is a benefit that means that these AI devices are going to be more efficient at completing some tasks than the human brain. At first glance, this may seem like AI is the same as machine learning, but they are not exactly the same. Some people who don’t understand how these two terms work can think that they are the same, but the way that you use them in programming is going to make a big difference.

How is machine learning different?

Now that we have an idea of what artificial intelligence is all about, it is time to take a look at machine learning and how this is the same as artificial intelligence, and how this is different. When we look at machine learning, we are going to see that this is actually a bit newer than a few of the other options that come with data science, as it is only about 20 years old.
Even though it has been around for a few decades so far, it has been in the past few years that our technology and the machines that we have are finally able to catch up to this, and machine learning is being used more. Machine learning is unique because it is a part of data science that is able to focus just on having the program learn from the input, as well as the data that the user gives to it. This is useful because the algorithm will be able to take that information and make some good predictions about the future. Let’s look at an example of using a search engine. For this to work, you would just need to put a term into a search query, and then the search engine would be able to look through the information that is there to see what matches up with that and return some results. The first few times that you do these search queries, it is likely that the results will have something of interest, but you may have to go down the page a bit in order to find the information that you want. But as you keep doing this, the computer will take that information and learn from it in order to provide you with choices that are better in the future. The first times, you may click on the sixth result, but over time, you may click on the first or second result because the computer has learned what you find valuable. With traditional programming, this is not something that your computer can do on its own. Each person is going to do searches differently, and there are millions of pages to sort through. Plus, each person who is doing their searches online will have their own preferences for what they want to show up. Conventional programming is going to run into issues when you try to do this kind of task because there are just too many variables. Machine learning has the capabilities to make it happen, though. Of course, this is just one example of how you are able to use machine learning.
In fact, machine learning can help you do some of these complex problems that you want the computer to solve. Sometimes, you can solve these issues with the human brain, but you will often find that machine learning is more efficient and faster than what the human brain can do. Of course, it is possible to have someone manually go through and do this for you as well, but you can imagine that this would take too much time and be an enormous undertaking. There is too much information, they may have no idea where to even get started when it comes to sorting through it, the information can confuse them, and by the time they get through it all, too much time has passed and the information, as well as the predictions that come out of it, are no longer relevant to the company at all. Machine learning changes the game because it can keep up. The algorithms that you are able to use with it are able to handle all of the work while getting the results back that you need, in almost real-time. This is one of the big reasons that businesses find that it is one of the best options to go with to help them make good and sound decisions and to help them predict the future, and it is a welcome addition to their business model.

Part 2: Your Python Crash Course

Chapter 4: What is Python and How to Set It Up On Your Computer?

If you are looking to learn a new coding language, then look no further than the Python coding language. This is considered one of the most popular options for coding out there, mainly because you are able to use it on almost any platform, and it is pretty easy for a beginner to learn how to use while adding in a ton of power to do the different programming options that you need. The nice thing about Python is that even with all of the power that comes with it, you will see that it has been designed with beginners in mind.
So, if you have not been able to work with any kind of coding in the past, you will still be able to learn how to work with Python and even do some of the machine learning algorithms that we will talk about later on. Python is also known as open-source, which means you can download the coding language, along with all of the other parts that are needed, without having to pay for them. Add in that there is still a dedicated group of developers who update and work to improve the program, and you have one of the best programming languages out there to work with. One thing that we have to remember when it comes to Python is that even though it is free and you are able to get started with it easily, there are a few extensions and libraries that you can add to this language that are going to cost a bit. These still work well with the Python language, but because they are developed by a third-party developer, they will cost a bit. But you get to choose whether you want to use those or not, and it is perfectly fine to just work with the basic and open-sourced parts of Python. As a beginner, you are going to find that working with the large library that comes with Python can make your life so much easier. You are going to enjoy that it is easy to start with and that there are a lot of functions and more found in the library. This helps you to do more with your codes, helps to keep things organized, and more. The basic library will cover a lot of the things that you need, but for some of the technical things that we will do with machine learning as we move through this guidebook, you will need to download a few other libraries to go here. The two main ones we will look at include TensorFlow and Scikit-Learn, but there are other options that work well for helping you increase the capabilities of Python. This coding language is also all about the classes and the objects.
We will talk about this a bit more as we progress through this book, but this really makes coding easier for you. It ensures that when you call up a part of the code, as long as you name it and call it up in the right way, it is going to show up the way that you want. This may have been a struggle for some beginners in other coding languages, but this problem is solved with the help of the Python code. Python can work with other languages. Not only are you able to turn on the Python language and use all of the capabilities that come with it on your computer, but you can also combine it together with some other coding languages to really enhance some of the capabilities that you see on there. Python is able to do a lot of different things, but there are a few points where it may fall a bit short, or that other coding languages are going to do better. Adding it together with one of these other coding languages can ensure that your program is written the way that you want. Python is already being used in a lot of different programs. In fact, some of your favorite programs may already be using Python to help them run. You will find that Python is in many websites and games and other common programs, and you may not have even noticed to begin with. As we go through this guidebook, you may be pleasantly surprised at how great these programs work, even though a lot of the codes are simple to read and write. Before we go any further with Python, we need to take some time to learn how to set up the Python program on the different operating systems that you are going to use. Python is going to work with any operating system that you want to work with, including Windows, Mac OS X, and Linux, so you will be able to download it and get it to work based on whichever is your preference. To start, you are able to download the Python program from a few different sources.
But the method that is the easiest is going to be . This one is set up to have all of the files that are already needed to get the code to work right away after the download and will make sure that you get all of the files, the interpreter, the IDLE, and the compiler that is needed. You can also choose to download from another location if that works the best for you, but you should check to see which files are included and if you need to go through and download some more files. So, let’s go through and look at some of the different steps that you need to take in order to download Python onto the various operating systems that you want to use. First, we are going to look at how to get Python set up on a Windows operating system. This is a popular operating system that programmers are going to work on, but since Windows has its own coding language available, you will have to take the manual method in order to install Python on the system. The good news is that this really only takes a few steps, and it is pretty easy to work with. It won’t take long before you are able to get the Python program on your computer, and you can see it working in no time. Once the Python program and all of the files that come with it are set up on a Windows operating system, there won’t be any problems that you need to worry about. It is not going to interfere with anything on the system, and the Windows coding language isn’t going to cause problems either. Once you are ready to install the Python language so it is ready to work on your computer, and with the Windows operating system, you will first need to make sure that the right environment and variables are in place to ensure that you are able to run the scripts for Python from the command prompt that is there. The other steps that are needed to get the Python language, and all of its files, set up on a Windows operating system include: 1.
The first step that we need to do here is to head on over to the download page for Python and grab the installer that is listed under the Windows operating system. You are able to pick out the version of Python that is the best for you, but many programmers choose to download the version that is newest at the time. You also need to decide if you want the 32-bit or 64-bit version of Python based on the operating system type that you are working with.

2. Once you have been able to grab the installer from Windows for Python, it is time for you to click on it so that you can do the Run as Administrator. As you go through and do this, the system is going to provide you with two options to pick out from, and you can pick the one that works for you. For this, click on “Customize Installation.”

3. The next screen that comes up is going to have a lot of boxes to check on. You need to make sure that you select all of the ones that fall under Optional Features and then click to go to the next page.

4. While you are still here, you can also check out the location where you want to install this Python. Once that folder is picked out, you can click to install. This is going to take a bit of time, so have some patience with it. Once that install is all done, you can then close out of this part.

5. The next thing that we need to do is set up the PATH variable that works with this system so that you have all of the directories that will include packages and other components that are necessary to use later on. The way that you get all of this set up is going to use the steps below:

a. Open up your Control Panel. If you are not certain where this is, click on your taskbar and type in “Control Panel”. Click on the little icon that shows up when you do this.

b. When you get the Control Panel to show up, you can search for “Environment” and then click on Edit the System Environment Variables.
When this is done, you can then click on the button labeled “Environment Variables.”

c. At this point, you can go to the section that is listed for User Variables. Here you can either decide to create a new PATH variable, or you can edit the PATH variable that is already in place.

d. If there isn’t a variable for PATH on the system as you are looking, then it is time for you to create your own. To do this, click on New. Give it a name, one that works for the PATH variable you are choosing, and then place it into the chosen directory. Click to close out of the Control Panel at this time and then go to the next step.

6. When you get to this point, you can open up that Command Prompt again. You can do this by clicking on your Start Menu, then clicking on Windows System, and finally on Command Prompt. Type in the word “python”. This will be enough to load up the interpreter of Python for you.

Once the steps above are done, you can then go back to your system and open up the Python language. You will then be able to use it in any manner and work with some of the coding that we will do in this guidebook. It only takes a few minutes to go through the steps above, and you will find that this is one of the best ways to get it set up. The next thing we need to look at is how to download the Python files on a Linux operating system. This one is also going to work well with Python, and since there are a lot of people who are using this operating system, it is a good way to learn how to code in Python as well. Now, the first step that we need to take here is to see which version of Python 3 is available on our system. To do this, you can just open up a command prompt on Linux and then go with the code below:

$ python3 --version

If you are using a version of Ubuntu that is a bit newer, then it is a simple process to install Python 3.6.
You just need to use the commands below:

$ sudo apt-get update
$ sudo apt-get install python3.6

If you are relying on an older version of Ubuntu or another version, then you may want to work with the deadsnakes PPA, or another tool, to help you download the Python 3.6 version. The code that you need to do this includes:

$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:deadsnakes/ppa
$ sudo apt-get update
$ sudo apt-get install python3.6

The nice thing about working with this one is that if you do choose to work with the variety of distributions that come with Linux, you can also download the Python 3 program on it to help you out. You can go through these same steps no matter which distribution you choose to work with. You can also stick with the steps above to install any version of Python onto your system that you would like, so if you want to go with an older version, such as Python 2, that is easy to work with as well. Now, we can move on here and look at how to get this language onto an Apple computer and on Mac OS X. This system is set up to work well with Python, and it is going to already have Python 2 programmed on it. You can go through and double-check to see if this version is present on your system or not. You will find that Python 2 is going to work just fine for a lot of the programming that you want to do with Python, so if you want to make things easier, you can go through and just work with this. However, it is common that a lot of programmers want to work with Python 3, or one of the newer versions that come with Python, and they want to update this. This is pretty easy to work with. The first step to take here is to uninstall the Python 2 version so that you won’t end up with some problems with two versions of this coding language on your computer. Then you can go to in order to pick out the exact version of Python that you want to be able to add to the computer.
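However you install it, you can also double-check which version Python reports from inside the interpreter itself. This short sketch uses only the standard library:

```python
import sys

# sys.version_info holds the interpreter's version as a comparable tuple.
print(sys.version_info.major)       # the major version number, e.g. 3
print(sys.version_info >= (3, 6))   # True when running on Python 3.6 or newer
```

This is a handy sanity check right after an install, especially on a Mac where an older Python 2 may still be present alongside the new one.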
Being able to run both the shell and the IDLE with the Python language is going to depend on which version of the program you decide to work with, as well as what preferences are there when you write out the code. The two biggest commands that you are going to use the most often to help make sure that the shell and IDLE applications start up when you want will vary based on the version you use, and they are:

For Python 2.X, just type in “idle”
For Python 3.X, just type in “idle3”

As we talked about a bit before, when you take the time to download and install this Python 3 on the Mac operating system, you will need to install the IDLE, so make sure that is there, and you can install it as a standard application inside of your Applications folder, of course. To help you start up this program using your desktop, you just need to go into the folder, double click on the application for the IDLE, and then you can wait for it to load. And that is as simple as it is! If you are able to follow these steps, you will be able to get the Python code on your system, and it is going to be ready to work for you as you write some of the codes that we will discuss in this section, as well as in some of the other sections as we move into machine learning as well.

Chapter 5: Some of the Basic Parts of Your Code

With the download of the Python language done and taken care of, it is time to move on to some of the basic parts that come with writing code in Python, along with some of the different benefits that come with the different parts, so we have a better idea of how amazing working with Python can be for us. We will start out this section looking at some of the parts that you are most likely to see when you work on a Python code, which can make it easier to move on to some of the more complicated things that we do in the following chapters, especially when it comes to machine learning.
You will find that when it comes to using the Python code, there are a lot of different things that you are able to do. And the work that you decide to put into your code often is only limited by the kind of program that you would like to write out. Making sure that you have some of the basics down, and gaining a good understanding of how this all works, can help out later when you work on machine learning and some of the more complicated codes that you wish to do later on.

The keywords

The first part of the Python code that we need to pay attention to is the keywords. Any coding language that you choose to go with is going to have these keywords, and they are seen as important and reserved because they help tell the compiler the right actions to take to complete the code. These are special because they are a command for the compiler to follow. If you place them in the wrong part of the code or use them in the wrong way, then the code is not going to provide you with the results that you would like. As you are taking a look at some of the keywords that come with Python, it is important that you learn how to use them in the proper manner. You do not want to make the mistake of adding them to the wrong part of the code. Doing this can lead to a lot of error messages that you have to then try to sort through. As you start working with the code a bit more, you will start to see what we mean by keywords and where you can use them to get the compiler to act in the manner that you want.

How to name an identifier

The next topic that is important to look at when working in the Python language is how to name your identifiers. If you want these to work well and the program to behave, then you need to make sure that the proper naming method is used each time. Using the naming process the wrong way can end up with frustrations and a lot of rewriting of the codes.
So, with this in mind, let’s take a look at the proper steps that you have to take in order to get the identifiers named in the right way. There are actually quite a few different types of identifiers that you can find in the Python language, and they are going to come under different names. You may find them called things like classes, functions, entities, and variables. Any time that you name one of these identifiers, even when they are under different names, the rules are going to be the same, so you won’t have to change up how you do it each time. This can make life a bit easier. This brings us to the idea of naming the identifiers and learning which rules you have to follow to make this happen. First, you need to take some caution concerning the name that you give to the identifier. There are a ton of names that are available, and you can choose, for the most part, the name that you want. You get the choice of working with letters, both uppercase and lowercase, and any number. The underscore symbol and any combination of the previous will work as well. But there are a few restrictions to keep in mind as well when you start naming your identifier. First, you are not allowed to start the name of an identifier with a number, and the name should not have any spaces in it. Naming the identifier something like 5kids or 5 kids would get you an error, but naming it fivekids or five_kids would be just fine. And keep in mind that you should never use a keyword as the name of one of your identifiers, or the compiler is going to get confused. When you come up with the name that you want to give to that identifier, make sure that you remember what it is. It may follow all of the rules that you need, but if you are not able to remember the name when it is time to execute the code or pull out that identifier later on, then there can be some issues.
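Python can actually check these naming rules for you: strings have an isidentifier() method, and the standard-library keyword module lists the reserved words. A quick sketch using the invented example names from the text:

```python
import keyword

# Legal names: letters, digits, underscores -- but no leading digit, no spaces.
print("fivekids".isidentifier())    # True
print("five_kids".isidentifier())   # True
print("5kids".isidentifier())       # False: starts with a number
print("five kids".isidentifier())   # False: contains a space

# Keywords pass isidentifier(), so check for those separately.
print(keyword.iskeyword("class"))   # True: reserved, so don't use it as a name
```

Running keyword.kwlist in the shell will print every reserved word for the version of Python you have installed, which is an easy way to see which names to avoid.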
If you call it the wrong thing or you spell it differently, then there could be an error, or the compiler is going to get confused. With the rules above in mind, if you are able to pick out a name to go with your identifier, and you make sure that it actually fits in with the work that you are doing for that part of the code, then naming them will be easy, and you won’t run into any problems with it.

The Statements in Python

Another topic that we need to focus on for a moment is the statements that come in the Python language. Statements are simple because they are just some sentences that we are able to tell the compiler to add to the screen for others to read. They are simply a string of code that you are first able to write out, and then the compiler will take it and list it out on the screen based on what kind of code you have. When you are working with the various statements that are available in Python, as long as they are written out in full sentences and in the right spot of the code, the compiler will have no problem reading through them, and the message you would like will show up on the screen. You can choose to have the statements at any size that you would like, as long as it makes sense for the code that you are writing out.

Comments

We can’t finish going through the basics of a Python code without looking at what the comments are all about. You can definitely write out any code that you want without having any comments, but these are useful for helping us to understand how to work with different parts of the code. They can explain what is going on in a certain part of the code, and even leave a little message to someone who is looking it over, without interrupting the code at all.
These comments are going to be really helpful in many cases because you can add them in to make it easier for a programmer or someone else reading through the code to have a better idea of what is going on in that code. But it is not going to make any changes in how the code performs. The comments are going to keep things organized, such as naming a certain part of the code so that it works more efficiently, can help explain what is going on in one part of the code compared to another, and can ensure that everyone is on the same page, without causing the code to pause or have errors. Creating one of these comments can be pretty simple in the Python code. And you can add in as many of these as you would like. It is often recommended that you keep these to a minimum so that they don’t mess up the code or make it look too convoluted. In order to make up some of your own comments in Python, you just need to work with the # sign in front of the comment that you want to write out. The comment can be as long as you want, and you can add in as many as you would like, as long as the # sign is in front of it, so the compiler knows not to use that part.

Bringing out the variables

It is also important to spend some time looking at the different variables that are present in the code, and see how they work to add something important to the code as well. These are going to be more common in the code than a lot of beginners may think, and the main reason that we need to focus on these variables is that they can help to store up some of the different values that you try to place in the code. This is one of the best ways to make sure any line of code that you try to write is going to be easy to read, will stay organized, and will execute in any manner that you would like.
One of the things that you are going to like the best with these variables is that even with all of the work that they provide, they are still going to be easy to work with. All that you need to do to ensure that a value is assigned correctly to a variable is to put in the equal sign right in between the variable and the value. With that sign in place, the compiler will take on the rest of the work for you. You can choose to add in any kind of variable that you want in here; just double check that you have the equal sign in place first. Another option to work with is to assign more than one value to the same variable at a time. If you just make sure that there is an equal sign that is linking both of these values back to the variable, then the compiler is going to know exactly what it should do. There are a lot of examples of how this can work in a Python code, and it is simple to do, so just make sure that the right value is hooked up with the right variable, and you should be ready to go.

The Operators

And the last thing that we are going to take a look at in this chapter is the idea of the operators. These are another small, and often easy, part of the code that can make a big difference in how the code runs. There are many types of operators that you are able to work with, including those that will assign names to the identifiers you are using, ones that can help with simple mathematics, some that compare two statements to see if they are the same or different, and more. These operators help the compiler know exactly what you would like it to do depending on the part of code you are at. As you take a look through some of the different codes that we will work on in this guidebook, a lot of different operators are going to show up. And often you will use them without even realizing what you are doing at the time, or realizing that you are working with the operators.
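A tiny sketch pulling together the pieces from this chapter — a comment, simple assignment, multiple assignment, and a couple of operators. The names and numbers here are arbitrary, chosen just for illustration:

```python
# Assignment: the equal sign binds the value on the right to the name on the left.
age = 25

# Multiple assignment: several names filled in by one statement.
width, height = 4, 5

# Operators at work: arithmetic and comparison.
area = width * height      # arithmetic operator
is_adult = age >= 18       # comparison operator gives True or False
print(area)                # 20
print(is_adult)            # True
```

Notice how the comments explain the code to a reader without changing what the compiler does with any of the statements.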
But it would be almost impossible to do any kind of coding if you were not able to add in the operators along with some of the other parts. These are just a few of the basic parts that you are able to work on when it comes to creating your own codes in Python. These may seem simple, and you may wonder why you would need to use these in the first place, but there are so many times that you are going to see these basics show up in the code that you are trying to write, and having a strong working knowledge of them can make coding in Python so much easier.

Chapter 6: How to Write Your Own Conditional Statements

Now that we have some idea of the basics that come in a Python code, it is time to learn a few of the different tricks that you are able to do with writing your own codes as well. And the first place we are going to start is with the conditional statements. As you are writing codes, you may wish that you could, at times, set up a program and get it to behave in the manner that you would like all of the time. It would be nice to guess, ahead of time, each and every answer that the user is going to provide to the program, but of course, we know that this is impossible. Let’s say that you are working on a code where you want the program to ask the user what their favorite color is. It would take forever, and be a waste of time, to go through and write out each and every color that is available throughout the world, and it is highly likely that you would still miss some. The code would be a mess, and you would probably cut your losses and never want to code again. This is where the conditional statements come into the game. When you want to be able to write out some code that has the capability to make some decisions for you without you being there, based on certain conditions that you are able to set up ahead of time, then this is when you bring out the decision control statements, or the conditional statements.
These can be helpful any time that you are allowing the user to put in an answer on their own, rather than having a menu or listing available for them. This helps the program know what steps it needs to take based on the conditions that are set, and the answers that the user provides. Before we get too far into this, we have to look at the fact that there are three types of conditional statements that you are able to work with. The three conditional statements that we can use in Python include the elif statement, then the if else statement, and also the most basic kind known as the if statement. Let’s take some time to explore each of these and see how they work in helping to make your code perform in the proper manner. The first conditional statement that we are going to look at is the if statement. This is a simple example of a conditional statement, and many programmers want to focus more on the if else statements. But learning this one can provide us with some of the foundations that come with these kinds of statements so we can use these properly. When we are using the if statement, we will find that it relies on the idea that the answer we are given from the user is either going to be true or false based on the conditions that you set ahead of time. If the answer from the user does match up with your chosen conditions, it is true. If it doesn’t, then it is false. Of course, this isn’t to say that the user is always giving the wrong answer when it is false; it just means that they are not giving the answer that meets with the conditions. If the user does add in an answer that is seen as true and the computer sees this, then the next step is that they are going to get the information that you were able to add to the code. But if the answer is seen by the compiler and it is seen as false, then the program will just end because it really has no idea of what it is supposed to do next.
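The true-or-false behavior just described can be sketched like this. The color check and the messages are invented, and a real program would normally read the answer with input() instead of a preset variable:

```python
# Pretend the user answered "blue" (input() would normally supply this).
favorite_color = "blue"

if favorite_color == "blue":            # the condition is true for this answer
    message = "Blue is a great choice!"
elif favorite_color == "red":           # checked only if the first test failed
    message = "Red is bold!"
else:                                   # runs when no condition matched at all
    message = "That is a lovely color too."

print(message)                          # Blue is a great choice!
```

The else branch is what saves us from listing every color in the world: any answer that does not match a condition still gets a sensible response instead of the program simply ending.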
To get a better idea of how the if statement is going to look and how it is able to work inside of your code, take a look at the example below:

age = int(input("Enter your age:"))
if (age >

Traceback (most recent call last):
  File "D:\Python34\tt.py", line 3, in <module>
    result = x/y
ZeroDivisionError: division by zero
>>>

When you take a look at this example, your compiler is going to bring up an error, simply because you or the user is trying to divide by zero. This is not allowed in Python code, so it will raise that error. Now, if you leave it this way and run the program exactly how it is, you are going to get a messy error message showing up, something that your user probably won't be able to understand. It makes the code hard to understand, and no one will know what to do next.

A better idea is to look at some of the different options that you can add to your code to help prevent some of the mess from before. You want to make sure that the user understands why this exception is being raised, rather than leaving them confused in the process. A different way that you can write out this code to make sure that everyone is on the same page includes:

x = 10
y = 0
result = 0
try:
    result = x/y
    print(result)
except ZeroDivisionError:
    print("You are trying to divide by zero.")

As we can see when we work with this kind of code, when we put it into the Python compiler, it is going to be similar to the first example that we saw with these exceptions, and it is going to work in pretty much the same way. The difference is that we changed up the information at the very end, and we made sure that a message shows up here that the user is going to see when they raise this exception.
Instead of working through the exception and getting a long error code that they will not understand at all and that can just leave them confused, the user will get a message that makes more sense. In the example that we did above, they are going to get the message that says "You are trying to divide by zero." show up on the screen. Now, you don't have to go through and do this kind of step if you don't want to, but it can definitely make your code more user-friendly overall.

How to define your own exceptions in the code

As we went through the examples of coding for exceptions above, we were just spending our time looking at the exceptions that are automatically recognized by the Python library and compiler. This is going to help us out because we can make the program more user-friendly and ensure that the messages we provide are more personalized, rather than seeing a string of words and a code that no one else is able to understand.

This is just one level of working with the exceptions, though. It is possible to take this to the next level and create some of our own exceptions. The type of exception that you will want to create will depend on the kind of program that you wish to work on, but there are so many ways that you are able to use this to your advantage, and the limits are just going to be your own imagination along the way.

For example, you may be working on a code and decide that the users should only be able to input certain numbers, and the others are not allowed. This may work when you are creating a game for others to play. Or you could have an exception come up if you only want to let the user try to answer three times. Once the user has gone through all the guesses that they are allowed, the compiler will then raise one of these exceptions to tell the user they won't be allowed to guess again.
These exceptions are unique just to your code, and if you don't write them into the code, the compiler will just keep going, without recognizing that it is supposed to stop. You can add in any kind of exception that you want with this message, using a similar idea to the one we went with before. The code that you can use to make this happen includes:

class CustomException(Exception):
    def __init__(self, value):
        self.parameter = value
    def __str__(self):
        return repr(self.parameter)

try:
    raise CustomException("This is a CustomError!")
except CustomException as ex:
    print("Caught:", ex.parameter)

When you finish this particular code, you have successfully added in your own exception. When someone raises this exception, the message "Caught: This is a CustomError!" will come up on the screen. You can always change the message to show whatever you would like, but this was there as a placeholder to show what we are doing. Take a moment here to add this to the compiler and see what happens.

There are many times when you are coding when you want to be able to handle and manage the exceptions that come up in the code, and this will come up even more when you do some of the more advanced coding that is in this guidebook. Make sure to take some time on a few of these codes above and practice writing them in your compiler to get a better feel for how they work, and to really get a good understanding of handling and raising exceptions in Python.

Part 3: What is Scikit-Learn?

Chapter 9: What is Scikit-Learn?

Now, it is time to get into some of the fun stuff that we are able to do when it comes to Python machine learning. Now that we have some of the basics down and we understand some of the topics above, it is time for us to look at our very first Python library that we can use as we go through some of the machine learning algorithms in the following chapters.
But first, we need to take a look at what Scikit-Learn is all about so we know how we can use it later. There are actually quite a few things that you are going to enjoy when it comes to the Scikit-Learn environment and the library that it brings to Python. This is often one of the first extensions of Python that programmers are going to add in, especially when they want to explore machine learning, because it has a lot of the tools and other parts that can help out with machine learning programming. Any time that you want to work on machine learning of any kind with Python, you will find that it is imperative that you have a good understanding of this library.

To start, we need to take a look at some of the background that comes with the Scikit-Learn library. This particular library for Python was developed in 2007. Later on, the project started to grow a lot more and made some important changes. In fact, it has grown so much that right now, it has about 30 active contributors, and a few paid sponsorships that help it to keep going, including ones from Google, the Python Software Foundation, and INRIA. This is good news for anyone who would like to program with the library because it means that while it is still free to use, there are still developers working on maintaining it and improving it all the time.

With that in mind, it is likely that you have at least a few questions about this library and what you are able to do with it. First, this library is going to make it easier for a programmer to do machine learning because a ton of the supervised and unsupervised algorithms are inside this library. Plus, the algorithms have had the necessary adjustments done on them to help make sure they work in the Python environment, making it easier to do Python machine learning along the way.
With Scikit-Learn, the licensing comes under a permissive simplified BSD license, and many of the distributions of Linux are going to be able to use it as well. It is also built up with the help of the SciPy library, so this helps out as well when it comes to ease of use. The stack of resources that you are able to find with this one is larger and really goes to supporting machine learning in the process. The options you have for what is found in the Scikit-Learn stack include:

1. NumPy: This is a good one to use because it allows you to work with the n-dimensional array package.
2. SciPy: This one is going to be a fundamental kind of library that you would use if you wish to do computations in the scientific field.
3. Matplotlib: This is a good library to use because it is going to help you do some plotting, whether that plotting is in 2D or 3D.
4. iPython: This is a good library to use because it is going to allow you a console that is more enhanced and interactive than others.
5. Sympy: This is a library that works well if you want to do some things in symbolic mathematics.
6. Pandas: This is the number one part that you need to use because it is going to include all of the analysis and the data structures that are needed to make machine learning successful.

There are a few different modules and extensions that you are going to be able to use with SciPy, but they are all going to come together and be known as SciKits. This is why the module that provides us with all of the machine learning algorithms that we are going to be using here is called the Scikit-Learn library. And each of these pieces can come together to help us really get a lot out of the machine learning that we want to do. This library is designed to make machine learning with Python as easy as possible.
There are already a ton of different algorithms that come with this one, and they can easily be used inside of the Python library, making your job a lot easier in the process. Now let's take a look at some of the different algorithms that you are able to use when it comes to the Scikit-Learn library and why this is actually able to help you to work with machine learning a bit better.

Chapter 10: Supervised Learning with Scikit-Learn

Now that we have had a chance to look at Scikit-Learn and some of the neat things that this library can provide to us, it is time to take a look at some of the different things that you can do with the supervised machine learning algorithms in this library. There are a lot of different types of learning algorithms that you are able to use with Scikit-Learn, and these can help you get a good start with machine learning. Some of the different types of machine learning that you are able to do with the Scikit-Learn library include:

Support Vector Machines or SVMs

The first type of supervised learning that we are going to look at with Scikit-Learn is known as support vector machines, or SVMs. These are going to be a set of supervised learning methods that can be used for detecting outliers, regression problems, and classification problems. There are a lot of different reasons that we would want to work with support vector machines, including:

1. They are going to be effective when you have spaces that are high dimensional.
2. They can still be effective when you have more dimensions than the number of samples you have to work with.
3. The SVM is going to use a few training points, known as the support vectors, to make any decision you need, allowing the machine to be as efficient with its memory as possible.
4. There is a lot of versatility. There are kernel functions that you are able to specify for the decision function.
There are some common kernels that can be provided; however, the programmer is going to have the ability to create their own custom kernels in the process.

The SVMs are not going to work for every machine learning project that you decide to do, though, despite how great they are. There are a few disadvantages that come with using the SVMs. One of these is that if the number of features you have is higher than the number of samples, then choosing the right kernel function and regularization term is crucial to make sure that you do not overfit. In addition, you will find that the SVMs, though you can do a lot with them, can't give you any kind of estimate when you are looking for probability. Instead, you can calculate the estimates with the help of an expensive five-fold cross-validation. The cost of doing this can often get much higher than it is worth, and sometimes a company will choose to go with a different kind of algorithm to get their work done.

With that in mind, we need to take a look at how we are going to be able to work with a classification problem that uses the support vector machine. For this one, you need to make sure that you have Scikit-Learn, Matplotlib, Pandas, and NumPy ready to go. The first step we have to do when you have all of this set up is to create a new data set to work with. The code that you can use for this one includes:

# importing scikit learn with make_blobs
from sklearn.datasets.samples_generator import make_blobs
import matplotlib.pyplot as plt

# creating datasets X containing n_samples
# Y containing two classes
X, Y = make_blobs(n_samples=500, centers=2,
                  random_state=0, cluster_std=0.40)

# plotting scatters
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap='spring')
plt.show()

What the SVM does is to take the classes that are created and draw a line between them.
But you will also notice that it is going to take this a bit further and consider how wide the line in this region between the classes will be. To see how this works, you can try adding the following code to your compiler to see what happens:

# creating line space between -1 to 3.5
xfit = np.linspace(-1, 3.5)

# plotting scatter
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap='spring')

# plot a line between the different sets of data
plt.show()

When all of this is in place, we are ready to start importing the data set that we want to use. This is going to be considered one of the intuition parts that come with SVM, and it is done in order to make sure that we are optimizing the linear discriminant model that represents the perpendicular distance that is present in the graph between the sets of data. With this in mind, we want to be able to take the training data that we have worked on and use it to train our classifier.

Before we train this all, though, and get it to work the way that we want, we first need to make sure that we have imported the cancer dataset, and that we have done this in a way that gives us a csv file that we can use. We will use this kind of file in order to make sure we can train on the right features that we need. The code that is needed to make this happen is below:

# importing required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# reading csv file and extracting class column to y.
x = pd.read_csv("C:\...\cancer.csv")
a = np.array(x)
y = a[:, 30]  # classes having 0 and 1

# extracting two features
x = np.column_stack((x.malignant, x.benign))
x.shape  # 569 samples and 2 features

print(x, y)

We are getting close to done with this one here, but first, we need to take some time to fit our SVM properly. We want to make sure that the classifier that we are using is able to fit with these points.
While some people may enjoy working with the likelihood models and the math that goes with them, we will leave those for later. Instead, our goal is to use the Scikit-Learn library in order to get the algorithm to help us, making it into a black box that handles the tasks above. The code that is needed for this, and the part that we will need to use to allow the model to predict any new values when the fitting is done, includes:

# import support vector classifier
from sklearn.svm import SVC  # "Support Vector Classifier"
clf = SVC(kernel='linear')

# fitting x samples and y classes
clf.fit(x, y)

clf.predict([[120, 990]])
clf.predict([[85, 550]])

Neural Networks

Another option that we can work with when it comes to supervised machine learning in Scikit-Learn is the neural network. These are going to be a bit different than the other kinds, but because they work similarly to the human mind, it is amazing all of the different things that we are going to see them do, and how strong the learning can be.

The neural networks are going to be used in many applications of machine learning because they are great at analyzing and learning the different patterns that you have, simply by going through several layers at a time. The more layers that your neural network is able to go through, the more successful it is going to be at correctly analyzing and learning what the image is. As it goes through the different layers, each time that the neural network starts to see a new pattern in that layer, it can then automatically activate the process to push it into the next layer to look again. This process will continue until all of the layers are complete, and then the algorithm will use the information it has in order to predict what it sees inside this image. Now, we are going to see a few different actions happen at this point.
If the algorithm was able to continue through all of these layers, and then make an accurate prediction when it is done, then the neurons are going to be stronger. This is a good thing because it provides a great and strong association between the patterns and the object. And the stronger these neurons get, the easier it is for the algorithm and the system to do this same process the next time that it is presented with it.

To get this algorithm to work the way that you would like, you need to be able to provide your system with some kind of image, in this case, a car. The neural network would take some time to search over the picture, starting along the first layer to see what is there, such as the outside edges of the car. Then it would move on from there to some of the other layers to see if there are other characteristics that should be noted in the picture as it goes through the different layers. If it is successful, the program will go through all of the different layers until it is able to find the little details that say the picture is a car.

It is possible that the neural network is going to find a ton of layers when it comes to working with this one. And this is a good thing. The more layers that the neural network is able to figure out, the more accurate it is going to be when it comes to making predictions. And when the neural network is successful, it is going to be able to remember a lot of the patterns that it sees and will store this knowledge to use next time.

This is one of the options that you would use when you are looking to do some different processes like recognizing the animal in a picture, defining a car model, facial recognition, and more: anything that requires the system to recognize what is found in a picture or an image and then bring back the results that you are looking for.
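Scikit-Learn ships a simple feed-forward neural network of its own in the neural_network module. The sketch below is a hedged illustration of the layered learning described above, not the book's own listing; the toy data and the single hidden layer of 8 neurons are assumptions chosen only for demonstration.

```python
# A minimal sketch (not from the book) using scikit-learn's built-in
# MLPClassifier, a feed-forward neural network with hidden layers.
from sklearn.neural_network import MLPClassifier

# Toy training data: four samples with two features each, two classes.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# One hidden layer with 8 neurons; adding layers or neurons lets the
# network pick up more complex patterns, at a higher computational cost.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1)
clf.fit(X, y)

# Every prediction the network makes is one of the classes it saw
# during training.
print(clf.predict([[0, 1], [1, 1]]))
```

The hidden_layer_sizes tuple is where the "layers" from the discussion above live: each entry adds one more layer for patterns to pass through, which is also where the computational cost mentioned later comes from.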
One advantage that a programmer may see when they are working with a neural network algorithm is that you won't need all of the statistical training to go with it. Even without using all of the statistics, you can use the neural network to help you find out what relationships are there between your different variables, even when the problem is nonlinear.

There are a few negatives that come with this, though. The biggest issue that comes with the neural network algorithm is that the computational cost is going to be pretty high. This makes it so that while you can get a lot of the results that you want with machine learning, the costs can be so high that it is hard to bring out these neural network algorithms as often as you would like.

Decision Trees

Another option that the programmer is able to use when they are doing things with supervised learning with the help of this library is going to be the decision tree. These are going to be helpful when it comes to regression and classification. The goal with this kind of algorithm is to be able to create some kind of model that is able to go through and predict the value of a target variable, simply by being able to learn the decision rules that it can infer from the features of your data.

There are a lot of different advantages that come with being able to use these decision trees and the algorithms that come with them. Some of these advantages are going to include:

1. The decision trees are really simple to understand, and they are easy for even beginners to interpret. The trees are a model that can be visualized as well.
2. There isn't a big need to spend a lot of time preparing the data. But it does struggle if there are missing values, so you need to at least check that this is not a problem.
3.
The cost of using one of these trees is going to be logarithmic when we are talking about how many different points of data we need to use to make sure our tree is trained in the proper manner.
4. It has the ability to handle categorical and numerical data. Some of the other techniques that you can use, and the other algorithms out there, are only going to be able to handle one variable type.
5. It is possible to validate what is going on in this model with some statistical tests. This helps us to make sure that the model is as reliable as possible.
6. It is going to perform well even if you find that some of the assumptions were violated, or we have inconsistent data when we generated that tree.

While there are some benefits that come with using this kind of algorithm, there are also some negatives that we have to watch out for before we choose to go with this kind of machine learning algorithm. For example, sometimes the learners of decision trees are going to create trees that are too complex and are not able to generalize the data they have all that well. This is a process known as overfitting and can make it hard to get the right results that you want. In addition, these decision trees are sometimes seen as unstable because even a small variation in the data could give you a completely different tree than what you had before. And it is possible for decision tree learning to create a tree that is biased if there are a few classes that dominate. It is, therefore, best to balance out your set of data before you decide to fit the decision tree.

The reason that some companies like to work with decision trees is that it allows them to see all of the different decisions and the possible outcomes that are going to come with them. This allows you to make the best decisions for your business based on the data that you have.
In addition to working with this library, you will need to install the packages known as graphviz and pydotplus. You are able to install these with your pip or your package manager. Graphviz is going to be a helpful tool to use here because it helps us to draw the graphics that we want from dot files, and pydotplus is going to be the module to help us do this.

Once both of those are installed, we are going to start out this work by defining the code and then collecting up any of the data that is needed. Our goal is to make a decision tree that will determine if someone is a woman or a man based on the inputs. There are three inputs that we will focus on here, including the length of their hair, how tall they are, and what pitch their voice is. First, we want to set up the training data, and the code that we need to make this happen includes the following:

import pydotplus
from sklearn.datasets import load_iris
from sklearn import tree
import collections

# Data Collection
X = [[180, 15, 0],
     [177, 42, 0],
     [136, 35, 1],
     [174, 65, 0],
     [141, 28, 1]]

Y = ['man', 'woman', 'woman', 'man', 'woman']

data_feature_names = ['height', 'hair length', 'voice pitch']

After this is in your compiler and ready to go, we then want to take some time to train our classifier, which is going to be the decision tree, with the data that we set up. Training may take a bit of time, but remember that it is a necessary part of every supervised learning algorithm, so we have to take the time to do it. The code that we need to make this happen includes:

# Training
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)

From here, we need to focus on the visualization of the decision tree.
The best code to use to make sure that we can visualize the tree would be below:

# Visualize data
dot_data = tree.export_graphviz(clf,
                                feature_names=data_feature_names,
                                out_file=None,
                                filled=True,
                                rounded=True)
graph = pydotplus.graph_from_dot_data(dot_data)

colors = ('turquoise', 'orange')
edges = collections.defaultdict(list)

for edge in graph.get_edge_list():
    edges[edge.get_source()].append(int(edge.get_destination()))

for edge in edges:
    edges[edge].sort()
    for i in range(2):
        dest = graph.get_node(str(edges[edge][i]))[0]
        dest.set_fillcolor(colors[i])

graph.write_png('tree.png')

These steps are going to save the visualization that we want under an image format that is known as tree.png, so you are able to find it later. You can then go through and have the compiler execute the code and see what shows up. If you got a decision tree to show up on the screen, then this is a good sign that you have completed the code properly!

Naïve Bayes

It is also possible to work with an algorithm that is known as the Naïve Bayes algorithm. This is going to be a unique one that can simplify what you want to do with some of the coding you have, and it makes it easier to explain some of the more complex models and things that you want to do in machine learning, even to some people who may not understand how these work as much.

Here we are going to look at how to work on a new classification problem, and your goal is to come up with a new hypothesis and a design that comes with this. Then there is going to be a time when your stakeholders at the company will want to see the model that you are trying to produce with this information. Often, these stakeholders will want to see the information and learn about it, long before the work is even done.
This can present a dilemma because you want to be able to show them what your plans and your goals are, without having a finished product and without having to spend a lot of time and effort on this while everyone else is confused. When you are first forming your hypothesis, it is likely that you will run into many different points of data that you want to get to work in your model. And then you are also going to have a lot of other variables and things show up in the different training that you do. With all of this information, and the different types of data, how are you supposed to be able to show off this information to people who may not really understand what is going on?

This is where the Naïve Bayes algorithm is going to come into play. It is going to be one of the best ways for you to showcase the model that you are working on, and even do a bit of demonstration of how it works, even when it is at the earliest stages of being developed. As you get more familiar with this algorithm, you will find that there are a lot of reasons to use it. The Naïve Bayes model is easy to use and is effective at predicting the class of your test data sets, so it is the perfect choice for someone who wants to keep things simple or who is new to the whole process. Even though this algorithm is simple, it performs well, and it has proven that it can do better than some of the other higher-class algorithms in some cases.

You do need to be careful with this one, though, because there are some negatives to using the Naïve Bayes algorithm. First, when you are working with categorical variables, and you need to test data that hasn't been through the training data set, you will find that this model is not able to make a good prediction for you and will assign those data sets a 0 probability.
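To show how little code the Naïve Bayes approach needs in Scikit-Learn, here is a hedged sketch using the library's GaussianNB class; the tiny two-feature data set is an assumption made up for illustration, not the book's own example.

```python
# A minimal sketch (illustrative data, not the book's) of Naive Bayes
# classification with scikit-learn's GaussianNB.
from sklearn.naive_bayes import GaussianNB

# Two well-separated groups of samples, labeled 0 and 1.
X = [[1.0, 2.0], [1.2, 1.9], [0.9, 2.1],
     [8.0, 9.0], [8.2, 9.1], [7.9, 8.8]]
y = [0, 0, 0, 1, 1, 1]

model = GaussianNB()
model.fit(X, y)

# Predict the classes of new, unseen samples.
print(model.predict([[1.1, 2.0], [8.1, 9.0]]))
```

The whole model is three short lines of fitting and predicting, which is exactly why it works so well as an early demonstration for stakeholders before the full project is done.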
You can add some other methods that will help to solve this issue, such as the Laplace estimation, but it can be confusing for someone who is brand new to working in machine learning.

There are a lot of times when you will need to show the stakeholders your project, even when it is not done. And we can imagine how complicated a project that is at the beginning of a machine learning effort is to describe to those who are not working on it. The Naïve Bayes is a good algorithm to help us get started; it can help the stakeholders see what is going on with the process, it can make it so much easier to get their questions answered, and it can make it easier to continue on with the project that you are doing.

Working with supervised learning can open up a lot of doors when it comes to all of the different things that you are able to do with machine learning. We will look at unsupervised machine learning in the next chapter, but as you can see, there are already a lot of different options and algorithms that you are able to work with here. Take some time to try out a few of the codes above so you can see how they work, and get familiar with them for your machine learning needs.

Chapter 11: Unsupervised Machine Learning with Scikit-Learn

You can also work with some of the different unsupervised machine learning algorithms with the help of the Scikit-Learn library. There are a lot of these algorithms that you are able to explore with the help of this Python library, but some of the most common ones that programmers like to focus on include:

K-Means Clustering and Other Clustering Options

There are a lot of different clustering algorithms that work well for the unsupervised machine learning that you want to learn how to use. Unsupervised learning is often going to come with data that is not labeled, and doing this clustering is going to help us to get it all organized and easier to read.
And when we are looking at some of the different clustering algorithms that you can use, the K-Means clustering algorithm is the one that will show up the most. The goal of working with this algorithm, compared to some of the others, is to help us to look through our data and find clusters to see where they land and what they are trying to tell us.

First, we need to explore what the K-Means algorithm is all about and why we would want to use it. This is basically a simple unsupervised machine learning algorithm that works by clustering the data into K number of clusters. You are able to choose how many different clusters you want to use so that the algorithm knows what it is supposed to do and how to move each of the points that come in your data.

There are a lot of different times when you are able to use this particular algorithm for your own needs. Some of the examples would include anomaly detection, species clustering, clustering languages, news article clustering, clustering gene segmentation data, and image segmentation, to name a few of the applications.

If you are interested in working with the K-Means algorithm, there are a few different codes and steps that you are able to use in order to implement this in Python. The set of data that we will choose to use is going to include 3000 entries along with 3 different clusters. This helps us to already have an idea of what our K value is going to be. So, the first step that we are going to take a look at here is importing the right set of data.

Now, this may seem like a lot of code to write, but it is going to show us how to write out a code for the K-Means in Python and get it to work. After you have had some time to type it out, execute it to see how it is going to work. You should end up with three different clusters where most of your points are going to fit.
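A minimal sketch of the setup just described, 3000 samples in 3 clusters fit with K-Means, can look like the following. The use of make_blobs and the exact parameter values here are assumptions chosen to match the description, not the book's own listing.

```python
# Hedged sketch: generate 3000 points around 3 centers and cluster
# them with K-Means (K = 3), matching the setup described in the text.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# 3000 samples around 3 true centers, 2 features each.
X, _ = make_blobs(n_samples=3000, centers=3, random_state=0)

# Ask K-Means for 3 clusters, matching our known K value.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
kmeans.fit(X)

# One learned center per cluster, and one label per sample.
print(kmeans.cluster_centers_.shape)
print(len(kmeans.labels_))
```

Plotting X colored by kmeans.labels_ with Matplotlib would show the three distinct groups that the text describes, along with a handful of outliers away from each center.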
There will be some outliers that seem to be away from the centers that you have, but there are three distinct groups that will hold onto the majority of the points that are in your set of data. Gaussian Mixture Models The next thing that we are going to look at is the Gaussian Mixture Model. This one follows the idea of clustering and will be similar to what we found with the K-Means algorithm that we talked about before. This can make things easier for us and can show us just how great clustering can be when it comes to making predictions that help a company do better. In real life, there are going to be a lot of different sets of data that can be modeled using what is known as a Gaussian Distribution. It is a very intuitive and natural method that assumes that the clusters come from various distributions that are seen as Gaussian. In other words, the model tries to describe the data not as one single Gaussian Distribution, but as a mixture of several of them. This is going to be one of the core ideas that we see with this model. The best way to see how the Gaussian Mixture Model works is to take a look at an example and some coding to help it make more sense. In the example that we are going to use here, we are looking at the iris set of data. Inside of Python, we are able to implement GMM using the GaussianMixture class to make things easier. One thing to note before we get started is that it may be difficult to run this code on an online compiler. This is why an offline IDE is often recommended. If you do try it on an online compiler, then it is possible that it is going to encounter some problems in the process. When you are ready, it is time to load up the iris set of data into your program.
To make sure that this is as simple as possible, at least for this example, we are going to focus on just using the first two columns. This is the sepal length and also the sepal width, to make it easier. The code that you need to use in order to make this happen includes:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import DataFrame
from sklearn import datasets
from sklearn.mixture import GaussianMixture

# load the iris dataset
iris = datasets.load_iris()

# select first two columns
X = iris.data[:, :2]

# turn it into a dataframe
d = pd.DataFrame(X)

# plot the data
plt.scatter(d[0], d[1])

When this part is done, we want to be able to take the data that we have and model it as a mixture of a total of 3 Gaussians. That will take just a moment to complete, and then we can move on to do the clustering. This just means that we want to take each of the observations that we use and give them a label. During this process, we also want to make sure that we get the number of times the iteration has to go through for this kind of function so that it is able to converge. The code that you need to make this happen includes:

gmm = GaussianMixture(n_components = 3)

# Fit the GMM model for the dataset
# which expresses the dataset as a
# mixture of 3 Gaussian Distributions
gmm.fit(d)

# Assign a label to each sample
labels = gmm.predict(d)
d['labels'] = labels
d0 = d[d['labels'] == 0]
d1 = d[d['labels'] == 1]
d2 = d[d['labels'] == 2]

# plot three clusters in same plot
plt.scatter(d0[0], d0[1], c = 'r')
plt.scatter(d1[0], d1[1], c = 'yellow')
plt.scatter(d2[0], d2[1], c = 'g')

When this part of the code is done, it is time to go through and print off the converged log-likelihood value and the number of iterations the model needed, so that we are certain the model was able to converge in the end.
The code that you are going to need to use in order to get this one to work includes:

# print the converged log-likelihood value
print(gmm.lower_bound_)

# print the number of iterations needed
# for the log-likelihood value to converge
print(gmm.n_iter_)

If you have typed this into the compiler the right way, you should get an answer that includes a 7 on the second line. This is going to tell you that 7 iterations had to happen to make the model above converge. You can go through and do some more iterations past the 7, but there is not going to be a big change, and it is really just a waste of time to do more. As you can see, there are a lot of different types of unsupervised machine learning that you are able to use when it comes to the Scikit-Learn library. This chapter has helped you explore some of the algorithms that come with Scikit-Learn, and it can make exploring a bit more in the world of machine learning that much easier. Part 4: The TensorFlow Library Chapter 12: What is the TensorFlow Library The next thing that we need to spend some time looking at is the TensorFlow library. This is another option that comes from Python, and it can really help you to get some machine learning done. This one takes on a few different options of what you are able to do when it comes to machine learning, so it is definitely worth your time to learn how to use this option along with the algorithms that we talked about with the Scikit-Learn library. TensorFlow is another framework that you are able to work with in Python machine learning, and it is going to offer the programmer a few different features and tools to get your project done compared to the others. You will find that the TensorFlow framework comes from Google, and it is helpful when you are trying to work on models that are related to deep learning.
TensorFlow relies on graphs of data flow for numerical computation, and it is able to make sure that some of the different things that you can do with machine learning are easier than ever before. TensorFlow is going to help us out in many different ways. First, it can help us with acquiring the data, training the models of machine learning that we are trying to use, making predictions, and even refining future results so that they work more efficiently. Since each of these steps is important when it comes to doing some machine learning, we can see how TensorFlow can come into our project and help us reach the completion that we want. First, let’s take a look at what TensorFlow is all about and some of the background that comes with this Python library. The Brain team from Google was the first to develop TensorFlow to use on large-scale options of machine learning. It was developed in order to bring together different algorithms for both deep learning and machine learning, and it is going to make them more useful through what is known as a common metaphor. TensorFlow works along with the Python language that we talked about before. In addition to this, it is going to provide the users with a front-end API that is easy to use when working on a variety of building applications. It goes a bit further, though. Even though you work with TensorFlow through the Python coding language while you write the code and the algorithms, it is going to be able to change these up. All of the applications that you use with the help of TensorFlow are going to be executed using the C++ language instead, giving them an even higher level of performance than before. TensorFlow can be used for a lot of different actions that you would need to do to make a machine learning project a success.
Some of the things that you can do with this library, in particular, will include running, training, and building up deep neural networks, doing some image recognition, working with neural networks that are recurrent, digit classification, natural language processing, and even word embedding. And this is just a few of the things that are available for a programmer to do when they work with TensorFlow for machine learning. Installing TensorFlow With this in mind, we need to take some time to learn how to install TensorFlow on a computer before we are able to use this library. Just like we did with Scikit-Learn, we need to go through and set up the environment and everything else so that this library is going to work. You will enjoy that this kind of library already comes with a few APIs for programming (we will take a look at these in more depth later on), including Rust, Go, C++, and Java, to name a few. We are going to spend our time here looking at the way that the TensorFlow library works on the Windows system, but the steps that you have to use to add this library to your other operating systems are going to be pretty much the same. Now, when you are ready to set up and download the TensorFlow library on your Windows computer, you will be able to go through two choices on how to download this particular library. You can choose either to work with the Anaconda program to get it done, or pip is going to work well, too. The native pip is helpful because it takes all of the parts that go with the TensorFlow library and makes sure that they are installed on your system. And you get the added bonus of the system doing this for you without needing to have a virtual environment set up to get it done. However, while this one may seem like the best choice, it can come with some problems along the way.
Installing the TensorFlow library using pip can be a bit faster and doesn’t require that virtual environment, but it can come with some interference to the other things that you are doing with Python. Depending on what you plan to do with Python, this can be a problem, so take that into consideration before starting. The good thing to remember here is that if you do choose to work with pip and it doesn’t seem like it is going to interfere with what you are doing too much, you will be able to get the whole TensorFlow library to run with just one single command. And once you are done with this command, the whole library, and all of the parts that you need with it, are going to be set up and ready to use on the computer. And pip even makes it easier for you to choose the directory that you would like to use to store the TensorFlow library for easier use. In addition to using pip to help download and install the TensorFlow library, it is also possible for you to use the Anaconda program. This one is going to take a few more commands to get started, but it does prevent any interference from happening with the Python program, and it allows you to create a virtual environment that you can work with and test out without a ton of interference or other issues with what is on your computer. Though there are a few benefits to using the Anaconda program instead of pip, it is often recommended that you install this program right along with pip, rather than working with just the conda install. With this in mind, we will still show you some of the steps that it takes to just use the conda install on its own so you can do this if you choose. One more thing that we need to consider here before moving on is that you need to double-check which version of Python you are working with. Your version needs to be at Python 3.5 or higher for this to work for you.
Python 3 uses the pip3 program, and it is the best and most compatible when it comes to working with a TensorFlow install. Working with an older version is not going to work as well with this library and can cause some issues when you try to do some of your machine learning coding. You can work with either the CPU or the GPU version of this library based on what you are the most comfortable with. The first command below is for the CPU version, and the second command below is for the GPU version:

pip3 install --upgrade tensorflow
pip3 install --upgrade tensorflow-gpu

Both of these commands are going to be helpful because they ensure that the TensorFlow library is installed on your Windows system. But another option that you are able to use is the Anaconda package itself. The methods above were still working with the pip installs, but we talked about how there are a few drawbacks when it comes to this one. Pip is the program that is installed automatically when you install Python onto your system as well. But you may find out quickly that Anaconda is not. This means that if you want to ensure that you can get TensorFlow to install with this, then you need to first install the Anaconda program. To do this, just go to the website for Anaconda and then follow the instructions that come up to help you get it done. Once you have had the time to install the Anaconda program, then you will notice that within the files there is going to be a package that is known as conda. This is a good package to explore a bit at this time because it is going to be the part that helps you manage the installation packages, and it is helpful when it is time to manage the virtual environment. To help you get the access that you need with this package, you can just start up Anaconda and it will be there.
When Anaconda is open, you can go to the main screen on Windows, click the Start button, and then choose All Programs from here. You need to go through and expand things out in order to look inside of Anaconda at the files that are there. You can then click on the prompt that is there for Anaconda and then get that to launch on your screen. If you wish to, it is possible to see the details of this package by opening the command line and writing in “conda info”. This allows you to see some more of the details that you need about the package and the package manager. The virtual environment that we talk about with the Anaconda program is going to be pretty simple to use, and it is pretty much just an isolated copy of Python. It will come with all of the capabilities that you need to maintain all of the files that you use, along with the directories and the paths that go with it too. This is going to be helpful because it allows you to do all of your coding inside the Python program, and allows you to add in some different libraries that are associated with Python if you choose. These virtual environments may take a bit of time to adjust to and get used to, but they are good for working on machine learning because they give you an opportunity to isolate a project, and can help you to do some coding, without all of the potential problems that come with dependencies and version requirements. Everything you do in the virtual environment is going to be on its own, so you can experiment and see what works and what doesn’t, without messing up other parts of the code. From here, our goal is to take the Anaconda program and get it to work on creating the virtual environment that we want so that the package from TensorFlow is going to work properly. The conda command is going to come into play here again to make this happen.
Since we are going through the steps that are needed to create a brand new environment now, we will need to name it tensorenviron, and then the rest of the syntax to help us get this new environment created includes:

conda create -n tensorenviron

After you type this command in, the program is going to stop and ask you whether you want to actually create the new environment, or if you would rather cancel the work that you are currently doing. This is where we are going to type in the “y” key and then hit enter so that the environment is actually created. The installation may take a few minutes as the compiler completes the environment for you. Once the new environment is created, you have to go through the process of actually activating it. Without this activation in place, you will not have the environment ready to go for you. You just need to use the command of “activate” and then list out the name of the environment that you want to activate. Since we used the name of tensorenviron earlier, you will want to use this in your code as well. An example of how this is going to look includes:

activate tensorenviron

Now that you have been able to activate the TensorFlow environment, it is time to go ahead and make sure that the package for TensorFlow is going to be installed too. You are able to do this by using the command below:

conda install tensorflow

When you get to this point, you will be presented with a list of all the packages that are available to install in case you want to add in a few others along with TensorFlow. You can then decide if you want to install one or more of these packages, or if you want to just stick with TensorFlow for right now. Make sure to agree that you want to do this and continue through the process. The installation of this library is going to get to work right away.
But it is going to be a process that takes some time, so just let it go without trying to backspace or restart. The speed of your internet is going to be a big determinant of whether this takes a long time or not. Soon though, the installation process for this library is going to be all done, and you can then go through and see if this installation process was successful or if you need to fix some things. The good news is the checking phase is going to be easy to work with because you can just use the import statement of Python to set it up. This statement that we are writing is then going to go through the regular terminal that we have with Python. If you are still working, like you should be, with the prompt from Anaconda, then you can type in the word python and hit enter. This will make sure that you are inside the terminal that you need for Python so you can get started. Once you are in the right terminal for this, type in the code below to help us get this done and make sure that TensorFlow is imported and ready to go:

import tensorflow as tf

At this point, the program should be on your computer and ready to go, and we can move on to the rest of the guidebook and see some of the neat things that you can do with this library. There may be a chance that the TensorFlow package didn’t end up going through the way that it should. If this is true for you, then the compiler is going to present you with an error message for you to read through, and you need to go back and make sure the code has been written in the right format along the way. The good news is, if you finish running this line of code above and you don’t get an error message at all, then this means that you have set up the TensorFlow package the right way and it is ready to use!
With that said, we need to explore some more of the options and algorithms that a programmer can use when it comes to the TensorFlow library, and get to learn how it works with the different machine learning projects you want to do. Chapter 13: Working with the High-Level APIs The first thing that we need to take a look at when working with the TensorFlow library is some of the high-level APIs that can help you get the work done. Some of the best high-level APIs that work well with this kind of machine learning library will include: Keras The first high-level API that we are going to look at that comes from TensorFlow is known as Keras. This is a high-level neural networks API, and it is written in Python while being able to run on top of other programs like Theano, CNTK, and TensorFlow. It was originally developed with a focus on enabling fast experimentation. This means that it is going to be able to go from idea to result as quickly as possible, with the least amount of delay possible, since moving quickly from idea to result is a big part of doing good research. You would want to use Keras if you want to work with a library for deep learning that can: 1. Allow for fast and easy prototyping. This is going to happen with user-friendliness, extensibility, and modularity. 2. Support both types of networks, including recurrent and convolutional ones, as well as any kind of combination of both these networks. 3. Run seamlessly on both GPU and CPU. There are a few different principles that come with Keras that help to guide it and make sure that it is going to work the way that it should. First off, Keras was designed in order to be user-friendly. This is an API that has been designed to be used by humans, rather than one that was designed to be used by machines. And because of this shift of focus, it makes sure that the experience of the user is front and center.
To help make sure that it is friendly for a user to work with, you are going to see that it has the best practices in place to reduce the amount of cognitive load, it offers APIs that are simple and consistent, and it is going to minimize how many actions are required by the user for most of the common problems that they are using this API for. The next guiding principle that comes with Keras is modularity. A model is going to be understood as a sequence or a graph of standalone, configurable modules that can be plugged together with Keras, cutting out as many restrictions as possible. In particular, it works with standalone modules such as regularization schemes, activation functions, initialization schemes, optimizers, cost functions, and neural network layers, and it can help us combine them together so that the programmer is able to work on any kind of model they need for their project. Keras is also able to help out because it is easy to extend. New modules are going to be really easy to add in Keras, and you can just add them as new functions and classes. In addition, the existing modules are going to provide us with a lot of the examples that we need. Being able to create new modules with total expressiveness makes Keras suitable for advanced research in the process. And finally, another guiding principle that makes Keras easy to work with and one of the best high-level APIs to use with TensorFlow is that it works with Python. All of the modules are going to be described with the help of Python code. And since we have already seen how great this code can be, how compact it is, and how easy this coding language is even for beginners, we can see why it is such a good thing to use the Python language along with the Keras API.
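To make the modularity idea concrete, here is a plain-NumPy analogy. This is not Keras code; the layer function, sizes, and seed are made up for illustration. The point it sketches is that each layer is a standalone module, and a model is just a sequence of modules plugged together, much like a Keras Sequential stack:

```python
import numpy as np

# NOT Keras: a toy analogy of its modularity principle. Each "layer" is a
# standalone, configurable module (here just a function), and a model is a
# plain sequence of these modules plugged together.

def dense(n_in, n_out, rng):
    # one standalone module: a dense layer with a tanh activation
    W = rng.standard_normal((n_in, n_out)) * 0.1
    b = np.zeros(n_out)
    return lambda x: np.tanh(x @ W + b)

rng = np.random.default_rng(0)

# compose the modules, like layers in a Keras Sequential model
model = [dense(2, 8, rng), dense(8, 8, rng), dense(8, 1, rng)]

x = rng.standard_normal((5, 2))   # a batch of 5 samples with 2 features
out = x
for layer in model:               # data flows through one module at a time
    out = layer(out)

print(out.shape)  # (5, 1): one output per sample
```

Swapping a layer, adding one, or reordering them is just list editing here, which is the restriction-free plugging-together the text describes.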
Eager Execution The next thing that we are going to take a look at here is the eager execution that comes with TensorFlow. This is going to be an imperative programming environment for you to use. It is helpful because it is able to come in and evaluate operations right away, without needing to build up graphs in the process. The operations are going to return concrete values instead of constructing a computational graph that you run later. This makes eager execution a good way to get started with TensorFlow and to debug your models, while also reducing some of the boilerplate that comes with it. Eager execution is going to be a machine learning platform that is flexible and great for experimentation and research at the same time. Some of the things that eager execution can provide to the programmer include: 1. An interface that is intuitive: You will be able to structure your code in a natural manner while using the structures for data in Python. You are going to be able to iterate quickly on small data and small models. 2. Easier debugging: You can call ops directly to inspect the running models and test changes as they happen. You can also see that eager execution is going to use the standard tools for debugging from Python in order to get error reporting that happens right away. 3. Natural control flow: You will be able to benefit from using the control flow from Python rather than the control flow of graphs. This is going to help us simplify some of the specifications that happen in dynamic models with this system.
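To see the difference the chapter is describing, here is a pure-Python analogy. This is not TensorFlow code; it only illustrates the two styles under that stated assumption: in the graph style, you first describe a computation and run it later, while in the eager style, the value exists the moment the line executes:

```python
# Pure-Python analogy (not TensorFlow itself): eager execution evaluates
# operations immediately, while graph execution first builds a description
# of the computation and only runs it in a later, separate step.

a, b = 3, 4

# Eager style: the concrete value exists as soon as the line runs,
# so you can print it or inspect it right away while debugging.
eager_result = a * b
print(eager_result)  # 12

# Graph style: first describe the computation (the "graph")...
graph = lambda: a * b   # nothing is computed yet
# ...then execute it in a separate step (like a session run).
graph_result = graph()  # computed only now
print(graph_result)     # 12
```

The eager line can be stepped through with ordinary Python debugging tools, which is the "easier debugging" benefit listed above.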
Estimators Estimators are going to be explored in more detail when it comes to TensorFlow, but they are definitely one of the high-level APIs found here that can make it easier to do some of the programming that is needed with machine learning. Estimators are going to help us with a few different actions to make life easier, including export for serving, prediction, evaluation, and training. You get the benefit of working with either a premade estimator to make things easier, or, if none of the pre-made options work for you, you can create your own estimator that is custom and works for your needs. All of the estimators, whether they are custom or pre-made, are going to be considered classes that are based on the class of tf.estimator.Estimator. There are going to be a few benefits that you are going to enjoy when it comes to using these estimators. Some of the benefits that you are able to get with these estimators are going to include: 1. You are able to run models based on the Estimator on a local host or on a distributed multi-server environment, and you do not have to change up the model. You can also run it on the TPU, GPU, and CPU without needing to go through and recode the model. 2. Estimators are going to help make it easier to share any implementations that you have between model developers. 3. You can develop a great model that has code that is high-level and intuitive with the Estimators. This is going to make it a lot easier to create models with the Estimators than it is to create these models with any of the lower-level APIs on TensorFlow. 4. You will find that since the Estimators are built using tf.keras.layers, it is easier to customize the process as you want. 5. Estimators are going to build up a graph for you. 6. Estimators provide a safe distributed training loop. These are going to help you have more control over how and when to do some of the following tasks: a. Build up a new graph b.
Save some of the summaries to the TensorBoard c. Create a new checkpoint file and then recover from some of the failures d. Handle the exceptions that come up e. Load up the data that you are using f. Initialize any of the variables that you are using. Importing data If you do any kind of work with TensorFlow, you will find that working with the feed_dict option to pass information into the TensorFlow library is one of the worst possible ways to feed the data through. It is slow, and it is going to end up with errors more often than not. The best way for a programmer to get the data over to the models in TensorFlow is going to be an input pipeline, in order to make sure that the GPU you are working with doesn’t have to wait for new data to come in. The best part of this is that there is a built-in API inside of the TensorFlow library known as Dataset that can be used to make this task easier. We are going to take some time to work through all of the steps that the library needs so we can make our own input pipeline, and then we can also look at some of the coding that is needed to make sure our data is fed into the model in the most efficient manner possible. Now, we have to look at the three main steps that are needed in order to make sure we can use the Dataset API with TensorFlow. These three steps are going to include: 1. Importing the data. We will be able to do this by creating a Dataset instance with the data we want to use. 2. Creating our Iterator. When we use the dataset that we created in order to make an Iterator instance, we can then have this iterate through the whole dataset. 3. Consuming the data. Then we can take that iterator that we created in order to get the elements that we need from the dataset to feed the model. Looking at the steps above, we need to start out with the importing data part of this process. This means that we need to start out by picking out the data that is needed to place inside the dataset.
Often you will want to work with an array from NumPy and then pass it over to TensorFlow. The code that you can use to make this happen includes:

# create a random vector of shape (100,2)
x = np.random.sample((100,2))

# make a dataset from a numpy array
dataset = tf.data.Dataset.from_tensor_slices(x)

It is also possible to go through and pass more than one array from NumPy. One of the examples of using this can include when you have a few types of data that have been split up between labels and features. Some of the code that you would want to use with this will include the following:

features, labels = (np.random.sample((100,2)),
                    np.random.sample((100,1)))
dataset = tf.data.Dataset.from_tensor_slices((features,labels))

Another method that you are able to use in order to initialize your own dataset is through a generator. You can often be fine using the NumPy approach from earlier, but this method is going to be pretty useful any time that you are going to work with a type of array that has element lengths that are different, such as a sequence. The code that you are able to use to make this one happen includes:

# from generator
sequence = np.array([[[1]],[[2],[3]],[[3],[4],[5]]])

def generator():
    for el in sequence:
        yield el

dataset = tf.data.Dataset.from_generator(generator,
                                         output_types=tf.int64,
                                         output_shapes=(tf.TensorShape([None, 1]))).batch(1)

iter = dataset.make_initializable_iterator()
el = iter.get_next()

with tf.Session() as sess:
    sess.run(iter.initializer)
    print(sess.run(el))
    print(sess.run(el))
    print(sess.run(el))

After you have been able to initialize the dataset that you want to work with, it is time to go through and create the iterator. This is going to allow us to get our data back when we want, and it can iterate through the dataset while retrieving the values of data in real time. You will find that working with this is going to provide us with four different types of iterators to use. These four iterator types are going to include: 1. One-shot.
This is one that is able to iterate through the dataset just one time, and you won’t be able to feed any value to it. 2. Initializable. This is something that you are able to change dynamically, calling up the operation for initialization while making sure that the new data is passed on through the function of feed_dict. This one is nice because it is pretty much like a bucket that you are able to fill up with the stuff you want to use. 3. Reinitializable. This is going to be initialized from a completely different Dataset. It is going to be the most helpful to a programmer when they are working on any kind of training of their set of data where they would like to transform it in an additional manner. This could include shuffling and testing a dataset. This is going to be similar to using a tower crane in order to go over and select the container that you want to use. 4. Feedable: This kind is going to be used in order to help you figure out which of the iterators to use. It is going to be similar to a tower crane that is able to select which of the cranes in your arsenal then gets to go through and pick out the container that will work for you. This one is often not needed because it adds in too many steps. And finally, we are able to work with the idea of consuming the data that we need. In this one, we need to be able to pass the data to a model to make it work, and all that we need to do to see this happen is to pass on the tensors generated from get_next(). In the little bit of code that we have below, we are going to have a Dataset that comes with two arrays from NumPy, and we will still stick with the example that we used in the first section. We need to take notice that it is time to wrap up the function of .random.sample in another array through NumPy so that we can add in a dimension. This means that we need to batch the data that we want to use.
The example of the code that we need to make this happen is shown below:

# using two numpy arrays
features, labels = (np.array([np.random.sample((100,2))]),
                    np.array([np.random.sample((100,1))]))
dataset = tf.data.Dataset.from_tensor_slices((features,labels)).repeat().batch(BATCH_SIZE)

When we are done with this one, it is time to create the iterator that we want to use. The code for getting this done is below:

iter = dataset.make_one_shot_iterator()
x, y = iter.get_next()

Now there are a few different things that come into play here to make sure we get the results that we want. The first part is where we make our own model; in this example, we are going to work with a simple neural network. In the second part of the code, the loop controlled by EPOCHS, we use the tensors that the function iter.get_next() gives us directly: as the input for the first layer, and as the labels for our loss function. This is basically how we make sure that all of the parts are wrapped together.
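The shape of that two-part structure — build a model, then loop over epochs pulling batches and taking an optimizer step — can be sketched without TensorFlow. The following is a framework-free, illustrative version using NumPy, with plain gradient descent standing in for AdamOptimizer; none of these variable names come from the book's listing:

```python
import numpy as np

EPOCHS = 10
rng = np.random.RandomState(0)
x = rng.random_sample((100, 2))   # features
y = rng.random_sample((100, 1))   # labels

# part one: a (very) simple model -- a single linear layer
W = np.zeros((2, 1))
b = 0.0
lr = 0.1

# part two: the EPOCHS loop, consuming the data and minimizing the loss
losses = []
for i in range(EPOCHS):
    prediction = x.dot(W) + b
    loss = np.mean((prediction - y) ** 2)   # mean squared error
    grad = 2.0 * (prediction - y) / len(x)  # d(loss)/d(prediction)
    W -= lr * x.T.dot(grad)                 # gradient step on the weights
    b -= lr * grad.sum()                    # gradient step on the bias
    losses.append(loss)
    print("Iter: {}, Loss: {:.4f}".format(i, loss))
```

The loss shrinks from one iteration to the next, which is exactly what the TensorFlow version of this same loop prints out.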
The code that you need to make both of these parts happen includes:

# make a simple model
net = tf.layers.dense(x, 8)  # pass the first value from iter.get_next() as input
net = tf.layers.dense(net, 8)
prediction = tf.layers.dense(net, 1)

loss = tf.losses.mean_squared_error(prediction, y)  # pass the second value from iter.get_next() as label
train_op = tf.train.AdamOptimizer().minimize(loss)

Putting the whole program together:

EPOCHS = 10
BATCH_SIZE = 16

# using two numpy arrays
features, labels = (np.array([np.random.sample((100,2))]),
                    np.array([np.random.sample((100,1))]))
dataset = tf.data.Dataset.from_tensor_slices((features,labels)).repeat().batch(BATCH_SIZE)

iter = dataset.make_one_shot_iterator()
x, y = iter.get_next()

# make a simple model
net = tf.layers.dense(x, 8, activation=tf.tanh)  # pass the first value from iter.get_next() as input
net = tf.layers.dense(net, 8, activation=tf.tanh)
prediction = tf.layers.dense(net, 1, activation=tf.tanh)

loss = tf.losses.mean_squared_error(prediction, y)  # pass the second value from iter.get_next() as label
train_op = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(EPOCHS):
        _, loss_value = sess.run([train_op, loss])
        print("Iter: {}, Loss: {:.4f}".format(i, loss_value))

This is a lot of code, but take some time to write it all down and see what results you get in the end. It is a good way to practice some of the Python coding that we talked about earlier, and it shows more of how to use the TensorFlow library for machine learning as well.

Chapter 14: The Estimators in TensorFlow

Now that we have had some time to look at the high-level APIs, it is time to take a quick look at some of the Estimators that you are able to work with in this library as well. These Estimators provide a lot of different benefits, and you are going to enjoy the freedom and convenience that they offer.
Let’s dive in so we can get a better idea of how these Estimators work and why you would want to use them in TensorFlow.

The Pre-Made Estimator

These Estimators make it possible to work at a conceptually much higher level than the base APIs in this library. You do not have to spend time creating the computational graphs or the sessions as before, because the Estimator takes care of all of this for you, which makes it easier to focus on the programming that you actually want to do. The main thing that we need to look at here is the structure of an Estimator program. A TensorFlow program that relies on a pre-made Estimator typically includes the four steps below:

1. Write one or more dataset-importing functions. For example, you might create one function to import the set that you are using for training, and a second function to import the set used for testing. Each of these needs to return two objects:
a. A Tensor that holds one or more labels.
b. A dictionary in which the keys are the feature names and the values are the corresponding feature tensors. These tensors contain the data that you need for the features.
2. Define the feature columns. Each tf.feature_column identifies a feature name, its type, and any pre-processing of the input that you need.
3. Instantiate the relevant pre-made Estimator.
4. Call a training, inference, or evaluation method, based on what you want to do with this particular code.

So, this brings up the question of why you would want to work with these pre-made Estimators in the first place.
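A hypothetical, framework-free mimicry of the four steps above can make the shape of such a program concrete. None of these names are TensorFlow's, and the toy "Estimator" here does nothing more than average a feature:

```python
class ToyEstimator:
    """Stands in for a pre-made Estimator; it just averages one feature."""
    def __init__(self, feature_columns):
        self.feature_columns = feature_columns  # step 2: declared columns
        self.mean_ = None

    def train(self, input_fn):                  # step 4: call a method
        features, labels = input_fn()           # step 1: importing function
        col = self.feature_columns[0]
        self.mean_ = sum(features[col]) / len(features[col])
        return self

    def predict(self, input_fn):
        features, _ = input_fn()
        return [self.mean_] * len(features[self.feature_columns[0]])

def train_input_fn():
    # returns the pair described above: a feature dictionary plus labels,
    # here as plain Python objects rather than Tensors
    return {"x": [1.0, 2.0, 3.0]}, [0, 1, 1]

est = ToyEstimator(feature_columns=["x"])       # step 3: instantiate
est.train(train_input_fn)
print(est.predict(train_input_fn))              # [2.0, 2.0, 2.0]
```

The genuine pre-made Estimators naturally do far more than this toy, but the calling pattern — input functions in, method calls out — is the same.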
These Estimators encode best practices that make your work easier and help ensure that the coding gets done right. For someone who wants to write proper Python and machine learning code, but who is just getting started, the pre-made Estimator is the best option to go with.

Checkpoint

The next thing that we need to look at in TensorFlow is the idea of a checkpoint. When you look through the Keras docs, you will find several explanations of what this term is all about, but the things to remember about a checkpoint include:

1. It contains the architecture of the model, which allows you to recreate the model.
2. It contains the weights of the model.
3. It can include the training configuration, such as the epochs, optimizer, loss, and other important meta-information.
4. It contains the state of the optimizer, which allows us to resume the training right where we left off.

Basically, the checkpoint contains the information that is needed to save the current state of your experiment. This allows you to take a break or check on other parts of the code, and then go back and resume the training right where you left it, rather than having to babysit it the whole time or start over from the beginning.

Making your Own Custom Estimator

The next thing that you can consider working on is creating your own custom Estimator. While many beginners in TensorFlow focus on the pre-made Estimators to get the work done correctly, there are going to be times when you need to create your very own Estimator that behaves exactly the way you want for your program. At the heart of every type of Estimator, whether you are working with a custom or a pre-made one, you are going to find the model function.
This is the specific method that builds up the needed graphs for all of the training, the evaluation, and the predictions that are needed for machine learning. If you happen to be using an Estimator that was already made, this means that someone else has gone through and implemented all of the functions that you need for that model to work. This makes life easier, but when you are working with a custom Estimator, you are the one responsible for writing out the model function on your own. This takes a bit more time and effort to complete, but it is what makes things go the right way with a custom Estimator. When you are working on one of these custom Estimators for your own machine learning needs, there is a certain workflow that will make things easier. The workflow that is recommended in most cases includes:

1. Assuming that a suitable pre-made Estimator exists, use it to build your first model, and then use that model to establish a good baseline to work from.
2. Build and test the overall pipeline that you are working on, including the integrity and reliability of the data, using this pre-made Estimator.
3. If several suitable pre-made Estimators are available, run a few experiments to see which of the options works the best and gives you the best results.
4. If you need it, and you think that it will improve your model beyond what the pre-made Estimators are able to do, consider building your own custom Estimator.

Creating a custom Estimator takes some time and effort, and it is not always very easy for a beginner in machine learning to do.
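The "one function serving training, evaluation, and prediction" idea at the heart of the model function can be sketched without TensorFlow. This is a made-up, illustrative signature — not tf.estimator's actual contract — where the toy "model" simply predicts a stored mean:

```python
def model_fn(features, labels, mode, params):
    prediction = params.get("mean", 0.0)   # toy model: predict a stored mean
    if mode == "predict":
        return {"predictions": [prediction] * len(features)}
    loss = sum((prediction - y) ** 2 for y in labels) / len(labels)
    if mode == "train":
        params["mean"] = sum(labels) / len(labels)   # "fit" the model
    return {"loss": loss}

params = {}
print(model_fn([1, 2], [1.0, 3.0], "train", params)["loss"])  # 5.0 (before fitting)
print(model_fn([1, 2], [1.0, 3.0], "eval", params)["loss"])   # 1.0 (after fitting)
print(model_fn([1, 2], None, "predict", params))              # predictions of 2.0
```

Even in this toy form, writing the model function yourself is clearly more work than calling a pre-made Estimator that already contains one.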
This is why many programmers like to spend their time working with the pre-made options to help them get started. But when it is needed, and when the other available options do not suit your needs, working with a custom Estimator can be your best choice as well.

Chapter 15: Understanding the Low-Level APIs

In addition to the Estimators that we talked about before, and the high-level APIs that we covered a few chapters back, it is also time to understand a bit more about the lower-level APIs that make the TensorFlow library work the way that it is meant to. These low-level APIs are just as important to the functioning of the machine learning algorithms that you want to explore, so understanding them and how they work matters as well. Some of the different low-level ideas that we need to focus on include:

What is a Tensor

The first thing that we are going to look at when following the low-level APIs in TensorFlow is the idea of a tensor. This is basically a type of mathematical object, represented by an array of components that are functions of the coordinates of a space. As we discussed earlier, Google created its own framework for machine learning, known as TensorFlow, and it uses tensors because they allow us to work with neural networks that are highly scalable. Google surprised the analysts of the industry when it open-sourced the TensorFlow machine learning library, and the move paid off: it soon became the go-to machine learning framework for developers of all backgrounds. In the beginning, though, TensorFlow was just used internally by Google, and it ends up helping the company even more when a lot of different developers learn how to use this library. There are different ways to define one of these tensors.
They can be defined at a single point, over a collection of isolated points of space, or over a continuum of points. In the latter case, the elements of the tensor are functions of position, and the tensor forms what is called a tensor field. This sounds really complicated, but it simply means that the tensor is defined at every point in that space, rather than just at one point or one collection of points. It is possible for a tensor to consist of just one single number, in which case it is referred to as a tensor of order zero, or a scalar. A good example of a scalar is the mass of an object, and a scalar field can be seen as, for example, the density of a fluid as a function of position. The next tensor that we need to focus on is the tensor of order one, which is often referred to as a vector. Just like the scalar, or any of the other tensors that you are going to work with, a vector can be defined at just one point or at several points, and it can vary continuously from point to point to define a vector field. So, in normal three-dimensional space, a vector comes with three components. If you are working with four-dimensional space-time, the vector comes with four components, and so on with this same idea.

The Variables

The TensorFlow variable is the best way to represent shared, persistent state manipulated by your program. Variables are manipulated with the help of the class tf.Variable.
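The tensor orders described above map directly onto NumPy's `ndim` attribute, which counts how many indices are needed to pick out a component — order zero is a scalar, order one a vector, order two a matrix:

```python
import numpy as np

scalar = np.array(3.5)               # order 0, e.g. the mass of an object
vector = np.array([1.0, 2.0, 3.0])   # order 1: three components in 3-D space
matrix = np.array([[1, 2], [3, 4]])  # order 2: two indices needed

print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2
```

In TensorFlow, a tf.Variable holds exactly such an array, of any order.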
This represents a tensor whose value can be changed by running ops on it. Unlike tf.Tensor objects, a variable exists outside the context of a single call or a single session. Internally, this class stores a persistent tensor. Specific ops allow you to read and modify the values of this tensor, and the modifications that you make are visible across multiple sessions. The reason that this is so important is that if you have several workers sharing your session, they will all see the same values whenever they look at the tf.Variable. The best way to create one of these variables is with the function tf.get_variable. This function asks you to specify the name of the variable that you want to use. Make sure that you choose a variable name that makes sense for what you are doing, because this name will be used by other replicas to access the variable later on. It is also important to name the variable well so you can find it when exporting and checkpointing the models. The tf.get_variable function is also useful because it allows the programmer to reuse a variable of the same name that was created earlier, making it easier to define models that reuse layers.

Graphs and Sessions

TensorFlow uses what is known as a dataflow graph to represent your computations in terms of the dependencies between the individual operations that you work with.
What this leads to is a low-level programming model in which you first define the dataflow graph, and then create a session in which you run one or more parts of that graph across a set of devices, which can be both local and remote. If you plan to do some low-level programming with TensorFlow, this section is going to help you. The higher-level APIs are useful at times, but they hide the details of the sessions and the graphs from the end user; this section brings them out a bit and helps us see how those APIs are implemented. Now we need to take a look at dataflow graphs and why we would use them. Dataflow is a common programming model for parallel computing. In one of these graphs, the nodes represent units of computation, and the edges represent the data that is consumed or produced by a computation. For example, in a TensorFlow graph, the tf.matmul operation corresponds to a single node with two incoming edges, which are the matrices to be multiplied, and one outgoing edge, which is the result of the multiplication. This model is beneficial, and the TensorFlow library is able to leverage its benefits when it executes your programs. Some of the ways that the TensorFlow library is able to do this include:

1.
Parallelism: By using explicit edges to represent the dependencies between operations, the system can see which operations are independent and can therefore be executed in parallel.
2. Distributed execution: By using explicit edges to represent the values that flow between operations, TensorFlow can partition your program across many different devices — CPUs, GPUs, and TPUs — attached to many different machines. The library makes sure to insert the necessary communication and coordination between all of these devices.
3. Compilation: The XLA compiler that comes with TensorFlow can use the information in your dataflow graph to generate faster code, for example by fusing together operations that are adjacent.
4. Portability: And finally, the dataflow graph is language-independent. This means that you are able to build it up in the Python language, store it in what is known as a SavedModel, and then restore it later in a C++ program if you prefer.

The next kind of graph that we need to take a look at is known as the tf.Graph. This is an important graph because it holds onto two types of information that are really important to us. The first bit of information is the structure of the graph. The edges and the nodes indicate how the individual operations are composed together, but the graph does not prescribe how you should use them. You can think about the structure of this graph like assembly code.
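A toy dataflow graph makes the nodes-and-edges picture concrete. This is a hypothetical, framework-free sketch — each node names an operation, and its incoming edges name the values it consumes, mirroring the tf.matmul example above:

```python
graph = {
    "a":   (lambda: 2, []),
    "b":   (lambda: 5, []),
    "mul": (lambda x, y: x * y, ["a", "b"]),  # two incoming edges, one result
    "inc": (lambda x: x + 1, ["mul"]),
}

def run(node, cache=None):
    # evaluate a node by first evaluating everything it depends on
    cache = {} if cache is None else cache
    if node not in cache:
        fn, deps = graph[node]
        cache[node] = fn(*[run(d, cache) for d in deps])
    return cache[node]

print(run("inc"))  # 11: (2 * 5) + 1
```

Because "a" and "b" have no edges between them, a scheduler is free to evaluate them in parallel — exactly the property item 1 above describes. Like assembly code, the structure says what runs, not why.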
If you stop and take some time to inspect what is there, it conveys some useful information, but it does not contain all of the context that is conveyed in the source code. The second kind of information that you will see in the tf.Graph is the graph collections. TensorFlow provides a general mechanism for storing collections of metadata in your tf.Graph. The function tf.add_to_collection makes it possible to associate a list of objects with a key, and then you are able to use tf.get_collection to look up all of the objects associated with that key. Many parts of the library use this facility. So, if you create a tf.Variable, it is added by default to the collections of global and trainable variables. Then, when you later create a tf.train.Optimizer or a tf.train.Saver, the variables in these collections are used as the default arguments. Another thing that we can take a look at when working with the graphs in this library is how to name operations. A tf.Graph defines a new namespace for the tf.Operation objects it contains. TensorFlow automatically chooses a new and unique name for each of the operations in the graph, but you can override this and pick out your own names, ones that are descriptive and make the program easier to read and debug. There are a few ways you can override the name that TensorFlow gives an operation so that it makes more sense for you. One of the easiest methods to do this is the function tf.name_scope.
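The automatic unique naming of operations can be sketched with a tiny registry. This is an illustrative, made-up class — not TensorFlow's implementation — that appends a numeric suffix whenever a requested name has already been used:

```python
class NameRegistry:
    def __init__(self):
        self.counts = {}

    def unique(self, name):
        n = self.counts.get(name, 0)
        self.counts[name] = n + 1
        return name if n == 0 else "{}_{}".format(name, n)

reg = NameRegistry()
print(reg.unique("matmul"))  # matmul
print(reg.unique("matmul"))  # matmul_1
print(reg.unique("dense"))   # dense
print(reg.unique("matmul"))  # matmul_2
```

tf.name_scope builds on this by letting you group operations under an explicit prefix of your own choosing.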
This function adds a name scope prefix to all of the operations created within a particular context. The separator between the scope and the operation name is "/", so you can nest scopes to build up whatever names make sense for your needs. If you find that a specific name has already been used in that context, TensorFlow appends "_1", "_2", and so on to keep the names unique.

Save and Restore

The next thing that we need to take a look at is how to save and restore some of the variables that we are working on in our code. There is likely to be some point while working on a machine learning algorithm where you will want to save and restore the variables that you are working on. This allows you to come back to them later without them getting deleted or lost, and without having to restart all of the work that you are doing. So, let’s get started. As you can guess at this point, the variables in the TensorFlow library are one of the best ways to represent the shared and persistent state that your program manipulates. The tf.train.Saver constructor gives us the opportunity to add save and restore ops to the graph, either for all of the variables in the graph or for a specific list of variables that you control. The Saver object then comes in handy because it provides methods to run these ops, while letting us specify the paths to the checkpoint files to read and write. We will use the Saver often because it is able to restore any of the variables that we took the time to define in our model earlier.
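The save-then-restore round trip can be sketched without TensorFlow. The following is an illustrative stand-in that uses JSON files in place of TensorFlow's own checkpoint format; the dictionary keys are made up:

```python
import json
import os
import tempfile

state = {
    "weights": [[0.1, -0.2], [0.4, 0.0]],   # model variables
    "optimizer_state": {"step": 300},        # so training can resume later
}

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
with open(path, "w") as f:
    json.dump(state, f)          # "save": write the current variables to disk

with open(path) as f:
    restored = json.load(f)      # "restore": the values come back as-is

print(restored["optimizer_state"]["step"])  # 300
```

TensorFlow's Saver writes a binary checkpoint format rather than JSON, but the round trip is the same idea: persist the variables, then load them back without recomputing them.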
If you plan to load up a model without knowing how its graph was built, such as when you are writing a generic program to load models, then you definitely need to go through saving and restoring properly, so you don’t lose the work. So, the first thing that we are going to focus on here is how to save the variables in TensorFlow. This is done by creating a Saver with the function tf.train.Saver() to manage all of the variables in the model, and then calling the tf.train.Saver.save method to write the variables out to the checkpoint files whenever we want. The next thing that we need to take a look at is how to restore some of the variables that we want to work with inside the code. The tf.train.Saver object is not only able to save the variables to the checkpoint files, but it also comes into use when we want to restore them. Before we do this, though, note that when we restore variables, we do not need to initialize them ahead of time. Working with the lower-level APIs that come with TensorFlow to write the different machine learning code that you want to create can make things easier. There is just so much that you are able to do with TensorFlow, and missing out on any of it can make machine learning that much harder. Take some time to look at the different things that you are able to do with the lower-level APIs inside of this library.

Conclusion

Thank you for making it through to the end of Python Machine Learning.
Let’s hope it was informative and able to provide you with all of the tools you need to achieve your goals, whatever they may be. Python and machine learning may be the answer that you are looking for when it comes to all of these needs and more. Machine learning is a process that can teach your machine how to learn on its own, similar to what the human mind can do, but much faster and more efficiently. It has been a game-changer in many industries, and this guidebook tried to show you the exact steps that you can take to make this happen. Some of the topics discussed in this guidebook concerning Python machine learning include:

What machine learning is all about
The different types of machine learning that you can work with
How AI and machine learning are the same, and how they are different
A Python crash course that includes how to set up Python on your computer, writing conditional statements, and raising your own exceptions
What Scikit-Learn is and how to use this Python library
Some of the supervised and unsupervised machine learning that you can do with the Scikit-Learn library
Working with the TensorFlow library
Working with the high-level and low-level APIs in this library
What Estimators have to do with TensorFlow and how you can make them work for your needs

There is just so much that a programmer is able to do when it comes to using machine learning in their coding, and when you add it together with the Python coding language, you can take it even further, even as a beginner. The next step is to start putting some of the knowledge that we discussed in this guidebook to good use. There are a lot of great things that you are able to do when it comes to machine learning, and when we combine it with the Python language, there is nothing that we can’t do when it comes to training our machine or our computer. This guidebook took some time to explore a lot of the different things that you are able to do when it comes to Python machine learning.
We looked at what machine learning is all about, how to work with it, and even a crash course on using the Python language for the first time. Once that was done, we moved right into combining the two of these together to work with a variety of Python libraries to get the work done. If you have ever wanted to learn how to work with the Python coding language, or you want to see what machine learning is able to do for you, then this guidebook is the ultimate tool that you need! Take a chance to read through it and see just how powerful Python machine learning can be for you.
On Sat, 2004-10-09 at 20:48, Philip Martin wrote:
> On the 1.0.x branch r10357 uses a non-public symbol from libsvn_subr,
> namely svn_config__enumerate_sections, and that symbol is not present
> on the 1.1.x branch. If a third-party application did this we would
> claim that it was breaking the rules and could not expect such a
> symbol to be present in 1.1.x. Is mod_authz_svn allowed to get away
> with it because it is part of Subversion? If so we need to explain to
> distributers that they need to ensure that the 1.1.x libraries
> conflict with the 1.0.x modules.

The logic we applied (at my urging) is that we should feel free to use internal symbols between libraries when necessary, because we have no reason to expect that users won't have a matched set of libraries and no reason to want to make that work. We didn't consider the logistical issue that people might upgrade the libraries without upgrading the httpd modules. (Or downgrade them, within a particular x.y line.)

I tend to believe that if someone misses the httpd modules in an upgrade or downgrade, it was probably by mistake, and it's almost better that we catch the mistake than not (though an obscure linking error is not the best way to do so). But perhaps we should consider the httpd modules to be like applications, and make them follow the strict rules as if they were external programs.

Another reason not to have this class of Subversion-private (but not library-private) symbols is that it means we can't use platform-specific mechanisms for protecting the library symbol namespace. So perhaps we should wean ourselves of the practice, although it does give us a lot of latitude at times.

Received on Sun Oct 10 18:12:20 2004

This is an archived mail posted to the Subversion Dev mailing list.
import "github.com/cockroachdb/cockroach/pkg/workload"

Package workload provides an abstraction for generators of sql query loads (and requisite initial data) as well as tools for working with these generators.

connection.go csv.go driver.go pgx_helpers.go sql_runner.go stats.go workload.go

AutoStatsName is copied from stats.AutoStatsName to avoid pulling in a dependency on sql/stats.

ApproxDatumSize returns the canonical size of a datum as returned from a call to `Table.InitialRowFn`. NB: These datums end up getting serialized in different ways, which means there's no one size that will be correct for all of them.

CSVMux returns a mux over http handlers for csv data in all tables in the given generators.

ColBatchToRows materializes the columnar data in a coldata.Batch into rows.

DistinctCount returns the expected number of distinct values in a column with rowCount rows, given that the values are chosen from maxDistinctCount possible values using uniform random sampling with replacement.

HandleCSV configures a Generator with url params and outputs the data for a single Table as a CSV (optionally limiting the rows via `row-start` and `row-end` params). It is intended for use in implementing a `net/http.Handler`.

NewCSVRowsReader returns an io.Reader that outputs the initial data of the given table as CSVs. If batchEnd is the zero-value it defaults to the end of the table.

Register is a hook for init-time registration of Generator implementations. This allows only the necessary generators to be compiled into a given binary.

SanitizeUrls verifies that the given SQL connection strings have the correct SQL database set, rewriting them in place if necessary. This database name is returned.

func WriteCSVRows(
	ctx context.Context, w io.Writer, table Table, rowStart, rowEnd int, sizeBytesLimit int64,
) (rowBatchIdx int, err error)

WriteCSVRows writes the specified table rows as a csv.
If sizeBytesLimit is > 0, it will be used as an approximate upper bound for how much to write. The next rowStart is returned (so last row written + 1).

type BatchedTuples struct {
	// NumBatches is the number of batches of tuples.
	NumBatches int

	// FillBatch is a function to deterministically compute a columnar-batch of
	// tuples given its index.
	//
	// To save allocations, the Vecs in the passed Batch are reused when possible,
	// so the results of this call are invalidated the next time the same Batch is
	// passed to FillBatch. Ditto the ByteAllocator, which can be reset in between
	// calls. If a caller needs the Batch and its contents to be long lived,
	// simply pass a new Batch to each call and don't reset the ByteAllocator.
	FillBatch func(int, coldata.Batch, *bufalloc.ByteAllocator)
}

BatchedTuples is a generic generator of tuples (SQL rows, PKs to split at, etc). Tuples are generated in batches of arbitrary size. Each batch has an index in `[0,NumBatches)` and a batch can be generated given only its index.

func Tuples(count int, fn func(int) []interface{}) BatchedTuples

Tuples is like TypedTuples except that it tries to guess the type of each datum. However, if the function ever returns nil for one of the datums, you need to use TypedTuples instead and specify the coltypes.

TypedTuples returns a BatchedTuples where each batch has size 1. It's intended to be easier to use than directly specifying a BatchedTuples, but the tradeoff is some bit of performance. If colTypes is nil, an attempt is made to infer them.

func (b BatchedTuples) BatchRows(batchIdx int) [][]interface{}

BatchRows is a function to deterministically compute a row-batch of tuples given its index. BatchRows doesn't attempt any reuse and so is allocation heavy. In performance-critical code, FillBatch should be used directly, instead.

type ConnFlags struct {
	*pflag.FlagSet
	DBOverride  string
	Concurrency int
	// Method for issuing queries; see SQLRunner.
    Method string
}

ConnFlags is a helper of common flags that are relevant to QueryLoads.

NewConnFlags returns an initialized ConnFlags.

type FlagMeta struct {
    // RuntimeOnly may be set to true only if the corresponding flag has no
    // impact on the behavior of any Tables in this workload.
    RuntimeOnly bool
    // CheckConsistencyOnly is expected to be true only if the corresponding
    // flag only has an effect on the CheckConsistency hook.
    CheckConsistencyOnly bool
}

FlagMeta is metadata about a workload flag.

type Flags struct {
    *pflag.FlagSet
    // Meta is keyed by flag name and may be nil if no metadata is needed.
    Meta map[string]FlagMeta
}

Flags is a container for flags and associated metadata.

Flagser returns the flags this Generator is configured with. Any randomness in the Generator must be deterministic from these options so that table data initialization, query work, etc can be distributed by sending only these flags.

type Generator interface {
    // Meta returns meta information about this generator, including a name,
    // description, and a function to create instances of it.
    Meta() Meta
    // Tables returns the set of tables for this generator, including schemas
    // and initial data.
    Tables() []Table
}

Generator represents one or more sql query loads and associated initial data.

FromFlags returns a new validated generator with the given flags. If anything goes wrong, it panics. FromFlags is intended for use with unit test helpers in individual generators, see its callers for examples.

type Hooks struct {
    // Validate is called after workload flags are parsed. It should return an
    // error if the workload configuration is invalid.
    Validate func() error
    // PreLoad is called after workload tables are created and before workload
    // data is loaded. It is not called when storing or loading a fixture.
    // Implementations should be idempotent.
    //
    // TODO(dan): Deprecate the PreLoad hook, it doesn't play well with fixtures.
    // It's only used in practice for zone configs, so it should be reasonably
    // straightforward to make zone configs first class citizens of
    // workload.Table.
    PreLoad func(*gosql.DB) error
    // PostLoad is called after workload tables are created and workload data is
    // loaded. It is called after restoring a fixture. This, for example, is
    // where creating foreign keys should go. Implementations should be
    // idempotent.
    PostLoad func(*gosql.DB) error
    // PostRun is called after workload run has ended, with the duration of the
    // run. This is where any post-run special printing or validation can be done.
    PostRun func(time.Duration) error
    // CheckConsistency is called to run generator-specific consistency checks.
    // These are expected to pass after the initial data load as well as after
    // running queryload.
    CheckConsistency func(context.Context, *gosql.DB) error
    // Partition is used to run a partitioning step on the data created by the workload.
    // TODO (rohany): migrate existing partitioning steps (such as tpcc's) into here.
    Partition func(*gosql.DB) error
}

Hooks stores functions to be called at points in the workload lifecycle.

Hookser returns any hooks associated with the generator.

type InitialDataLoader interface {
    InitialDataLoad(context.Context, *gosql.DB, Generator) (int64, error)
}

InitialDataLoader loads the initial data for all tables in a workload. It returns a measure of how many bytes were loaded. TODO(dan): It would be lovely if the number of bytes loaded was comparable between implementations but this is sadly not the case right now.

var ImportDataLoader InitialDataLoader = requiresCCLBinaryDataLoader(`IMPORT`)

ImportDataLoader is a hook for binaries that include CCL code to inject an IMPORT-based InitialDataLoader implementation.
type JSONStatistic struct {
    Name          string   `json:"name,omitempty"`
    CreatedAt     string   `json:"created_at"`
    Columns       []string `json:"columns"`
    RowCount      uint64   `json:"row_count"`
    DistinctCount uint64   `json:"distinct_count"`
    NullCount     uint64   `json:"null_count"`
}

JSONStatistic is copied from stats.JSONStatistic to avoid pulling in a dependency on sql/stats.

func MakeStat(columns []string, rowCount, distinctCount, nullCount uint64) JSONStatistic

MakeStat returns a JSONStatistic given the column names, row count, distinct count, and null count.

type Meta struct {
    // Name is a unique name for this generator.
    Name string
    // Description is a short description of this generator.
    Description string
    // Details optionally allows specifying longer, more in-depth usage details.
    Details string
    // Version is a semantic version for this generator. It should be bumped
    // whenever InitialRowFn or InitialRowCount change for any of the tables.
    Version string
    // PublicFacing indicates that this workload is also intended for use by
    // users doing their own testing and evaluations. This allows hiding workloads
    // that are only expected to be used in CockroachDB's internal development to
    // avoid confusion. Workloads setting this to true should pay added attention
    // to their documentation and help-text.
    PublicFacing bool
    // New returns an unconfigured instance of this generator.
    New func() Generator
}

Meta is used to register a Generator at init time and holds meta information about this generator, including a name, description, and a function to create instances of it.

Get returns the registered Generator with the given name, if it exists.

Registered returns all registered Generators.

MultiConnPool maintains a set of pgx ConnPools (to different servers).

func NewMultiConnPool(cfg MultiConnPoolCfg, urls ...string) (*MultiConnPool, error)

NewMultiConnPool creates a new MultiConnPool. Each URL gets one or more pools, and each pool has at most MaxConnsPerPool connections.
The pools have approximately the same number of max connections, adding up to MaxTotalConnections.

func (m *MultiConnPool) Close()

Close closes all the pools.

func (m *MultiConnPool) Get() *pgx.ConnPool

Get returns one of the pools, in round-robin manner.

func (m *MultiConnPool) PrepareEx(
    ctx context.Context, name, sql string, opts *pgx.PrepareExOptions,
) (*pgx.PreparedStatement, error)

PrepareEx prepares the given statement on all the pools.

type MultiConnPoolCfg struct {
    // MaxTotalConnections is the total maximum number of connections across all
    // pools.
    MaxTotalConnections int
    // MaxConnsPerPool is the maximum number of connections in any single pool.
    // Limiting this is useful especially for prepared statements, which are
    // prepared on each connection inside a pool (serially).
    // If 0, there is no per-pool maximum (other than the total maximum number of
    // connections which still applies).
    MaxConnsPerPool int
}

MultiConnPoolCfg encapsulates the knobs passed to NewMultiConnPool.

Opser returns the work functions for this generator. The tables are required to have been created and initialized before running these.

PgxTx is a thin wrapper that implements the crdb.Tx interface, allowing pgx transactions to be used with ExecuteInTx.

Commit is part of the crdb.Tx interface.

func (tx *PgxTx) ExecContext(
    ctx context.Context, sql string, args ...interface{},
) (gosql.Result, error)

ExecContext is part of the crdb.Tx interface.

Rollback is part of the crdb.Tx interface.

type QueryLoad struct {
    SQLDatabase string
    // WorkerFns is one function per worker. It is to be called once per unit of
    // work to be done.
    WorkerFns []func(context.Context) error
    // Close, if set, is called before the process exits, giving workloads a
    // chance to print some information.
    // It's guaranteed that the ctx passed to WorkerFns (if they're still running)
    // has been canceled by the time this is called (so an implementer can
    // synchronize with the WorkerFns if need be).
    Close func(context.Context)
    // ResultHist is the name of the NamedHistogram to use for the benchmark
    // formatted results output at the end of `./workload run`. The empty string
    // will use the sum of all histograms.
    //
    // TODO(dan): This will go away once more of run.go moves inside Operations.
    ResultHist string
}

QueryLoad represents some SQL query workload performable on a database initialized with the requisite tables.

SQLRunner is a helper for issuing SQL statements; it supports multiple methods for issuing queries. Queries need to first be defined using calls to Define. Then the runner must be initialized, after which we can use the handles returned by Define. Sample usage:

    sr := &workload.SQLRunner{}
    sel := sr.Define("SELECT x FROM t WHERE y = $1")
    ins := sr.Define("INSERT INTO t(x, y) VALUES ($1, $2)")
    err := sr.Init(ctx, conn, flags)
    // [handle err]
    row := sel.QueryRow(1)
    // [use row]
    _, err = ins.Exec(5, 6)
    // [handle err]

A runner should typically be associated with a single worker.

func (sr *SQLRunner) Define(sql string) StmtHandle

Define creates a handle for the given statement. The handle can be used after Init is called.

func (sr *SQLRunner) Init(
    ctx context.Context, name string, mcp *MultiConnPool, flags *ConnFlags,
) error

Init initializes the runner; must be called after calls to Define and before the StmtHandles are used. The name is used for naming prepared statements. Multiple workers that use the same set of defined queries can and should use the same name.

The way we issue queries is set by flags.Method:

- "prepare": we prepare the query once during Init, then we reuse it for each execution. This results in a Bind and Execute on the server each time we run a query (on the given connection). Note that it's important to prepare on separate connections if there are many parallel workers; this avoids lock contention in the sql.Rows objects they produce. See #30811.
- "noprepare": each query is issued separately (on the given connection).
This results in Parse, Bind, Execute on the server each time we run a query.

- "simple": each query is issued in a single string; parameters are rendered inside the string. This results in a single SimpleExecute request to the server for each query. Note that only a few parameter types are supported.

StmtHandle is associated with a (possibly prepared) statement; created by SQLRunner.Define.

func (h StmtHandle) Exec(ctx context.Context, args ...interface{}) (pgx.CommandTag, error)

Exec executes a query that doesn't return rows. The query is executed on the connection that was passed to SQLRunner.Init. See pgx.Conn.Exec.

func (h StmtHandle) ExecTx(
    ctx context.Context, tx *pgx.Tx, args ...interface{},
) (pgx.CommandTag, error)

ExecTx executes a query that doesn't return rows, inside a transaction. See pgx.Conn.Exec.

Query executes a query that returns rows. See pgx.Conn.Query.

QueryRow executes a query that is expected to return at most one row. See pgx.Conn.QueryRow.

QueryRowTx executes a query that is expected to return at most one row, inside a transaction. See pgx.Conn.QueryRow.

func (h StmtHandle) QueryTx(
    ctx context.Context, tx *pgx.Tx, args ...interface{},
) (*pgx.Rows, error)

QueryTx executes a query that returns rows, inside a transaction. See pgx.Tx.Query.

type Table struct {
    // Name is the unqualified table name, pre-escaped for use directly in SQL.
    Name string
    // Schema is the SQL formatted schema for this table, with the `CREATE TABLE
    // <name>` prefix omitted.
    Schema string
    // InitialRows is the initial rows that will be present in the table after
    // setup is completed. Note that the default value of NumBatches (zero) is
    // special - such a Table will be skipped during `init`; non-zero NumBatches
    // with a nil FillBatch function will trigger an error during `init`.
    InitialRows BatchedTuples
    // Splits is the initial splits that will be present in the table after
    // setup is completed.
    Splits BatchedTuples
    // Stats is the pre-calculated set of statistics on this table. They can be
    // injected using `ALTER TABLE <name> INJECT STATISTICS ...`.
    Stats []JSONStatistic
}

Table represents a single table in a Generator. Included is a name, schema, and initial data.

Package workload imports 30 packages and is imported by 67 packages. Updated 2019-11-12.
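The DistinctCount helper documented above ("expected number of distinct values ... using uniform random sampling with replacement") has a standard closed form, which a short Python sketch can illustrate. The function name is mine, and whether the Go implementation uses exactly this formula is an assumption; the math itself is just linearity of expectation.

```python
def distinct_count(row_count, max_distinct_count):
    """Expected number of distinct values when row_count values are drawn
    uniformly, with replacement, from max_distinct_count possible values."""
    # A given value is missed by one draw with probability (1 - 1/k), so it is
    # missed by all n draws with probability (1 - 1/k)**n. Summing the per-value
    # inclusion probabilities gives the expected distinct count:
    k = max_distinct_count
    return k * (1 - (1 - 1 / k) ** row_count)

# A single row contributes exactly one distinct value:
print(distinct_count(1, 5))  # 1.0
# With many rows, the expectation approaches the number of possible values:
print(distinct_count(10_000, 5))
```

Note the result is an expectation (a float), not an integer; the Go function presumably rounds or truncates as needed for statistics injection.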
https://godoc.org/github.com/cockroachdb/cockroach/pkg/workload
CC-MAIN-2019-47
refinedweb
2,453
50.12
Working With Sencha App Templates: Boilerplate Ext JS 6 code.

Software in a day

Maybe you have experienced this before. You meet with a client (or worse, your boss 😉), he explains what kind of application he wants, and then he asks you the big question: "How long will it take?" Being realistic (and adding a little more time on top of that), you say: "A month!" "A month? It's just an application with 3 screens. I was thinking tomorrow." You know, software in a day. It should have been done yesterday. Now, making software for yesterday is impossible. But software within a day is possible.

Maybe you already had a preview at SenchaCon or one of the SenchaCon Roadshows: something else that's new in Ext JS 6 is app templates. This is boilerplate code you can use to quick-start developing full (enterprise) applications. And wow! It does look pretty. These app templates are full code examples, far beyond the current kitchensink examples. They are written with best-practice code (because originally they were written with Sencha Architect) and they are responsive. These templates make use of the new Triton Ext JS 6 theme, and therefore they are highly customisable, have a flat design, and make use of icon fonts for all icons (so no additional image requests). Currently this app template is only available for the classic toolkit (so ideally you would use it for desktop/tablet apps), but eventually we will also have templates for the modern toolkit. You can find the app template in the template folder of your Ext JS 6 SDK.

Preview the Dashboard - App Template:

Now to show you how easily I created a custom application with the app template, here's a little tutorial. I used these steps to create the FitDashboard app (a Jawbone UP / fitness mashup application).
Take over the template

1. Generate an app with the same namespace as the app template: sencha generate app Admin ../my-path
2. Navigate to the ext-6/template/admin-dashboard folder, and copy over all the contents to your my-path folder
3. Run: sencha app refresh to refresh the bootstrap.json.
4. Make a Sencha build: sencha app build

You can now run your application in the browser.

How To Modify The Template

- The app template is not available yet for the modern toolkit. Therefore let's disable the modern toolkit in index.html. (You can remove the folders of the modern toolkit if you want.) Comment out:

    //else if (s.match(/\bmodern\b/)) {
    //    profile = 'modern';
    //}

- Open the app/store/NavigationTree.js file, and remove the pages from the menu which you don't need. If you want, you can remove those view directories from the view folder, and the data, model, store and sass/var & sass/src folders. The way I did it: I created a _temp folder, where I moved all the classes that I won't need. Then I created more subfolders for my own data. And run a sencha app refresh.
- To remove the logo from the header, open Viewport.js. You can modify the logo from the component with the itemId: headerBar. The styling for this logo bar can be found in classic/sass/src/view/main/Viewport.scss.
- You can modify the sencha-logo class. Afterwards build the theme. The overall base color can be set in classic/sass/var/view/Viewport.scss ($base-color). Make sure you build (or watch) the theme.
- Once you're happy, it's time to modify the data. You can find the data in the app/data folder. I would start with creating my own subfolders, and remove everything else to a _temp folder, since that way it will be easier later on to remove all the data you are not using.
- app/store/NavigationTree.js — Contains the store with the menu items. You can add new ones here. Note the viewType which uses the widget alias (view xtype).
- app/view/, classic/view/, modern/view — Here you can add new views, viewcontrollers and viewmodels.
- Do a sencha app refresh before testing.

See the screenshot of how my application finally looks. I created this, together with the data (which probably took the most time), within a day. I think that's really fast! And you can do this too!

EDIT: It's now also possible to generate an application based on an app template.
https://www.leeboonstra.com/developer/tag/boilerplate/
Build Your Own Internet Radio Receiver

Tune. In particular, I used the site ThinkWiki to research the Linux support of Thinkpads. On eBay, I found that the least expensive units often were sold without HDD—which is just fine with me, since I wanted a small SSD to keep the computer (whose main task is audio) quiet. I settled on a Thinkpad X61, but any notebook from that era will have more than enough oomph, and generally much more than any low-cost single-board computer option. I wanted an optical audio link, a TOSLINK, and again, ThinkWiki is an excellent resource for looking into issues like driver support. I went with a used Soundblaster Audigy Cardbus sound card (because the system also doubles as an audio server for my FLAC recordings), which was a bit more pricey, but to save a few bucks, you can pick up a USB to TOSLINK converter on eBay for ~$10. My fondness for TOSLINK is due to its inherent electrical isolation that minimizes the chance of any audio hums from ground loops. And heck, I just think communicating by light is cool.

The other big piece of hardware is the keypad. To prototype, I just grabbed a wireless numeric pad with 22 keys, but for the final project, I spent a little more for a dedicated 48-key pad (Figure 2). The wireless keypad, of course, has the advantage that it can act as a remote control I can carry around the room when switching stations.

Figure 2. Dedicated 48 Key Pad

After getting all the pieces together, the next step is to install your favorite distro. I went with Linux Mint, but I'll probably try elementary for the next iteration. The main piece of code is pyradio, which is a Python-based internet radio tuner. The install is simple with snap:

    $ sudo apt install snapd
    $ sudo snap install pyradio

You'll also need a media player, such as VLC or MPlayer. I always need to look for where stuff gets dropped; for that I use:

    $ cd /
    $ sudo find . -name pyradio

In this case, I found the executable at /snap/bin/pyradio.
Like Music on Console (MOC), pyradio is a curses-based player. I find myself reverting to curses interfaces these days for a few reasons: nostalgia, simplicity of programming and an attempt to shake free of the ever-more clogged browser control interfaces that once held the promise of a universal portal, but have since become bogged down with push "services"—that is, advertising. If you have not done any previous curses programming, check out the recent example provided by Jim Hall of using the ncurses library in Linux Journal.

Take a look at the pyradio GitHub repository if you run into any installation issues. You also can build pyradio from source after cloning the repository with the commands:

    $ python setup.py build
    $ python setup.py install

You don't really need to know much, if any, Python beyond the two simple commands above to get running from source. Also, depending on your setup, you may need to use sudo with the commands above. If all goes well, and after adding /snap/bin to your path, issuing the command:

    $ pyradio

will bring up a screen like that shown in Figure 3.

Figure 3. pyradio Screenshot

You drive pyradio with a few keyboard-based commands like the following:

- Up/Down/j/k/PgUp/PgDown — change station selection.
- g — jump to first station.
- <n>G — jump to nth/last station.
- Enter/Right/l — play selected station.
- r — select and play a random station.
- Space/Left/h — stop/start playing selected station.
- -/+ or ,/. — change volume.
- m — mute.
- v — save volume (not applicable for vlc).
- o s R — open/save/reload playlist.
- DEL,x — delete selected station.
- ? — show keys help.
- Esc/q — quit.

Some of those commands will change after you do the keypad mapping. Next, you'll want to add your own station list to the mix. For that, I search for the file stations.csv with the command: $ sudo find .
-name stations.csv

and see that snap put the file at:

    /home/[user_id]/snap/pyradio/145/.config/pyradio/stations.csv

Open stations.csv with an editor and replace the default stations there with your own selection. For instance, some of my entries look like this: KMUZ, KVMR, RNZ1, etc ... The syntax is as straightforward as it looks. The field separator is a comma, and the first field is any text you want, presumably describing the station. I just use the call sign. The second field is a link to the stream. And this is where you face the first obstacle.

Finding all the streams you want can be a bit tedious, particularly if you want to go directly to the source and not a secondary link from an aggregator website. Also, once upon a time, there were just a few encoding formats (remember .ram?), but now there are a multitude of formats and proprietary services. So identifying a good URL for the stream can be a bit of a challenge. I start by going directly to the station's website, and if you are lucky, it will provide the URL for the given stream. If not, you need to do a bit of hunting.

Using the Google Chrome browser, pull up the page View→Developer→Developer Tools. On the left part of the screen is the web page and on the right are a few windows for developers. Click the menu labeled Network, and then start the audio stream. Under the "Network" window, step through the column labeled "Name". You should see the "Request URL" appear on the right, and you want to take notice of any link that could lead to the audio stream. It will be the one with a lot of packets bouncing to and fro. Copy the URL (and the IP number at "Request IP"), and then test it out by pasting the URL or IP:PORT number into the address box in a browser. The URL might cause the start of the audio stream, or it might lead to a file that contains information—like a Play LiSt File (.pls file)—used to identify the stream.
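The stations.csv format described above — one "name,stream-url" pair per line — is simple enough to check or generate programmatically. A small illustrative Python sketch (my own helper, not part of pyradio):

```python
import csv
import io

def read_stations(text):
    """Parse pyradio's stations.csv: first field is a free-form name
    (e.g. a call sign), second field is the stream URL."""
    stations = []
    for row in csv.reader(io.StringIO(text)):
        if len(row) >= 2:
            stations.append((row[0].strip(), row[1].strip()))
    return stations

# The KMUZ address comes from the example below; the second line is made up.
sample = "KMUZ,http://70.38.12.44:8010/\nEXAMPLE,http://example.org/stream\n"
print(read_stations(sample))
```

Using the csv module (rather than a bare split on commas) means station names containing quoted commas still parse correctly.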
For a specific example, consider KMUZ (a community radio station in Turner, Oregon). I first go to KMUZ's home page at the URL, KMUZ.org. I note the "Listen Live" button on the home page, but before running the stream, I open the "Network" window in "Developer Tools". When that window is open, I click the "Listen Live" button, search through the names in Requested URLs, and see the IP number and port, 70.38.12.44:8010. Pasting either of those identifiers into the URL box of the browser, I find the stream is from (a proprietary service) Shoutcast, which provides a Play LiSt file (.pls). I then open the playlist file with an editor (.pls files are plain ASCII) to confirm that the IP/Port is the stream for listening to KMUZ.

Note two things. First, there are a lot of formats/protocols in use to create a stream. You might find an MP3 (.mp3) file during your hunting, a multimedia playlist file (.m3u), an advanced audio coding (.aac) stream or just a vanilla URL. So, getting a link to the stream you want requires some hunting and pecking. Second, if there is a preamble to the stream, you can usually avoid that by waiting for the stream to pass to the live broadcast, and then grab the live stream. That way, you won't need to listen to the preamble the next time you start the station.

Your choice of audio player (VLC, MPlayer or similar) needs to be able to decode whatever formats you end up with for your particular group of radio stations.

The other place you might run into difficulties is mapping the keys on the keypad. That, of course, depends on the specific keypad you use. If you are lucky, the keypad is documented. If not, or to double-check the map, use a program to capture the keycodes as you press each key. Here is a Python program to find the keycodes (note that msvcrt is Windows-only; on Linux, the curses or termios/tty modules can capture raw key presses instead):

    # Note: msvcrt is Windows-only; on Linux use curses or termios/tty
    from msvcrt import getch
    while True:
        print(ord(getch()))

The other small piece of coding you need to do is point each keypress to a station. Locate the radio.py program in the same directory as the stations.csv.
Edit the Python script so that each keypress causes the desired action. For instance, the streams in the stations.csv are indexed by pyradio from 1 to N. If the first station in the list is KMUZ, and the keycode for the key you want to use is "h", then add or modify the radio.py script to include the snippet:

    if char == ord('h'):
        self.setStation(1)
        self.playSelection()
        self.refreshBody()
        self.setupAndDrawScreen()

The functions/methods you will use are clearly labeled, such as the playSelection method above. So you really don't need any detailed knowledge of Python to make these changes. Make sure though that any changes do not conflict with other assignments of the keycode within the script. Functions, such as "mute", can be reassigned with the snippet:

    if char == ord('m'):
        self.player.mute()
        return

Whatever changes you make though, try to keep the program usable from the notebook keyboard, so you still can do basic operations without the external keypad.

And that's just about it. Every good program, however, should have one kludge so as not to offend the gods. I wanted the pyradio program to run automatically after booting, and for that, I put a ghost in the machine. There are more natural ways to run pyradio at boot, but I like a rather spooky way using a shell script at login with xdotool:

    sleep 0.2
    xdotool mousemove 100 100 click 1
    xdotool type "pyradio"
    xdotool key KP_Enter

xdotool lets you script keyboard and mouse inputs to run as if you were actually typing from the keyboard. It comes in quite handy for curses programs.

Finally, I would be remiss if I didn't recommend a good radio show. My favorite at the moment is Matinee Idle on Radio New Zealand National, which plays a few times a year during holidays. It's like College Radio for the over-50 set.
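Since several of the stream hunts above end at a Shoutcast-style .pls playlist, here is a small Python sketch that pulls the stream URLs out of one. The helper name is mine; the sample playlist is modeled on the KMUZ example (only the File1 URL is from the article — the other fields are typical .pls boilerplate):

```python
def pls_stream_urls(pls_text):
    """Extract the FileN= entries from a .pls playlist (a plain INI-style
    text file), returning the stream URLs in order of appearance."""
    urls = []
    for line in pls_text.splitlines():
        line = line.strip()
        # Stream entries look like "File1=http://host:port/path"
        if line.lower().startswith("file") and "=" in line:
            key, _, value = line.partition("=")
            if key[4:].isdigit():  # File1, File2, ... but not e.g. "Filename"
                urls.append(value.strip())
    return urls

sample = """[playlist]
NumberOfEntries=1
File1=http://70.38.12.44:8010/
Title1=KMUZ
Length1=-1
"""
print(pls_stream_urls(sample))  # ['http://70.38.12.44:8010/']
```

Once extracted, the URL can be pasted straight into the second field of stations.csv.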
https://www.linuxjournal.com/content/build-your-own-internet-radio-receiver
Hi All,

I need to replace Perl's built-in open() function. The reason I want to do this is described in detail in another recent post of mine. In a nutshell, I have to do encoding conversions on filename arguments (unicode -> CP932). Actually, it's not only open() I need to wrap, but open() appears to be the most flexible beast of those, and I'd very much appreciate it if some of you wise monks could take a look at what I currently have, and let me know if I've overlooked something...

_____ I18N/Japanese.pm (the 'compatibility' module) _____

    package I18N::Japanese;

    use Encode 'encode';
    use Symbol 'qualify_to_ref';

    # eventually, determine this dynamically
    my $encoding = "cp932";

    # this is meant to take effect for whoever uses us
    require encoding;
    encoding->import($encoding);

    # override/wrap Perl built-ins that take or return filenames
    # (... snippage of all but open()-wrapper)
    *CORE::GLOBAL::open = sub (*@) {
        my $fhref = \$_[0];
        my $autov = !defined $_[0];
        my $fh = qualify_to_ref(shift, scalar caller);
        # pass filehandle up to caller when "open my $f, ..."
        $$fhref = $fh if $autov;
        my ($arg2, $arg3, @args) = convert_encoding(@_);
        # need to handle the different prototypes separately
        if (@_ >= 3) {
            CORE::open $fh, $arg2, $arg3, @args;
        } elsif (@_ == 2) {
            if (defined $arg3) {
                CORE::open $fh, $arg2, $arg3;
            } else {
                # must be undef _syntactically_
                CORE::open $fh, $arg2, undef;
            }
        } elsif (@_ == 1) {
            CORE::open $fh, $arg2;
        } else {
            CORE::open $fh;
        }
    };

    sub convert_encoding {
        return ( map ref(\$_) eq 'SCALAR' ? encode($encoding, $_) : $_, @_ );
    }

    1;

_____ using the replaced open() _____

    use I18N::Japanese;

    open F, ">", "myfile" and print F "foo\n";

    open my $f, ">", "myfile" or die $!;
    print $f "foo\n";
    # ...

I believe this code is able to handle all various usages of open() ... but please don't hesitate to prove me wrong ;) Otherwise, well, I'd be glad to share this snippet with whoever in need might google this up in the future.
(Note: the encoding conversion aspect is not what I'm worried about at the moment, but rather whether the replaced open() is still behaving like the built-in one, interface-wise)

Thanks,
Almut

---

BTW, what operating Windows you have..? Err, what system operates your windows? What is your $^O? :)

All in all, you'd be better off not overriding 'open' for such purposes; the danger is not worth the risk! If your $^O is MSWin32, then following my advice could help: Re: enumerate windows unicode filenames

HTH

---

I deliberately chose to use CORE::GLOBAL in my specific case (despite the unless-you-know-what-you're-doing type of warnings in "Overriding Built-in Functions", Chap. 11.3, the Camel Book). Only having a vague idea of what the existing code(1) looks like, I thought that, all in all, I might be better off replacing the built-ins globally.

As I understand it, Exporter only exports into a specific namespace, e.g.

_____ MySystem.pm _____

package MySystem;

use Exporter 'import';
@EXPORT = qw(system);

sub system {
    print "wrapped system(): @_\n";
}

1;
[download]

_____ test.pl _____

#!/usr/bin/perl
use MySystem;

system "echo foo";

package SomeOtherModule;

system "echo bar";
[download]

would print

$ ./test.pl
wrapped system(): echo foo
bar
[download]

i.e. the second call of system() is not being wrapped... I'd rather not have to take care of such subtleties (not all that sure I'm not getting myself into other subtleties this way, though... ;)

Could you elaborate on why not to use CORE:: ?

Almut

_____
(1) as I mentioned in Using literal Japanese filenames in legacy CP932 encoding with system(), etc., the idea behind writing a jperl compatibility module is that the large number of existing scripts wouldn't need to be
http://www.perlmonks.org/?node_id=580833
First, subclass NSArrayController in Interface Builder. Name it something like MyController. Also, create the files for the new class in your project. Next, select the BookController instance that was created before and change its 'custom class' to MyController.

Go back to Xcode. Add the following line to MyController.h (it doesn't really matter, but I got a compiler warning otherwise):

#include "Book.h"

Now, override the -(id)newObject method in MyController.m as follows:

- (id)newObject
{
    id newBook = [super newObject];
    [newBook setTitle: @"Unknown Title"];
    [newBook setAuthor: @"Unknown Author"];
    return newBook;
}

Just drag MyController.h into IB, save everything and compile!
http://archive.oreilly.com/cs/user/view/cs_msg/41580?page=last&x-maxdepth=0
Hey peoples,

I'm an absolute beginner with Unity and Vuforia. For my Bachelor thesis, I'm working on an AR application augmenting a physical 2D map. For filtering the content, I tried to use some 3D buttons on the side which execute different filter options. On YouTube, I found several videos on setting up a raycast to track model hits through touch interaction.

For testing, I copied the code from the video. I attached the script, as he does in his video (), to my image target. My 3D button (Cube) is a child of it with a mesh collider on it. Whatever I do, my camera / Android device does not detect any hits at all. For some reason I never get into the needed part of the function, because Physics.Raycast(ray, out hit) never returns true... I am absolutely clueless as to what else I could do to make this work. I've already tried every piece of advice I found on several forums and/or videos.

This is the code I use:

using UnityEngine;
using UnityEngine.Events;
using UnityEngine.EventSystems;

public class buttonController : MonoBehaviour
{
    public AudioClip[] audioClips;
    public AudioSource myAudioSource;
    string btnName;

    private void Start()
    {
        myAudioSource = GetComponent<AudioSource>();
    }

    private void Update()
    {
        if (Input.touchCount > 0 && Input.touches[0].phase == TouchPhase.Began)
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                btnName = hit.transform.name;
                switch (btnName)
                {
                    case "buttonPOI":
                        myAudioSource.clip = audioClips[0];
                        myAudioSource.Play();
                        break;
                }
            }
            else
            {
                Debug.Log("NOTHING HAPPENED");
            }
        }
    }
}

I would appreciate it so much if anyone has a clue or hint.

Regards
Stephan

EDIT: OK, I got things working in a completely new project with nothing in it but the elements and scripts he used in his video. But it's not working in the project I've set up for my BA, although the only difference is that I use 9 image targets instead of 1, with several 3D model children in each of them instead of only the button element.

(see attached img) For now I'm trying to make it work for ImageTarget 1_1 only. I've put the exact script on the exact same element as I did in the new project, and everything else should be exactly the same as well.
https://developer.vuforia.com/forum/model-targets/no-raycasthit-cube-target-detected
instance variables and test functionals

I have the following in my functional test file.

class UserControllerTest < ActionController::TestCase
  fixtures :users

  def setup
    @controller = UserController.new
    @request = ActionController::TestRequest.new
    @response = ActionController::TestResponse.new
    @invalid_user = users(:invalid_user)
    @valid_user = users(:valid_user)
  end

  def test_login_success
    @valid_user.screen_name
  end
end

When I run a test that tries to use the @valid_user variable, I get the following error.

NoMethodError: You have a nil object when you didn't expect it!
The error occurred while evaluating nil.screen_name

It seems that this class isn't storing the instance variables in memory. Any ideas?

Thanks in advance
Mitch

---

I just found the answer. Cryptic little bugger. It's worth noting that Rails 2.0.2 subclasses ActionController::TestCase for functional tests (i.e. UserControllerTest), not ActiveSupport::TestCase. You need to explicitly change the skeleton test files for the (user|spec|etc)_controller_test.rb files to subclass ActiveSupport::TestCase.

---

There they have some updates etc. and a listing for the Google group for RailsSpace. On that site I found someone else who had the same problem and posted the solution. I hope this helps.
http://www.sitepoint.com/forums/showthread.php?609505-instance-variables-and-test-functionals
Kubernetes Multi TLS certificate termination with Nginx Ingress

What is Multiple TLS certificate termination?

Let's say we want to serve multiple domains using individual TLS/SSL certificates. For example, you have certificate A for *.amralkar.pvt and certificate B for *.abhishekamralkar.pvt. The load balancer uses Server Name Indication (SNI) to return the right certificate for the client request, based on the DNS name. If the DNS name doesn't match, it falls back to the default Kubernetes SSL certificate.

What is Kubernetes ingress?

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Some of the Ingress available are

Lets Begin

Prerequisites:

- SSL certificates
- Certificate private keys
- We will assume you are running the Nginx Ingress in a Kubernetes cluster to route traffic to pods.

Creating Kubernetes TLS Secrets:

With Kubernetes Secrets we can store and manage sensitive information, such as passwords, TLS certificates, and SSH keys. It's always better to use Kubernetes Secrets than to put such values into pod definition files. Kubernetes Secrets are, by default, stored as unencrypted base64-encoded strings. By default they can be retrieved, as plain text, by anyone with API access, or anyone with access to Kubernetes' underlying data store, etcd. Kubernetes Secrets aren't the only (or the most secure) way to store secrets in Kubernetes.

base64 Conversion:

- Convert the SSL certificate file to base64:

cat yourSSL.crt | base64

- Convert the SSL private key file to base64:

cat yourSSLKey.crt | base64

The above commands will generate base64-formatted output. Copy the output and update the Kubernetes TLS secrets file. Below is how your Kubernetes TLS secrets file should look; the output of the above commands should be pasted into the file respectively. Depending upon the number of SSL certs, you can create the Kubernetes secrets.
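The base64 step can also be scripted. Here is a small Python sketch (the file path and the sample PEM content are placeholders, purely for illustration) that produces the kind of single-line base64 string a Secret's tls.crt / tls.key fields expect:

```python
import base64

def b64_file(path):
    """Read a PEM file and return its contents as a single-line
    base64 string, ready to paste into a Secret manifest."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Demo with in-memory data instead of a real certificate file:
pem = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
encoded = base64.b64encode(pem).decode("ascii")
print(encoded)

# Round-trip check: decoding must give back the original bytes,
# which is exactly what the kubelet does when mounting the Secret.
assert base64.b64decode(encoded) == pem
```

Note that base64 is an encoding, not encryption, which is why the article stresses that Secrets are retrievable as plain text by anyone with API or etcd access.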
Note: Make sure your private key is not encrypted.

Once you are ready, create the secrets:

kubectl apply -f yourSSLSecret.yaml -n namespace

Update Ingress:

Below is the example Ingress file. Update the tls section and add the host and respective secretName. This secret must exist beforehand. The cert must also contain the subject names api.abhishekamralkar.pvt and api.amralkar.pvt. Add the host and related information in the rules section. Once done, create the ingress:

kubectl apply -f yourIngress.yaml -n myapp

And you should be able to reach your nginx service (or your own service) using a hostname switch. To verify, you can use the below command:

curl -k
https://abhishekamralkar.medium.com/kubernetes-multi-tls-certificate-termination-with-nginx-ingress-4504e9c74f5f?source=post_internal_links---------0-------------------------------
2005/11/21, g. <the_ether at lycos.co.uk>:
> I am getting a segmentation fault in ByteSwap32 of bswap.h
>
> All I did was exchange the diff_pixels_mmx routine with a new one and now I
> get the segmentation fault in a totally unrelated area. On my test sequence it
> occurs in the 7th frame, so all is okay for over 8,000 blocks.
>
> It occurs from put_bits being called from mpeg1_encode_block and occurs on
> block number 5 of the macroblock.
>
> It is either a subtle bug with ByteSwap32 or a compiler / linker problem. I
> changed the optimisation level down to 1 and deleted the frame pointer removal
> option but that didn't fix it. The new pixel_diff routine I am using is being
> linked in from a Windows library, but if there was some weird linker error I
> doubt I would have managed to get through 8,000+ blocks before encountering a
> problem that resulted from bad linkage.
>
> The src of the ffmpeg code I crash in is:
>
> static inline uint32_t ByteSwap32(uint32_t x)
> {
> #if __CPU__ > 386
>     __asm("bswap %0":
>           "=r" (x) :
> #else
>     __asm("xchgb %b0,%h0\n"
>           " rorl $16,%0\n"
>           " xchgb %b0,%h0":
>           LEGACY_REGS (x) :
> #endif
>           "0" (x));
>     return x;
> }
>
> I am using a P4 HT and gcc version 3.4.2 under MinGW.

If I understand you right you haven't changed the ByteSwap32 function, so it is pointless to paste it...

Ivan
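As an aside, the expected result of a correct 32-bit byte swap is easy to verify outside of C. A small Python sketch (illustration only) of what ByteSwap32 / the x86 bswap instruction must produce:

```python
import struct

def byteswap32(x):
    """Reverse the byte order of a 32-bit unsigned integer,
    i.e. what the x86 `bswap` instruction does to a register."""
    return int.from_bytes(x.to_bytes(4, "little"), "big")

def byteswap32_struct(x):
    # Equivalent formulation: pack little-endian, unpack big-endian.
    return struct.unpack(">I", struct.pack("<I", x))[0]

value = 0x12345678
print(hex(byteswap32(value)))                   # 0x78563412
assert byteswap32(value) == byteswap32_struct(value) == 0x78563412
assert byteswap32(byteswap32(value)) == value   # swapping twice is a no-op
```

If the C function's output ever disagrees with this reference on the same input, the inline asm (or its register constraints) is the bug; otherwise, as Ivan suggests, the fault lies elsewhere.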
http://ffmpeg.org/pipermail/ffmpeg-devel/2005-November/004547.html
Mapping Specification

This chapter describes the mapping between .NET objects and the Caché proxy classes that represent the .NET objects. Only classes, methods, and fields marked as public are imported. This chapter describes mappings of the following types:

Assembly and Class Names

Assembly and class names are preserved when imported, except that each underscore (_) in an original .NET class name is replaced with the character u and each dollar sign ($) is replaced with the character d in the Caché proxy class name. Both the u and the d are case-sensitive (lowercase).

Primitives

Primitive types and primitive wrappers map from .NET to Caché as shown in the following table.

Properties

The result of importing a .NET class is an ObjectScript abstract class. For each .NET property that does not already have corresponding getter and setter methods (imported as is), the .NET Gateway engine generates corresponding ObjectScript getter and setter methods. It generates setters as setXXX, and getters as getXXX, where XXX is the property name. For example, importing a .NET String property called Name results in a getter method getName() and a setter method setName(%Library.String). The Gateway also generates set and get class methods for all static members.

Methods

After you perform the .NET Gateway import operation, all methods in the resulting Caché proxy class have the same name as their .NET counterparts, subject to the limitations described in the Method Names section. They also have the same number of arguments. The type for all the Caché proxy methods is %Library.ObjectHandle. The .NET Gateway engine resolves types at runtime.
For example, the .NET method:

public boolean checkAddress(Person person, Address address)

is imported as:

Method checkAddress(p0 As %Library.ObjectHandle, p1 As %Library.ObjectHandle) As %Library.ObjectHandle

Overloaded Methods

While Caché Basic and ObjectScript do not support overloading, you can still map overloaded .NET methods to Caché proxy classes. This is supported through a combination of largest method cardinality and default arguments. For example, if you are importing an overloaded .NET method whose different versions take two, four, and five arguments, there is only one corresponding method on the Caché side; that method takes five arguments, all of %ObjectHandle type. You can then invoke the method on the Caché side with two, four, or five arguments. The .NET Gateway engine then tries to dispatch to the right version of the corresponding .NET method. While this scheme works reasonably well, avoid using overloaded methods with the same number of arguments of similar types. For example, the .NET Gateway has no problems resolving the following methods:

test(int i, string s, float f)
test(Person p)
test(Person p, string s, float f)
test(int i)

However, avoid the following:

test(int i)
test(float f)
test(boolean b)
test(object o)

For better results using the .NET Gateway, use overloaded .NET methods only when absolutely necessary.

Method Names

Caché has a limit of 31 characters for method names. Ensure your .NET method names are not longer than 31 characters. If the name length is over the limit, the corresponding Caché proxy method name contains only the first 31 characters of your .NET method name. For example, if you have the following methods in .NET:

thisDotNetMethodHasAVeryLongName(int i)        // 32 characters long
thisDotNetMethodHasAVeryLongNameLength(int i)  // 38 characters long

Caché imports only one method with the following name:

thisDotNetMethodHasAVeryLongNam  // 31 characters long

The .NET reflection engine imports the first one it encounters.
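The truncation collision just described can be illustrated with a short Python sketch (for illustration only; Caché itself performs the truncation at import time):

```python
LIMIT = 31  # Caché's method-name limit described above

def proxy_method_name(name):
    """Names longer than the limit keep only their first 31 characters."""
    return name[:LIMIT]

a = "thisDotNetMethodHasAVeryLongName"        # 32 chars
b = "thisDotNetMethodHasAVeryLongNameLength"  # 38 chars

# Both .NET names collapse to the same proxy name, so only one of the
# two methods can be imported on the Caché side.
assert proxy_method_name(a) == proxy_method_name(b)
print(proxy_method_name(a))  # thisDotNetMethodHasAVeryLongNam
```

Running a check like this over an assembly's public method names is a quick way to spot collisions before importing.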
To find out which method is imported, you can check the Caché proxy class code. Better yet, ensure that logging is turned on before the import operation; when a method cannot be represented on the Caché side, the method is not imported. Finally, Caché class code is not case-sensitive. So, if two .NET method names differ only in case, Caché only imports one of the methods and writes the appropriate warnings in the log file.

Static Methods

Caché projects .NET static methods as class methods in the Caché proxy classes. To invoke them from ObjectScript, use the following syntax:

// calls static .NET method staticMethodName(par1,par2,...)
Do ##class(className).staticMethodName(gateway,par1,par2,...)

Constructors

You invoke .NET constructors by calling %New(). The signature of %New() is exactly the same as the signature of the corresponding .NET constructor, with the addition of one argument in position one: an instance of the .NET Gateway. The first thing %New() does is to associate the proxy instance with the provided Gateway instance. It then calls the corresponding .NET constructor. For example:

// calls Student(int id, String name) .NET constructor
Set Student=##class(gateway.Student).%New(Gateway,29,"John Doe")

Constants

The .NET Gateway projects and imports .NET static final variables (constants) as Final Parameters. It preserves the names when imported, except that it replaces each underscore (_) with the character u and each dollar sign ($) with the character d. Both the u and the d are case-sensitive (lowercase). For example, the following static final variable:

public const int DOTNET_CONSTANT = 1;

is mapped in ObjectScript as:

Parameter DOTNETuCONSTANT As INTEGER = 1;

From ObjectScript, access the parameter as:

##class(MyDotNetClass).%GetParameter("DOTNETuCONSTANT")

OUT and REF Parameters

The .NET Gateway supports passing parameters by reference, by supporting the .NET OUT and REF parameters. Only objects may be used as OUT and REF parameters; scalar values are not supported.
For this convention to work, you must preallocate a temporary object of the corresponding type. Then call the method and pass that object by reference. The following are some examples:

public void getAddressAsReference(out Address address)

To call this method from ObjectScript, create a temporary object; there is no need to set its value. Then call the method and pass the OUT parameter by reference, as follows:

Set tempAddress=##class(remote.test.Address).%New(gateway)
Do student.getAddressAsReference(.tempAddress)

The following example returns an array of Address objects as an OUT parameter:

void getOldAddresses(out Address[] address)

To call the previous method from ObjectScript, use the following code:

Set oldAddresses=##class(%ListOfObjects).%New(gateway)
Do person.getOldAddresses(.oldAddresses)

.NET Arrays

Arrays of primitive types and wrappers are mapped as %Library.ListOfDataTypes. Arrays of object types are mapped as %Library.ListOfObjects. Only one level of subscripts is supported. The Gateway projects .NET byte arrays (byte[]) as %Library.GlobalBinaryStream. Similarly, it projects .NET char arrays (char[]) as %Library.GlobalCharacterStream. This allows for a more efficient handling of byte and character arrays. You can pass byte and stream arrays either by value or by reference. Passing by reference allows changes to the byte or character stream on the .NET side to be visible on the Caché side as well.
For example, using the following:

System.Net.Sockets.Stream.Read(byte[] buffer, int offset, int size)

in .NET:

byte[] buffer = new byte[maxLen];
int bytesRead = inputStream.Read(buffer, offset, maxLen);

The equivalent code in ObjectScript:

Set readStream=##class(%GlobalBinaryStream).%New()
// we need to 'reserve' a number of bytes since we are passing the stream
// by reference (DotNet's equivalent is byte[] ba = new byte[max];)
For i=1:1:50 Do readStream.Write("0")
Set bytesRead=test.read(.readStream,50)
Write readStream.Read(bytesRead)

The following example passes a character stream by value, meaning that any changes to the corresponding .NET char[] are not reflected on the Caché side:

Set charStream=##class(%GlobalCharacterStream).%New()
Do charStream.Write("Global character stream")
Do test.setCharArray(charStream)

Recasting

ObjectScript has limited support for recasting; namely, you can recast only at a point of a method invocation. However, since all Caché proxies are abstract classes, this should be sufficient.

.NET Standard Output Redirection

The .NET Gateway automatically redirects any standard .NET output in the corresponding .NET code to the calling Caché session. It collects any calls to System.out in your .NET method calls and sends them to Caché to display in the same format as you would expect to see if you ran your code from .NET. To disable this behavior and direct your output to the standard output device as designated by your .NET code (in most cases that would be the console), set the following global reference in the namespace where the session is running:

Set ^%SYS("Gateway","Remote","DisableOutputRedirect") = 1

Restrictions

Rather than aborting import, the .NET Gateway engine silently skips over all the members it is unable to generate. If you repeat the import step with logging turned on, Caché records all skipped members (along with the reason why they were skipped) in the WARNING section of the log file.
The .NET Gateway engine always makes an attempt to preserve assembly and method names, parameter types, etc. That way, calling a Caché proxy method is almost identical to calling the corresponding method in .NET. It is therefore important to keep in mind Caché Basic and ObjectScript restrictions and limits while writing your .NET code. In the vast majority of cases, there should be no issues at all. You might run into some Caché Basic or ObjectScript limits. For example:

- .NET method names should not be longer than 30 characters.
- You should not have 100 or more arguments.
- You should not try to pass String objects longer than 32K.
- Do not rely on the fact that .NET is case-sensitive when you choose your method names.
- Do not try to import a static method that overrides an instance method.
- The .NET Gateway cannot generate proxy classes for .NET generic classes. It similarly cannot import .NET classes with generic subclasses or subinterfaces.
- .NET Events are not supported: Caché code cannot be called from delegate notifications.

For details on Caché Basic and ObjectScript naming conventions, see Variables in Using Caché ObjectScript, Naming Conventions in Using Caché Objects, Identifiers and Variables in Using Caché Basic, and Rules and Guidelines for Identifiers in the Caché Programming Orientation Guide.
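As an aside, the underscore/dollar renaming rule described in the Assembly and Class Names and Constants sections is mechanical enough to sketch in a few lines of Python (for illustration only; Caché performs this substitution itself during import):

```python
def cache_proxy_name(dotnet_name):
    """Apply the documented renaming: '_' -> 'u' and '$' -> 'd'
    (both replacements lowercase); all other characters are preserved."""
    return dotnet_name.replace("_", "u").replace("$", "d")

# The DOTNET_CONSTANT example from the Constants section:
print(cache_proxy_name("DOTNET_CONSTANT"))   # DOTNETuCONSTANT
assert cache_proxy_name("DOTNET_CONSTANT") == "DOTNETuCONSTANT"

# Nested/compiler-generated names often contain '$':
assert cache_proxy_name("My$Inner_Class") == "MydInneruClass"
```

A helper like this is handy when you need to predict which proxy class or parameter name to reference from ObjectScript for a given .NET identifier.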
https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=BGNT_mapping
javascript - Cast multiple objects to an array

I'm looping through some collections (called categories in Shopify Liquid), and I need to cast all those collections into an array, so I can access their indexes. What I'm doing now is this:

{% for link in linklists[page.handle].links limit:1 %}
{% assign collection = link.object %}
// I'm doing the above code somewhere above in the liquid file, that's where I get the link.object

<script>
var collections = {{ link.object | json }};
console.log(collections);
</script>

And this is the result I get:

I need the result to be like this, in an array:

How can I cast those sets of objects to an array like I have shown in the images below?

/********** EDIT *******/

When I use Array.of(), like this:

console.log(Array.of(collections));

I get this:

But all those objects are still not in an array. Maybe push it up one level?

Answer Solution: Why are you initiating the collections variable inside the for loop? Try this:

<script>var collections = new Array</script>
{% for link in linklists[page.handle].links limit:1 %}
{% assign collection = link.object %}
// I'm doing the above code somewhere above in the liquid file, that's where I get the link.object
<script>
collections.push({{ link.object | json }});
console.log(collections);
</script>
{% endfor %}

Answer Solution: Not sure what you're trying to achieve, but have a look at the Array.of method:

Array.of({obj1Prop1: 'value1'}, {obj2Prop1: 'value2'});

Nevertheless, it looks like your collections are actually a collection, and you may therefore be looking for an array defined in a higher scope; just push / concat them together once you reach your code with your collection.
Answer Solution: The object is probably more useful in most cases, but you can do something like this:

<script>
var collections = {
  "collections": [{% for collection in collections %}
    {
      {% if collection.image %}"image": "{{ collection.image }}",{% endif %}
      "body_html": "{{ collection.description | url_escape }}",
      "handle": "{{ collection.handle }}",
      "id": {{ collection.id }},
      "sort_order": "{{ collection.default_sort_by }}",
      "template_suffix": "{{ collection.template_suffix }}",
      "title": "{{ collection.title }}",
      "products_count": {{ collection.products_count }},
      "url": "{{ collection.url }}",
      "products": []
    }{% unless forloop.last %},{% endunless %}{% endfor %}
  ]
}
</script>
https://e1commerce.com/items/cast-multiple-objects-to-an-array
Allow me to preface my review by saying that I am not a disciple of Miyazaki. I have seen part of Spirited Away and have a definite urge to see more, but I have seen none of his classics in their entirety. In fact, I am an anime neophyte. My experience with the medium pretty much begins and ends with the fact that I've seen Akira. However, I can also say that, even to someone unfamiliar with Japanese animation, Ponyo is a visual treat that likely defies and exceeds the expectations you may have for a Disney-sponsored import about a magical transforming fish princess.

Ponyo was written and directed by Hayao Miyazaki, a legend in the animation world who was also behind the 1997 import Princess Mononoke and the 2002 Academy Award winner, Spirited Away. The story of Ponyo is based in part on the Hans Christian Andersen fable The Little Mermaid. (Yes, the same one that Disney used to help re-launch the animated musical.) The titular Ponyo (Noah Cyrus) is a magical fish-girl who sneaks out of the care of her father, an underwater sorcerer voiced by Liam Neeson, who was once human but now tends the sea from his magical submarine. She finds her way toward land, where she ends up stuck in a bottle after a close call with a dredger, and is rescued by 5-year-old Sosuke (Frankie "Bonus" Jonas), with whom she instantly falls in love. She ends up being taken back to the sea by her father, but not before she gains the ability to transform completely into a human.

I could say more, but the structure of Ponyo lends itself more to the mood of discovery and wonder as Ponyo learns about humans and their world, and Sosuke and his mom, Lisa (Tina Fey), learn more about Ponyo. The charm of Ponyo is that it is drenched in a spirit of wonder, discovery, and general cuteness, not to mention the high-quality animation and beautifully imaginative imagery. Indeed, the art and imagination of Ponyo are its highlights.
Miyazaki creates a world where the fantastic and the realistic creatively, beautifully, and believably occupy the same space. For example, early in the movie, after Sosuke evades a series of waves (sent by Ponyo's father) that seemed to be coming right for him after finding Ponyo, he shrugs it off with, "That was weird." This is the first clue that the fantastic and strange are to be accepted in this world just as much as the mundane. From the fish-shaped water spirits Ponyo rides to land on, to old ladies who occupy the local senior center, everything is given equal opportunity to be a source of wonderment and interest.

The equality in treatment between fantastic events and ordinary events ends up being a focus of the movie, which is interesting to look back on once you realize that you haven't been so engrossed watching the preparation of a meal since you were a kid. The wonder-at-the-world focus can catch you off guard, as the pace of the film is very relaxed. There is an issue that the main characters are working to resolve, but there's never a feeling of being in a huge rush to get to a solution. Sure, the moon is moving closer to the earth and drawing the water level so high it's right outside the door of our cliff-side home, but we've also got ham and instant noodles, and that's pretty important, too.

The sole complaint about the movie would be that the ending just fizzles out. There's no big conflict, event, or test like what you might usually expect from an animated movie. Instead, if you're not paying attention, you may miss exactly how they put the order of the natural world back into shape. I suppose this should be expected in a movie where everything from making tea to transformative magic is treated with the same amount of gravitas. Despite its few flaws, Ponyo is an engaging, entertaining movie that doesn't need to get you worked up to get you engrossed.
Its slow pace, centered on discovery, wonder, and play, is as engaging and calming as a good aquarium. It's heavy on imagination and creativity and is a beautifully rendered piece of animation. As an adult, I believe that there is a different standard that applies to movies for children. Here the question should be: Would I be upset if my son wanted to watch this movie every single day? I can confidently say that I would not mind watching part of this movie every day. This is a quality piece of art that deserves to beat the crap out of G-Force, so grab your kids and take them to Ponyo, something that will have them entertained and giggling without the use of the word "hizzok".

Matt Sameck
http://legacy.fanboyplanet.com/movies/matt-ponyo.php
Hi,

As per my knowledge (please correct me if I am wrong), the DataNodes send the block report to both the Active and Standby NameNodes. The job of the Active NN is to write to the Journal Nodes and the job of the Standby NameNode is to read from the Journal Nodes. Now why does the Standby NameNode need to read from the Journal Nodes when the DataNodes (slaves) are already sending the block reports to it?

Created 05-22-2019 03:23 AM

The above was originally posted in the Community Help Track. On Wed May 22 03:20 UTC 2019, a member of the HCC moderation staff moved it to the Hadoop Core track. The Community Help Track is intended for questions about using the HCC site itself.

Read carefully the bold text of the Quorum Journal Nodes section; it states why the Standby NameNode reads the edits from the Journal Nodes. The Journal Nodes have nothing to do with block reports; they carry the FSImage edits. Block reports and heartbeats come directly from the DataNodes: the NameNode uses block reports to schedule the creation of new replicas of under-replicated blocks on other DataNodes, and heartbeats from a DataNode also carry information such as its total storage capacity and the fraction of storage in use.

High Availability

Standby NameNode

It does three things: it tails the edit log that the Active NameNode writes to the Journal Nodes and applies those transactions to its own namespace; it receives block reports and heartbeats directly from the DataNodes so it has up-to-date block locations; and it periodically checkpoints the namespace. Thus at any time, the Standby NameNode contains an up-to-date image of the namespace both in memory and on local disk(s). The cluster will switch over to this standby node if the Active NameNode dies.

Quorum Journal Nodes

QJN is the HDFS implementation that provides shared edit logs. It permits sharing these edit logs between the Active and Standby NameNodes. The Standby NameNode communicates and synchronizes with the Active NameNode for high availability. This happens through a group of daemons called "Journal Nodes". The Quorum Journal Nodes run as a group of journal nodes. At least three journal nodes should be there. For N journal nodes, the system can tolerate at most (N-1)/2 failures; the system thus continues to work.
So, for three journal nodes, the system can tolerate the failure of one {(3-1)/2} of them. The DataNodes send block location information and heartbeats to both NameNodes.

HTH

I am happy this compilation has helped.
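The quorum arithmetic quoted above is easy to sanity-check; a quick Python sketch (illustration only):

```python
def tolerable_failures(journal_nodes):
    """A write needs a majority (floor(N/2) + 1) of Journal Nodes,
    so at most (N - 1) // 2 of them may fail while edits keep flowing."""
    return (journal_nodes - 1) // 2

# Journal Node counts are kept odd, since an even extra node adds no tolerance.
for n in (3, 5, 7):
    print(n, "journal nodes ->", tolerable_failures(n), "tolerable failures")
```

This is why the minimum recommended deployment is three Journal Nodes: with fewer, losing a single daemon would stall the edit log.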
https://community.cloudera.com/t5/Support-Questions/Why-does-Standby-Namenode-read-from-Journals-Nodes-when/m-p/240764/highlight/true
Ticket #16298 (closed defect: fixed)

guest additions do not build with 4.10 kernel -> fixed in 5.1.x and later releases after 19 Dec 2016

Description

Got an error message while attempting to install guest additions in a Fedora Rawhide guest for the 4.10.0-0.rc0.git2.1.fc26.x86_64 kernel.

vboxadd.sh: failed: Look at /var/log/vboxadd-install.log to find out what went wrong.

Attachments

Change History

Changed 2 years ago by robatino
- attachment vboxadd-install.log added

comment:1 Changed 2 years ago by michael

This should fix it. Will commit later. See this kernel commit:

Index: src/VBox/Additions/linux/drm/vbox_fb.c
===================================================================
--- src/VBox/Additions/linux/drm/vbox_fb.c  (revision 112216)
+++ src/VBox/Additions/linux/drm/vbox_fb.c  (working copy)
@@ -216,7 +216,6 @@
                struct drm_gem_object **gobj_p)
 {
     struct drm_device *dev = fbdev->helper.dev;
-    u32 bpp, depth;
     u32 size;
     struct drm_gem_object *gobj;
 #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 3, 0)
@@ -226,7 +225,6 @@
 #endif
     int ret = 0;

-    drm_fb_get_bpp_depth(mode_cmd->pixel_format, &depth, &bpp);
     size = pitch * mode_cmd->height;
     ret = vbox_gem_create(dev, size, true, &gobj);

comment:2 Changed 2 years ago by michael
- Summary changed from guest additions do not build with 4.10 kernel to guest additions do not build with 4.10 kernel -> fixed in 5.1.x and later releases after 19 Dec 2016

The fix should be available in 5.1 and higher Guest Additions builds as of revision 112422.

comment:3 Changed 2 years ago by robatino

Thanks. I want to test the 112428 GA, but unfortunately neither of my installed 4.10 kernels is bootable. I can boot into a 4.9 kernel. Is it possible to boot into one kernel and install the Guest Additions for another? (This used to be possible via a "dkms install" command, when the GA used DKMS.)

comment:4 Changed 2 years ago by robatino

Installed the 112428 Guest Additions, then installed the newest 4.10 kernel. The GA build still fails.
Attaching vboxadd-install.log. Changed 2 years ago by robatino - attachment vboxadd-install.2.log added /var/log/vboxadd-install.log comment:5 Changed 2 years ago by frank - Status changed from new to closed - Resolution set to fixed Fix is part of VBox 5.1.12. comment:6 Changed 2 years ago by robatino Still broken with 5.1.12, after installing the new GA, restarting the guest, and installing a new 4.10 kernel. Attaching vboxadd-install.log. comment:7 Changed 2 years ago by robatino - Status changed from closed to reopened - Resolution fixed deleted Changed 2 years ago by robatino - attachment vboxadd-install.3.log added /var/log/vboxadd-install.log comment:8 Changed 2 years ago by robatino Also please update the Version to "VirtualBox 5.1.12", since I don't have permission. Thanks. comment:9 follow-up: ↓ 10 Changed 2 years ago by Lieven Tytgat Dear The compile error refers to kernel.h from the linux kernel. The following changes to kernel.h introduced the compile error. It seems the vars 'true' and 'false' in the taint_flag struct are overwritten by the macro defined in vboxdrv/include/iprt/types.h lines 264 - 270 Not so well picked var names in kernel.h I would assume... comment:10 in reply to: ↑ 9 Changed 2 years ago by frank Replying to Lieven Tytgat: Not so well picked var names in kernel.h I would assume... Exactly. I hope that this patch gets merged into the vanilla kernel before 4.10 is released. comment:11 Changed 2 years ago by Larry998 That patch changing those badly chosen names has been accepted, but it is not yet merged into mainline. Be aware that that change is not sufficient for VB to build under kernel 4.10-rc2 and later as register_cpu_notifier() and unregister_cpu_notifier() are removed. Code must now use the hotplug state machine to register cpu on- and off-line events. This change is not trivial, and I have not yet come up with a fix. 
Larry Changed 2 years ago by Larry998 - attachment 4.9_patches added Crude hack to allow VB to build on kernel 4.10-rc2 and later comment:12 Changed 2 years ago by Larry998 The attachment posted above will allow a successful build once my patch to fix the "true" and "false" redefinition problem has been applied. comment:13 Changed 2 years ago by Lieven Tytgat Linux 4.10-rc5 will rename the true and false chars in c_true and c_false, fixing this issue. Commit: comment:14 Changed 2 years ago by sergiomb I think we have new compile error with Linux 4.10-rc5.git1.1.fc26.x86_64 I made this fix: seems to me trivial when we read comment:15 follow-up: ↓ 16 Changed 2 years ago by frank comment:16 in reply to: ↑ 15 Changed 2 years ago by sergiomb comment:17 Changed 2 years ago by robatino Confirming that guest additions build for kernel-4.10.0-0.rc7.git0.1.fc26.x86_64 when using VB 5.1.14 and VBoxGuestAdditions_5.1.15-113104.iso. comment:18 Changed 2 years ago by robatino Filed for guest additions not building with the 4.11 kernel (while using the latest VBoxGuestAdditions_5.1.15-113518.iso guest additions). comment:19 Changed 2 years ago by frank - Status changed from reopened to closed - Resolution set to fixed Fixed in VBox 5.1.16. /var/log/vboxadd-install.log
https://www.virtualbox.org/ticket/16298
res_init()

Initialize the Internet domain name resolver routines

Synopsis:

    #include <sys/types.h>
    #include <netinet/in.h>
    #include <arpa/nameser.h>
    #include <resolv.h>

    int res_init( void );

Description:

The resolver routines are used for making, sending, and interpreting query and reply messages with Internet domain name servers. The res_init() routine reads the resolver configuration file to get the default domain name, search list, and Internet address of the local name servers. If no server is configured, the host running the resolver is tried. If not specified in the configuration file, the current domain name is defined by the hostname; the domain name can be overridden by the environment variable LOCALDOMAIN. Initialization normally occurs on the first call to one of the resolver routines.

Resolver configuration

Global configuration and state information used by these routines is kept in the __res_state structure _res, which is defined in <resolv.h>. Since most of the values have reasonable defaults, you can generally ignore them. The _res.options member is a simple bit mask that contains the bitwise OR of the enabled options. The following options are defined in <resolv.h>:

- RES_DEBUG - Print debugging messages.
- RES_DEFNAMES - If this option is set, res_search() appends the default domain name to single-component names (those that don't contain a dot). This option is enabled by default.
- RES_DNSRCH - If this option is set, res_search() searches for hostnames in the current domain and in parent domains. This is used by the standard host lookup routine, gethostbyname(). This option is enabled by default.
- RES_INIT - True if the initial name server address and default domain name are initialized (i.e. res_init() has been called).
- RES_RECURSE - Set the recursion-desired bit in queries. This is the default. Note that res_send() doesn't do iterative queries — it expects the name server to handle recursion.
- RES_STAYOPEN - Used with RES_USEVC to keep the TCP connection open between queries. This is useful only in programs that regularly do many queries. UDP should be the mode you normally use. - RES_USEVC - Instead of UDP datagrams, use TCP connections for queries. Based on: RFC 974, RFC 1032, RFC 1033, RFC 1034, RFC 1035 Returns: - 0 - Success. - Nonzero - An error occurred. Environment variables: - LOCALDOMAIN - When set, LOCALDOMAIN contains a domain name that overrides the current domain name. Last modified: 2013-12-23
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/r/res_init.html
Dear All, I am facing a problem in passing a 2D array of pointers to a function. Can anyone give some suggestions on this? How do I access the array elements? The array declaration is

    char *ptr[2][3] = {
        {"AA","XX","FF"},
        {"DD","QQ","EE"},
    };

It would be a great help to me, since I am struggling with this one.

Edited 3 Years Ago by happygeek: fixed formatting.

Hi, Thanks for your reply. But I have different types of 2D arrays of pointers which vary in row and column size, so in the function declaration/definition I cannot specify the row and column sizes. So I am trying for an alternative.

>So in the function declaration/definition I can not specify the rows and columns size.

Then you can't use an array. The size of the first dimension may be omitted, but the others are required, and the size of an array dimension must be a compile-time constant. If you want to pass a two dimensional array of any size to a function, you use dynamic memory:

    #include <stdlib.h>

    void foo ( int ***arg );

    int main ( void )
    {
        int ***ptr = malloc ( 2 * sizeof *ptr );
        int i;

        for ( i = 0; i < 2; i++ )
            ptr[i] = malloc ( 3 * sizeof *ptr[i] );

        foo ( ptr );

        for ( i = 0; i < 2; i++ )
            free ( ptr[i] );
        free ( ptr );

        return 0;
    }

You have more options in C++, but you neglected to mention what language you're using.
here is how you initialize the function and pass the values

    char *function(char **array)
    {
        //your code here
        return 0;
    }

    int main()
    {
        //put values in array
        function(p); //passing to function
    }

Maybe you want to take a look at my code:

    #include <iostream>
    using namespace std;

    void func(char **& ref_to_ptr); /* Function declaration */

    int main(void)
    {
        /* Declare the '2D Array' */
        char ** ptr = new char * [5];
        ptr[3] = new char[20];

        /* Put some data in the array */
        ptr[3] = "k";

        /* Print the first value on the screen */
        cout << "First value: " << ptr[3] << endl;

        /* Pass the array by reference to the function 'func()' */
        func(ptr);

        /* Again we print on the screen what's in the '2D Array' */
        cout << "Second value: " << ptr[3] << endl;

        /* Wait for the user to press ENTER */
        cin.get();

        /* Cleanup */
        delete[] ptr;

        /* Tell the Operating System that everything went well */
        return 0;
    }

    void func(char **& ref_to_ptr)
    {
        /* This function demonstrates how to change the value of it's argument(s) */
        ref_to_ptr[3] = "tux4
https://www.daniweb.com/programming/software-development/threads/40016/passing-2d-array-of-pointers-into-a-function
This code connects to a HTTPS site and I am assuming I am not verifying the certificate. But why don't I have to install a certificate locally for the site? Shouldn't I have to install a certificate locally and load it for this program, or is it downloaded behind the covers? Is the traffic between the client and the remote site still encrypted in transmission?

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.net.URL;
    import java.net.URLConnection;

    public class TestSSL {
        public static void main(String[] args) throws Exception {
            final URLConnection con = new URL("https://...").openConnection();
            final Reader reader = new InputStreamReader(con.getInputStream());
            final BufferedReader br = new BufferedReader(reader);
            String line = "";
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
            br.close();
        } // End of main
    } // End of the class

The reason why you don't have to load a certificate locally is that you've explicitly chosen not to verify the certificate, with this trust manager that trusts all certificates. The traffic will still be encrypted, but you're opening the connection to Man-In-The-Middle attacks: you're communicating secretly with someone, you're just not sure whether it's the server you expect, or a possible attacker.

If your server certificate comes from a well-known CA, part of the default bundle of CA certificates bundled with the JRE (usually the cacerts file, see the JSSE Reference Guide), you can just use the default trust manager; you don't have to set anything here. If you have a specific certificate (self-signed or from your own CA), you can use the default trust manager or perhaps one initialised with a specific truststore, but you'll have to import the certificate explicitly in your trust store (after independent verification), as described in this answer. You may also be interested in this answer.
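The truststore approach mentioned in the answer can be sketched as follows. This is a generic JSSE sketch, not code from the question: the keystore here is created empty in memory so the snippet runs standalone, where real code would load a truststore file (for example one populated with keytool -importcert) so that the server certificate is verified rather than blindly trusted.

```java
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TrustStoreExample {
    public static void main(String[] args) throws Exception {
        // Empty in-memory keystore; real code would load a truststore file
        // containing the server's certificate instead.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);

        // Build trust managers backed by that truststore.
        TrustManagerFactory tmf = TrustManagerFactory
                .getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        // An SSLContext built this way verifies server certificates against
        // the truststore instead of trusting everything.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);

        System.out.println("trust managers: " + tmf.getTrustManagers().length);
    }
}
```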
https://codedump.io/share/mdcnrgkOaMSt/1/java-and-https-url-connection-without-downloading-certificate
The device specification proto defines the basic layout of a device as well as the gate set and serialized ids that can be used. This specification can be used to find out specific characteristics of the device. Though several standard Google devices are defined for your convenience, specific devices may have specialized layouts particular to that processor. For instance, there may be one or more qubit "drop-outs" that are non-functional for whatever reason. There could also be new or experimental features enabled on some devices but not on others. This specification is defined in the Device proto within cirq.google.api.v2.

Gate Set Specifications

Most devices can only accept a limited set of gates. This is known as the gate set of the device. Any circuits sent to this device must only use gates within this set. The gate set portion of the protocol buffer defines which gate set(s) are valid on the device, and which gates make up that set.

Gate Definitions

Each gate in the gate set will have a definition that defines the id that the gate is serialized as, the number of qubits for the gate, the arguments to the gate, the duration, and which qubits it can be applied to. This definition uses "target sets" to specify which qubits the operation can be applied to. See the section below for more information.

Gate Durations

The time it takes the device to perform each gate is stored within the device specification. This time is stored as an integer number of picoseconds. Example code to print out the gate durations for every gate supported by the device is shown below:

    import cirq

    # Create an Engine object to use.
    engine = cirq.google.Engine(project_id='your_project_id')

    # Replace the processor id to get the device specification with that id.
    spec = engine.get_processor('processor_id').get_device_specification()

    # Print each gate set, with each gate's serialized id and duration.
    for gate_set in spec.valid_gate_sets:
        print(gate_set.name)
        for gate in gate_set.valid_gates:
            print(f'{gate.id}: {gate.gate_duration_picos} ps')

Note that, by convention, measurement gate duration includes both the duration of "read-out" pulses to measure the qubit as well as the "ring-down" time that it takes the measurement resonator to reset to a ground state.
Target Sets

Generally, most gates apply to the same set of qubits. To avoid repeating these qubits (or pairs of qubits) for each gate, each gate instead uses a target set to define the set of qubits that are valid. Each target set contains a list of valid targets. A target is a list of qubits. For one-qubit gates, a target is simply a single qubit. For two-qubit gates, a target is a pair of qubits.

The type of a target set defines how the targets are interpreted. If the target set is set to SYMMETRIC, the order of each target does not matter (e.g. if gate.on(q1, q2) is valid, then so is gate.on(q2, q1)). If the target type is set to ASYMMETRIC, then the order of qubits does matter, and other orderings of the qubits that are not specified in the definition cannot be assumed to be valid.

The last type is PERMUTATION_SET. This type specifies that any permutation of the targets is valid. This is typically used for measurement gates. If q0, q1 and q2 are all specified as valid targets for a permutation set of the gate, then gate.on(q0), gate.on(q1), gate.on(q2), gate.on(q0, q1), gate.on(q0, q2), gate.on(q1, q2) and gate.on(q0, q1, q2) are all valid uses of the gate.

Developer Recommendations

This is a free form text field for additional recommendations and soft requirements that should be followed for proper operation of the device that are not captured by the hard requirements above. For instance, "Do not apply two CZ gates in a row."

Serializable Devices

The cirq.google.SerializableDevice class allows someone to take this device specification and turn it into a cirq.Device that can be used to verify a circuit. The cirq.google.SerializableDevice combines a DeviceSpecification protocol buffer (defining the device) with a SerializableGateSet (that defines the translation from serialized ids to cirq gates) to produce a cirq.Device that can be used to validate a circuit.
The following example illustrates retrieving the device specification live from the engine and then using it to validate a circuit.

    import cirq
    import cirq.google as cg

    # Create an Engine object to use.
    engine = cg.Engine(project_id='your_project_id',
                       proto_version=cirq.google.ProtoVersion.V2)

    # Replace the processor id to get the device with that id.
    device = engine.get_processor('processor_id').get_device(
        gate_sets=[cg.gate_sets.SQRT_ISWAP_GATESET])

    q0, q1 = cirq.LineQubit.range(2)

    # Raises a ValueError, since this is not a supported gate.
    cirq.Circuit(cirq.CZ(q0, q1), device=device)

Note that, if network traffic is undesired, the DeviceSpecification can easily be stored in either binary format or TextProto format for later usage.
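The PERMUTATION_SET expansion described under Target Sets can be sketched in plain Python, with no cirq dependency (the qubit names are illustrative, not tied to any real device):

```python
from itertools import combinations

# Targets declared in a hypothetical PERMUTATION_SET target set.
targets = ["q0", "q1", "q2"]

# Every non-empty subset of the targets is a valid application of the gate.
valid_applications = [
    combo
    for size in range(1, len(targets) + 1)
    for combo in combinations(targets, size)
]

for combo in valid_applications:
    print("gate.on(" + ", ".join(combo) + ")")
# Prints the 7 applications enumerated in the text, from gate.on(q0)
# through gate.on(q0, q1, q2).
```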
https://quantumai.google/cirq/google/specification
How To Implement a Modal Component in React

Introduction

Modals are separate windows within an application, most often used as a dialog box. They are a common user interface pattern for providing information or requiring confirmation. In this tutorial you will learn how to implement a modal component in your React project. You'll create a Dashboard component to manage state and a button to access the modal. You'll also develop a Modal component to build the modal itself and a button to close it. Your project will display and close a modal upon clicking a button.

Prerequisites

To complete this tutorial, you will need:

- A basic understanding of React before starting this tutorial. You can learn more about React by following the How to Code in React.js series.

Step 1 — Starting the Dashboard Component

The dashboard is where you will display your modal. To begin your dashboard, import an instance of React and the Component object into your Dashboard.js file. Declare a Dashboard component and set your state:

    import React, { Component } from "react";

    class Dashboard extends Component {
      constructor() {
        super();
        this.state = {
          show: false
        };
        this.showModal = this.showModal.bind(this);
        this.hideModal = this.hideModal.bind(this);
      }

      showModal = () => {
        this.setState({ show: true });
      };

      hideModal = () => {
        this.setState({ show: false });
      };
    }

    export default Dashboard

Your state includes the property show with the value of false. This allows you to hide the modal until a user prompts it to open. The function showModal() updates your state with the .setState() method to change the value of your show property to true when a user opens the modal. Likewise, the .setState() method in your hideModal() function will close the modal and change the value of your show property to false.

Note: Remember to bind your functions in the constructor() using the .bind() method.
Once you've applied your state and functions, your render() lifecycle method will handle displaying your modal within the return() statement:

    import React, { Component } from "react";

    class Dashboard extends Component {
      // ...

      render() {
        return (
          <main>
            <h1>React Modal</h1>
            <button type="button" onClick={this.showModal}>
              Open
            </button>
          </main>
        );
      }
    }

    export default Dashboard

The button accepts the React JSX attribute onClick to apply the .showModal() function and open your modal. You will export your Dashboard component to a higher-order App component connected to your root HTML file.

Step 2 — Building the Modal Component

Create a new file, Modal.js, and declare a stateless functional Modal component with three arguments: handleClose, show, and children. The argument show represents the show property on your state:

    import React from "react";
    import './modal.css';

    const Modal = ({ handleClose, show, children }) => {
      const showHideClassName = show ? "modal display-block" : "modal display-none";

      return (
        <div className={showHideClassName}>
          <section className="modal-main">
            {children}
            <button type="button" onClick={handleClose}>
              Close
            </button>
          </section>
        </div>
      );
    };

    export default Modal;

The argument children, represented as props.children, is a reference to the content nested between the Modal component's opening and closing tags. The modal also contains a button with a React JSX onClick attribute that accepts the hideModal() method, here represented as the argument handleClose, passed as props from your Dashboard component. The variable showHideClassName takes as its value a conditional check: if the value of the show property in your state is true, the modal will appear; otherwise, the modal will hide. The display-block and display-none properties used to show and hide the modal are handled through the modal.css file imported into your Modal component.
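The show/hide switch is just a boolean-to-class-name mapping, so it can be exercised outside React. A plain-JavaScript sketch of the same logic:

```javascript
// Same logic as the showHideClassName variable in the Modal component.
function modalClassName(show) {
  return show ? "modal display-block" : "modal display-none";
}

console.log(modalClassName(true));  // modal display-block
console.log(modalClassName(false)); // modal display-none
```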
Start a new file, modal.css, and set the rules to style the size, color, and shape of your Modal:

    .modal {
      position: fixed;
      top: 0;
      left: 0;
      width: 100%;
      height: 100%;
      background: rgba(0, 0, 0, 0.6);
    }

    .modal-main {
      position: fixed;
      background: white;
      width: 80%;
      height: auto;
      top: 50%;
      left: 50%;
      transform: translate(-50%, -50%);
    }

    .display-block {
      display: block;
    }

    .display-none {
      display: none;
    }

This will produce a centered modal with a white box outline and a shaded background. With your Modal component complete, let's integrate it into your Dashboard.

Step 3 — Incorporating the Modal Component

To combine your Modal into your Dashboard, navigate to your Dashboard.js file and import your Modal component below your React instantiation:

    import React, { Component } from "react";
    import Modal from './Modal.js';

    class Dashboard extends Component {
      // ...

      render() {
        return (
          <main>
            <h1>React Modal</h1>
            <Modal show={this.state.show} handleClose={this.hideModal}>
              <p>Modal</p>
            </Modal>
            <button type="button" onClick={this.showModal}>
              Open
            </button>
          </main>
        );
      }
    }

    export default Dashboard

In your return() statement, include your Modal component to display and hide the modal. The attributes show and handleClose are props passed to your Modal component to manage the logic of your state and the hideModal() function. Your Dashboard and Modal components will now render in your browser. Observe how your new Modal component now opens and closes.

Conclusion

In this tutorial, you learned how React can be used to implement modals by passing props and functionality from one component to another. To view this project live, here is a CodePen demo of this project.
https://www.digitalocean.com/community/tutorials/react-modal-component
Hey guys, I recently wrote this program and it compiles and works, but I have two issues that need addressing and I'm a bit confused on how to solve them.

Question 1)

    System.out.println("What's your name?");
    name = Scan.nextLine();
    System.out.println("How old are you?");
    age = Scan.nextInt();

I always thought Scan.next and so on had to start with a lowercase s. It does not work with a lowercase s (works with a capital S though); I get this error when I do use it:

    Error:
    AgeStatus.java:18: cannot find symbol
    symbol  : variable scan
    location: class AgeStatus
    name = scan.nextLine();
           ^
    1 error
    ----jGRASP wedge: exit code for process is 1.
    ----jGRASP: operation complete.

(the program still runs but I'd like to learn why I still got a compile error and I'm a newb programmer so any help is appreciated :P)

And my second question is: how can I put everything on one line? When I run the program it outputs as this:

    ----jGRASP exec: java AgeStatus
    What's your name?
    Dude
    How old are you?
    26
    Dude is 26 years old and is
    young.
    ----jGRASP: operation complete.

I'd like my program to read out as:

    Dude is 26 years old and is young.

Here is my program:

    // AgeStatus.java

    import java.util.Scanner;

    public class AgeStatus {

        // Prompts for name and age
        public static void main(String[] args) {
            Scanner Scan = new Scanner (System.in);
            String name;
            int age;

            System.out.println("What's your name?");
            name = Scan.nextLine();
            System.out.println("How old are you?");
            age = Scan.nextInt();

            System.out.println(name+" is "+age+" years old and is");

            if(age < 30) {
                System.out.println("young.");
            } else if( age >= 30 && age < 60) {
                System.out.println("middle-aged.");
            } else if( age > 60) {
                System.out.println("old!");
            }
        }
    }

Thanks in advance!
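The second question comes down to System.out.print versus System.out.println: print leaves the cursor on the same line, while println ends the line. A trimmed sketch of the relevant part, with name and age hard-coded so it runs without input:

```java
public class AgeStatusDemo {
    public static void main(String[] args) {
        String name = "Dude";
        int age = 26;

        // print (no newline) keeps the status on the same line;
        // the final println ends the line.
        System.out.print(name + " is " + age + " years old and is ");

        if (age < 30) {
            System.out.println("young.");
        } else if (age < 60) {
            System.out.println("middle-aged.");
        } else {
            System.out.println("old!");
        }
    }
}
```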
https://www.daniweb.com/programming/software-development/threads/312489/print-all-on-one-line
Building a Notepad Application from Scratch with Ionic (StencilJS)

In this tutorial, we will build a simple notepad application from scratch with Ionic, using StencilJS. I have attempted to strike a balance between optimised/best-practice code, and something that is just straight-forward and easy enough for beginners to understand. Sometimes the "best" way to do things can look a lot more complex and intimidating, and doesn't serve much of a purpose for beginners until they have the basics covered. You can always introduce more advanced concepts to your code as you continue to learn.

- Displaying data

This tutorial will assume that you have read at least some introductory content about Ionic, have a general understanding of what it is, and that you have everything that you need to build Ionic applications set up on your machine. You will also need to have a basic understanding of HTML, CSS, and JavaScript. If you do not already have everything set up, or you don't have a basic understanding of the structure of an Ionic/StencilJS project, take a look at the additional resource below for a video walkthrough.

Additional resources:

1. Generate a New Ionic Project

Time to get started! To begin, we will need to generate a new Ionic/StencilJS application. We can do that with the following command:

npm init stencil

At this point, you will be prompted to pick a "starter" from these options:

? Pick a starter › - Use arrow-keys. Return to submit.
❯  ionic-pwa    Everything you need to build fast, production ready PWAs
   app          Minimal starter for building a Stencil app or website
   component    Collection of web components that can be used anywhere

We will want to choose the ionic-pwa option, which will create a StencilJS application for us with Ionic already installed. Just select that option with your arrow keys, and then hit Enter. You will also need to supply a Project name, you can use whatever you like for this, but I have called my project ionic-stencil-notepad. After this step, just hit Y when asked to confirm.

You can now make this new project your working directory by running the following command:

cd ionic-stencil-notepad

and you can run the application in the browser with this command:

npm start

2. Create the Required Pages/Services

This application will have two pages:

- Home (a list of all notes)
- Detail (the details of a particular note)

and we will also be making use of the following services/providers:

- Notes
- Storage

The Notes service will be responsible for handling most of the logic around creating, updating, and deleting notes. The Storage service will be responsible for helping us store and retrieve data from local storage. We are going to create these now so that we can just focus on the implementation later.

The application is generated with an app-home component (we use "components" as our "pages") by default, so we can keep that and make use of it, but we will get rid of the app-profile component that is also automatically generated.

Delete the following folder:

src/components/app-profile

Create the following files in your project:

src/components/app-detail/app-detail.tsx
src/components/app-detail/app-detail.css
src/services/notes.ts
src/services/storage.ts

We have created an additional app-detail component for our Detail page - this component includes both a .tsx file that will contain the template and logic, and a .css file for styling.
We also create a notes.ts file that will define our Notes service and a storage.ts file that will define our Storage service. Notice that these files have an extension of .ts, which is just a regular TypeScript file, as opposed to the .tsx extension the component has, which is a TypeScript + JSX file. If you are not already familiar with JSX/TSX then I would recommend that you read the following resource before continuing.

Additional resources:

3. Setting up Navigation/Routing

Now we move on to our first real bit of work - setting up the routes for the application. The "routes" in our application determine which page/component to show when a particular URL path is active. We will already have most of the work done for us by default, we will just need to add/change a couple of things. Modify the render() method in src/components/app-root/app-root.tsx to reflect the following:

    render() {
      return (
        <ion-app>
          <ion-router>
            <ion-route url="/" component="app-home" />
            <ion-route url="/notes" component="app-home" />
            <ion-route url="/notes/:id" component="app-detail" />
          </ion-router>
          <ion-nav />
        </ion-app>
      );
    }

Routes in our Ionic/StencilJS application are defined by using the <ion-route> component. If you require more of an introduction to navigation in an Ionic/StencilJS application, check out the additional resource below:

Additional resources:

We have kept the default route for app-home but we have also added an additional /notes route that will link to the same component. This is purely cosmetic and is not required. By doing this, I think that the URL structure will make a little more sense. For example, to view all notes we would go to /notes and to view a specific note we would go to /notes/4. We've also added a route for the app-detail component that looks like this:

/notes/:id

By adding :id to the path, which is prefixed by a colon : we are creating a route that will accept parameters which we will be able to grab later (i.e.
URL parameters allow us to pass dynamic values through the URL).

Modify src/components/app-home/app-home.tsx to reflect the following:

    import { Component, h } from "@stencil/core";

    @Component({
      tag: "app-home",
      styleUrl: "app-home.css"
    })
    export class AppHome {
      // ...
    }

We are not going to implement all of the functionality/logic for our Home page just yet, but let's make a few minor changes so that we are ready for later functionality. There are two key bits of functionality that we will want to interact with from our pages in this application. The first is Ionic's ion-alert-controller which will allow us to display alert prompts and request user input (e.g. we will prompt the user for the title of the note when they want to create a new note). The second is Ionic's ion-router, so that we can programmatically control navigation (among other things which we will touch on later).

To use this functionality, we need to get a reference to the Ionic web components that provide that functionality - this is done simply enough by using document.querySelector to grab a reference to the actual web component in the document. If you are not familiar with this concept, I would recommend first watching the additional resource below:

Additional resources:

We already have an <ion-router> in our application (since that is used to contain our routes), so we can just grab a reference to that whenever we need it. However, in order to create alerts we will need to add the <ion-alert-controller> to our application. We will add this to the root component's template in src/components/app-root/app-root.tsx:

    render() {
      return (
        <ion-app>
          <ion-router>
            <ion-route url="/" component="app-home" />
            <ion-route url="/notes" component="app-home" />
            <ion-route url="/notes/:id" component="app-detail" />
          </ion-router>
          <ion-alert-controller />
          <ion-nav />
        </ion-app>
      );
    }

Notice that we have added <ion-alert-controller> in the template above. Now we will just need to grab a reference to that in our home page (and we are going to add a couple more things here as well).
Modify src/components/app-home/app-home.tsx to reflect the following:

    import { Component, h } from "@stencil/core";

    @Component({
      tag: "app-home",
      styleUrl: "app-home.css"
    })
    export class AppHome {
      componentDidLoad() {}

      addNote() {
        const alertCtrl = document.querySelector("ion-alert-controller");
        console.log(alertCtrl);
      }

      // ...
    }

You can see that we have created an addNote method that we will use for creating new notes. We haven't fully implemented this yet, but we have created the reference to the ion-alert-controller that we need in order to launch the alert prompt that will ask the user for the title of their note. We have also added a componentDidLoad lifecycle hook - this functions just like a regular method, except that it will be triggered automatically as soon as the home component has loaded. We will make use of this later.

It's going to be hard for us to go much further than this without starting to work on our Notes service, as this is what is going to allow us to add, update, and delete notes. Without it, we won't have anything to display!

5. Creating an Interface

Before we implement our Notes service, we are going to define exactly "what" a note is by creating our own custom type with an interface. If you are not already familiar with interfaces, take a look at the additional resource below. Create a folder and file at src/interfaces/note.ts.

Additional resources:

- Creating Custom Interfaces (NOTE: This video is for Angular, but the concepts are mostly the same)

6. Implement the Notes Service

The "page" components in our application are responsible for displaying views/templates on the screen to the user. Although they are able to implement logic of their own, we will leave most of the data handling to our services. Since our Notes service will rely on adding data to, and retrieving data from, the browser's local storage (which will allow us to persist notes across application reloads), we should tackle creating the Storage service first.
Modify src/services/storage.ts to reflect the following:

    const storage = window.localStorage;

    export function set(key: string, value: any): Promise<void> {
      return new Promise((resolve, reject) => {
        try {
          storage && storage.setItem(key, JSON.stringify(value));
          resolve();
        } catch (err) {
          reject(`Couldnt store object ${err}`);
        }
      });
    }

    export function remove(key: string): Promise<void> {
      return new Promise((resolve, reject) => {
        try {
          storage && storage.removeItem(key);
          resolve();
        } catch (err) {
          reject(`Couldnt remove object ${err}`);
        }
      });
    }

    export function get(key: string): Promise<any> {
      return new Promise((resolve, reject) => {
        try {
          if (storage) {
            const item = storage.getItem(key);
            resolve(JSON.parse(item));
          } else {
            resolve(undefined);
          }
        } catch (err) {
          reject(`Couldnt get object: ${err}`);
        }
      });
    }

The main purpose of this service is to provide three methods that we can easily use to interact with local storage: get, set, and remove. This service allows us to contain all of the "ugly" code in one place, and then throughout the rest of the application we can just make simple calls to get, set, and remove to store the data that we want.

For more information about how browser based local storage works, and a more advanced solution for dealing with storage, check out the additional resource below. The solution detailed in the tutorial below will make use of the best storage mechanism available depending on the platform the application is running on (e.g. on iOS and Android it will use native storage, instead of the browsers local storage). This is generally a better solution than the above, but it does depend on using Capacitor in your project.

Additional resources:

Now let's implement the code for our Notes service, and then talk through it. I've added comments to various parts of the code itself, but we will also talk through it below.
import { set, get } from "./storage";
import { Note } from "../interfaces/note";

class NotesServiceController {
  public notes: Note[];

  async load(): Promise<Note[]> {
    if (this.notes) {
      return this.notes;
    } else {
      this.notes = (await get("notes")) || [];
      return this.notes;
    }
  }

  async save(): Promise<void> {
    return await set("notes", this.notes);
  }

  getNote(id): Note {
    // (body elided in the source)
  }

  updateNote(note, content): void {
    // Get the index in the array of the note that was passed in
    let index = this.notes.indexOf(note);
    this.notes[index].content = content;
  }
}

export const NotesService = new NotesServiceController();

First of all, if you are not familiar with the general concept of a “service” in StencilJS, I would recommend reading the following additional resource first.

Additional resources:

At the top of the file, we import our Storage methods that we want to make use of, and the interface that we created to represent a Note. Inside of our service, we have set up a notes class member which will be an array of our notes (the Note[] type means it will be an array of our Note type we created). Variables declared above the methods in our service (like in any class) will be accessible throughout the entire class using this.notes. Our load function is responsible for loading data in from storage (if it exists) and then setting it up on the this.notes array. If the data has already been loaded we return it immediately, otherwise we load the data from storage first. If there is no data in storage (e.g. the result from the get call is null) then we instead return an empty array (e.g. []) instead of a null value. This method (and others) is marked as async which means that they run outside of the normal flow of the application and can “wait” for operations to complete whilst the rest of the application continues executing. In this case, if we need to load data in from storage then we need to “wait” for that operation to complete.
It is important to understand the difference between synchronous and asynchronous code behaviour, as well as how async/await works. If you are not already familiar with this, then you can check out the following resources.

Additional resources:

- Understanding Asynchronous Code
- Using Async/Await Syntax (NOTE: This tutorial is for Angular, but the same concepts apply)

The save function in our service simply sets our notes array on the notes key in storage so that it can be retrieved later - we will call this whenever there is a change in data, so that when we reload the application the changes are still there. The updateNote method will find a particular note and update its content, and the deleteNote method will find a particular note and remove it.

7. Finishing the Notes Page

With our notes service in place, we can now finish off our Home page. We will need to make some modifications to both the class and the template. We have now added a call to the load method of the Notes service, which will handle loading in the data from storage as soon as the application has started. This will also set up the data on the notes class member in the home page. Notice that we have also decorated our notes class member with the @State() decorator - since we want the template to update whenever notes changes, we need to add the @State() decorator to it (otherwise, the template will not re-render and it will continue to display old data). For more information on this, check out the additional resource below.

Additional resources:

The addNote() method will now also allow the user to add a new note. We will create an event binding in the template later to tie a button click to this method, which will launch an Alert prompt on screen. This prompt will allow the user to enter a title for their new note, and when they click Save the new note will be added.
Since creating the alert prompt is “asynchronous” we need to mark the addNote method as async in order to be able to await the creation of the alert prompt. In the “handler” for this prompt, we trigger adding the new note using our service, and we also reload the data in our home page so that it includes the newly added note by reassigning this.notes. The reason we use this weird syntax:

this.notes = [...(await NotesService.load())];

instead of just this:

this.notes = await NotesService.load();

Is because in order for StencilJS to detect a change and display it in the template, the variable must be reassigned completely (not just modified). In the second example the data would be updated, but it would not display in the template. The first example creates a new array like this: this.notes = [/* values in here */] and inside of that new array, the “spread” operator (i.e. ...) is used to pull all of the values out of the array returned from the load call, and add them to this new array. For a real world analogy, consider having a box full of books. Instead of just taking a book out of the box full of books to get the result we want, we are getting a new empty box and moving all of the books over to this new box (except for the books we no longer want). This is a round-a-bout way of doing the exact same thing, but the difference is that by moving all of our books to the new box StencilJS will be able to detect and display the change. This isn’t really intuitive, but if you ever run into a situation in StencilJS where you are updating your data but not seeing the change in your template, it’s probably because you either:

- Didn’t use the @State decorator, or
- You are modifying an array/object instead of creating a new array/object

Now we just need to finish off the template. We have modified our button in the header section to include an onClick event binding that is linked to the addNote() method.
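Coming back to the reassignment rule above, it can be illustrated with a plain JavaScript sketch. This is a hypothetical example of my own (the hasChanged helper is not part of StencilJS): frameworks that detect changes by comparing references only see an update when the variable points at a new array.

```javascript
// Hypothetical sketch: reference-equality change detection, similar in
// spirit to how StencilJS decides whether a @State member has changed.
function hasChanged(oldValue, newValue) {
  return oldValue !== newValue; // compares references, not contents
}

const notes = [{ id: 1, title: "First note" }];

// Mutating in place: the reference is unchanged, so no change is detected.
const mutated = notes;
mutated.push({ id: 2, title: "Second note" });
console.log(hasChanged(notes, mutated)); // false - same array object

// Reassigning via the spread operator: a brand new array, change detected.
const reassigned = [...mutated];
console.log(hasChanged(notes, reassigned)); // true - different array object
```

This is why reassigning with the spread operator triggers a re-render while pushing into the existing array would not.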
This will trigger the addNote() method we just created whenever the user clicks on the button. We have also modified our notes list:

<ion-list>
  {this.notes.map(note => (
    <ion-item button detail href={`/notes/${note.id}`}>
      <ion-label>{note.title}</ion-label>
    </ion-item>
  ))}
</ion-list>

Rather than just having a single item, we are now using a map which will loop over each note in our notes array. Since we want to view the details of an individual note by clicking on it, we set up the following href value:

href={`/notes/${note.id}`}

By using curly braces here, we are able to render out the value of whatever note.id is in our string. In this case, we want to first evaluate the expression /notes/${note.id} to something like '/notes/12' before assigning it to the href. An interpolation, which is an expression surrounded by curly braces, is just a way to evaluate an expression before rendering it out on the screen. Therefore, this will display the title of whatever note is currently being looped over in our map. These concepts are expanded upon in the tutorial on JSX, so make sure to check that out if you are feeling a little lost at this point.

Additional resources:

First we will implement just the logic, and then we will implement the template.
Modify src/components/app-detail/app-detail.tsx

import { Component, State, Prop, h } from "@stencil/core";
import { Note } from "../../interfaces/note";
import { NotesService } from "../../services/notes";

@Component({
  tag: "app-detail",
  styleUrl: "app-detail.css"
})
export class AppDetail {
  public navCtrl = document.querySelector("ion-router");

  @Prop() id: string;

  @State() note: Note = {
    id: null,
    title: "",
    content: ""
  };

  async componentDidLoad() {
    await NotesService.load();
    this.note = await NotesService.getNote(this.id);
  }

  noteChanged(ev) {
    NotesService.updateNote(this.note, ev.target.value);
    NotesService.save();
  }

  deleteNote() {
    setTimeout(() => {
      NotesService.deleteNote(this.note);
    }, 300);
    this.navCtrl.back();
  }

  render() {
    return [<ion-content />];
  }
}

In this page, we use the @Prop decorator. If we give this prop the same name as the parameter we set up in our <ion-route> to pass in the id, this will allow us to get access to the value that was passed in through the URL. We want to do that because we need to access the id of the note that is supplied through the route. In our componentDidLoad method, we then use that id to grab the specific note from the notes service. However, we first have to check if the data has been loaded yet since it is possible to load this page directly through the URL (rather than going through the home page first). To make sure that the data has been loaded, we just make a call to the load method from our Notes service. The note is then assigned to the this.note class member. Since the note is not immediately available to the template, we initialise an empty note with blank values so that our template won’t complain about data that does not exist. Once the note has been successfully fetched, the data in the template will automatically update since we are using the @State decorator. To navigate back to the home page after deleting a note, we use the back method of the ion-router.
We actually wait for 300ms using a setTimeout before we delete the note, since we want the note to be deleted after we have navigated back to the home page. Modify the template in src/components/app-detail/app-detail.tsx to reflect the following:

<ion-header>
  <ion-toolbar>
    <ion-buttons>
      <ion-back-button />
    </ion-buttons>
    <ion-title>{this.note.title}</ion-title>
    <ion-buttons>
      <ion-button onClick={() => this.deleteNote()}>
        <ion-icon />
      </ion-button>
    </ion-buttons>
  </ion-toolbar>
</ion-header>,
<ion-content>
  <ion-textarea onInput={ev => this.noteChanged(ev)} value={this.note.content} />
</ion-content>

The value of the text area is bound to this.note.content. Whenever this value changes, we trigger the noteChanged method and pass in the new value. This will then save that new value. This is a rather simplistic approach to forms. If you would like a more advanced look at how to manage forms in StencilJS, take a look at this preview chapter from my book Creating Ionic Applications with StencilJS:

Additional resources:

We will actually need to make one more change to our home page to finish off the functionality for the application. Currently, the home page loads the data when the component first loads. However, since we can now delete notes, that means the data on the home page might need to change as a result of what happens on our detail page. To account for this, we are going to set up a listener that will detect every time that the home page is loaded, and we will be able to run some code to refresh the data.

@State() notes: Note[] = [];
public navCtrl = document.querySelector("ion-router");

async componentDidLoad() {
  this.navCtrl.addEventListener("ionRouteDidChange", async () => {
    this.notes = [...(await NotesService.load())];
  });
}

We now have a reference to the ion-router and we set up a listener for the ionRouteDidChange event which will be triggered every time this page is activated. Open the src/global/app.scss file. You will find a bunch of CSS variables in this file that can be modified (if you aren’t familiar with CSS4 variables, I would recommend checking out the additional resources section near the end of this post).
Add the following to the bottom of src/global/app.scss:

/* Document Level Styles */

Check out the additional resource below on Shadow DOM, as this is a big part of styling in Ionic.

Modify src/components/app-detail/app-detail.css to reflect the following:

ion-textarea {
  background: #fff !important;
}

If you are looking for something to speed up your Ionic/StencilJS learning journey, then take a look at the Creating Ionic Applications with StencilJS book.
https://www.joshmorony.com/building-a-notepad-application-from-scratch-with-ionic-and-stencil-js/
Chapter 24 Interoperability

24.2 Interchanging with Seurat

Figure 24.1: Need to add this at some point.

24.3 Interchanging with scanpy

Figure 24.2: Need to add this at some point.

Session information (partially recoverable from the source): rebook_1.0.0; loaded via a namespace (and not attached): bookdown_0.21, codetools_0.2-18, XML_3.99-0.5, ps_1.5.0, digest_0.6.27, stats4_4.0.3, magrittr_2.0.1, evaluate_0.14, highr_0.8, graph_1.68.0, rlang_0.4.9, stringi_1.5.3, callr_3.5.1, rmarkdown_2.5, tools_4.0.3, stringr_1.4.0, processx_3.4.5, parallel_4.0.3, xfun_0.19, yaml_2.2.1, compiler_4.0.3, BiocGenerics_0.36.0, BiocManager_1.30.10, htmltools_0.5.0, CodeDepends_0.6.5, knitr_1.30
https://bioconductor.org/books/release/OSCA/interoperability.html
Behaviours Use Cases

Setting a Default Description

Description

This simple example can be used as a tutorial. Further worked examples can be found in the Recipes section - Behaviours Examples. To create a new Behaviour: Go to the Administration screen, and click the Behaviours link in the Behaviours section, or press gg or . and type Behaviours. Enter a Name and Description for the new behaviour. Click on Fields. Click the Create Initialiser link. Enter the following in the Script section (leave the first two fields blank):

def desc = getFieldById("description")
def defaultValue = """h2. How to reproduce
* step 1
* step 2

h2. Expected Result
The widget should appear

h2. Actual Result
The widget doesn't appear""".replaceAll(/ /, '')
if (!underlyingIssue?.description) { (1)
    desc.setFormValue(defaultValue)
}

Click Update. Click the Add one now link to map this behaviour to a project. Select one or more projects, then click Add Mapping. This is a very simple configuration and should be used to check everything is working. For information about Jira Service Desk mappings, see Using Behaviours with Service Desk. Create a new issue in the project associated with this behaviour. You should see the default description.

Live Editing

Using an inline script can be painful as you have to keep clicking buttons and saving. It’s more productive to point to a file so it can be updated automatically. When configured like this you can modify your code without even leaving the Resolve Issue dialog, let alone doing a page refresh.

Live Editing for IDE Users

If using an IDE you can get code completion by adding the following lines at the beginning of your script:

import com.onresolve.jira.groovy.user.FieldBehaviours
import groovy.transform.BaseScript

@BaseScript FieldBehaviours fieldBehaviours
https://scriptrunner.adaptavist.com/6.9.0/jira/behaviours-use-cases.html
in reply to Re^4: Perl Modules -- abstraction and interfaces
in thread Perl Modules

I'm for sure not the best monk in the monastery to explain this, but it seems you liked my way to express it.

> are they inherited into html (package I'd add) automagically?

No, they are not. You have two options:

The plain, easy way: let Exporter do its job via @EXPORT_OK, as already said. Read through the manual to know more about it. In your module Project::Display (capitalize the first char, as is idiomatic common practice - and idiomatic IS a good thing), a module that uses Exporter (and also states that @ISA = qw(Exporter); but more on this later), every sub defined in your module that is also in @EXPORT_OK can be used from other scripts or modules that include something like use Project::Display qw(a_sub_you_put_in_EXPORTER_OK another_one etc); and this is good. Keep the whole thing simple, well named, ordered, and with some sense and everything will run smooth.

The Object Oriented way: on this path, knowing a little terminology is another good thing to do: perldoc has perlglossary just in case, but for Object Oriented Perl (OO for brevity) the principal source of wisdom will be the object oriented fundamentals in perlootut. This path may be trickier, but perhaps you are more inclined to program OO than in other ways. One of the basic concepts is inheritance: with the already seen @ISA you put in your Project::Display module, you asserted that your module IS-A Exporter: practically, if a method (a new term from OO, but it is simply a sub..) is not found in Project::Display, it will be searched for in every module (well, package) you put into the @ISA array. So Project::Display::Console and Project::Display::HTML will both have @ISA = qw(Project::Display); very soon stated.
Then, in your consumer script that uses these modules/classes, you create an object: a simple scalar emitted by a constructor defined in the module. By tradition that special constructor sub, defined in a module/class/package, is named new and this sub will bless that scalar: i.e. it marks this scalar as belonging to a particular module/class/package. Doing so you will be able to do my $tv = Project::Display->new( more => "params", can_be => "supplied"); in your script, and if Project::Display defines a sub format_70_chars then your $tv object can call this method: $tv->format_70_chars( $some_text );

But now you want to use inheritance and want to subclass Project::Display and have a handy Project::Display::Console class to be able to draw text into boxes done with - and | signs. You create this package/module/class stating that this package's @ISA is a Project::Display object. This Project::Display::Console will NOT have its own new method: it inherits it directly from Project::Display (you'll learn how to accomplish this correctly), but this new class just defines a draw_in_box method. If everything is written correctly, only an object created as an instance of the class Project::Display::Console can call draw_in_box, while objects created as Project::Display::HTML cannot. But both objects inherit the new method from the Project::Display base class.

Take a read of perlobj and consider reading the objects part of the must have ModernPerl free book (even if it just shows the Moose implementation and not the plain perl OO I showed you).

ouch! i wrote a lot! read your manual now!

L*

the next question on my mind....i'll use code to describe the question, cuz it gets my point across far easier than trying to use words to describe the question

package Blah::Blah::BlackSheep;
use strict;
use warnings;
# etc, etc...etc...

my $loggedin = cookie_get("loggedin"); # does exactly what you think it does...
my $page = get_param(get_constant($db, "QUERY_PAGE")); # the page the user wants to visit
# cookie_get, get_param, get_constant are all subs I created to ease my coding a little, despite it being a tad...convoluted. it works. that's what matters

if (not allowed($page, $loggedin)) {
    print cookie_set("error", "Access Denied!");
    print "location: /\n\n";
    exit 1;
}

sub something { return "blah"; }
sub something2 { return "bleet"; }
# etc, etc, etc...
1;

why do i ask? I don't want to have to code the above into EVERY script i write....that's prone to errors, and can be a real B when a change is needed....

is this "good practice" - i mean the bit before the first sub?

I would answer this with "no". As you said, it might work, but there are two main reasons I would recommend against it. First a practical point: usually, when you write a module like that, you'd be loading it with use. However, as documented in use, that code gets run in a BEGIN block, that is, while the calling code is still being compiled. For some code, it will still work, but on the other hand you may get some unexpected effects (e.g. this thread). Also, modules might not yet be loaded and/or initialized depending on the order of the use statements in the calling code! (Even if you use require to load modules at runtime instead of compile time, note that this will lead to other possibly unexpected effects, like the code in that module still being executed only once, or you having to use parentheses on all your subroutine calls imported from that module.) This is why modules that are intended to be used usually limit themselves to declarations of subs and package variables (Update: and often the exporting of those subs into the caller's namespace, e.g. Exporter), and if they do have initialization code, it is limited to things that are internal to that module without externally visible side effects.
(Update 2: Note I'm ignoring more complicated things like custom pragmas here, which are also just special kinds of modules.) Second, something to do more with convention: When I load a module with use, I don't expect that call to have any side effects other than loading that module, but the code you showed not only prints something, it may even kill my whole program with exit! So what can you do instead? If you want to write a module that is intended to be loaded with use, then it's easiest when it only contains sub definitions and perhaps declarations of package variables (our). If it does execute code on loading, then it should only be for its internal initialization, i.e. it shouldn't have any side effects visible to outside code, and it needs to consider that it is being executed in a BEGIN block, as I explained above (Update: i.e., I wouldn't mess with cookies or CGI params at that point yet). If you want to include initialization code with effects visible to the outside code, then for example, put it in a sub init that the user has to explicitly call. Another approach, which I would consider to be less elegant, would be to use do to explicitly execute another Perl file. Unlike require, that Perl file will be executed every time you call do instead of only once, so it's more like calling a sub than loading a module. However, in that case it's not necessarily the best place to put subs etc., so you'd have to consider splitting the code you showed into two parts. (So as you can tell, it's probably easier to just put the init code in a sub init instead.)
https://www.perlmonks.org/?node_id=1193790
Everything that is related to application development, and other cool stuff... Animation, whether via an animated gif (a sequence of frames) or custom drawing, does not come for free. As a rule of thumb, the more colors and the more frames your animated gif has, the larger the memory footprint and the higher the CPU utilization. With custom painting, the more complex the drawing, the higher the CPU utilization. So, how big is the CPU utilization impact? I have conducted a simple test, where I used an animated gif file with 8 frames, each image 25x25 pixels, frames rotated every 0.05 sec, and file size optimized by using transparency color. For comparison, I created a custom control that basically displayed the same animated image (see code below). In the first test run, I used one image and monitored CPU utilization once/sec using perfmon. The animated gif was showing roughly a 0.681% CPU utilization; and custom drawing was recorded as 0% most of the time with spikes to 0.750%. Keep in mind that the animation speed was 20 times faster than perfmon sampling. In the second test run, I used 60 instances of animated gif and compared it against 60 instances of the custom drawing implementation. Animated gif recorded an average CPU utilization of 37.729%, varying from 33.594% to 43.750%; while custom drawing showed a significantly larger range from 23.438% to 50.781% with an average CPU utilization of 40.183%. Keep in mind that the quality of the implementation does make a difference, and in my case, I threw something together in a few minutes… so, there might be some opportunities for improvement. The tests were run on a Toshiba Tecra M5, 2 GHz, 2 GB.

My “rules of thumb”:
1. Use animated GIFs over custom drawing
2. Limit animated GIFs to 4-8 frames and 256 colors or less

Note: site has a number of useful, and not too memory/CPU hungry animated GIFs.
Custom drawing code used in testing:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Text;
using System.Windows.Forms;

namespace AnimatedGIFTest
{
    public partial class CustomDrawing : UserControl
    {
        private bool _firstDraw = true;
        private bool _disposed = false;
        private Timer _timer;
        private Point _centerPoint;
        private int _currentDotNumber = 0;
        private const int DotCount = 8;
        private const int DotRadius = 2;
        private const int Radius = 11;

        public CustomDrawing()
        {
            InitializeComponent();
            this.Resize += new EventHandler(CustomDrawing_Resize);
            _centerPoint = new Point(this.Width / 2, this.Height / 2);
            _timer = new Timer();
            _timer.Tick += new EventHandler(Timer_Tick);
            _timer.Interval = 80;
            if (this.DesignMode == false)
                _timer.Enabled = true;
            base.Paint += new PaintEventHandler(CustomDrawing_Paint);
        }

        void CustomDrawing_Resize(object sender, EventArgs e)
        {
            // (body elided in the source)
        }

        Point GetPoint(Point pointCenter, int radius, float angle)
        {
            // Transform degrees to radians by using (2*Math.PI*angle/360) logic
            float x = (float)Math.Cos(2 * Math.PI * angle / 360) * radius + pointCenter.X;
            float y = -(float)Math.Sin(2 * Math.PI * angle / 360) * radius + pointCenter.Y;
            return new Point((int)x, (int)y);
        }

        Color DotColor(int dotNumber)
        {
            Color result;
            // 8 dots: 3 black, 2 gray, 3 white
            switch (dotNumber + _currentDotNumber)
            {
                case 1:
                case 5:
                    result = Color.Gray;
                    break;
                case 2:
                case 3:
                case 4:
                    result = Color.White;
                    break;
                default:
                    result = Color.Black;
                    break;
            }
            return result;
        }

        int GetDotNumber(int offset)
        {
            int result;
            if (_currentDotNumber + offset >= DotCount)
                result = DotCount - (_currentDotNumber + offset);
            else
                result = _currentDotNumber + offset;
            return result;
        }

        void CustomDrawing_Paint(object sender, PaintEventArgs e)
        {
            Pen pen = null;
            SolidBrush fillBrush = null;
            try
            {
                e.Graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
                pen = new Pen(Color.Black);
                fillBrush = new SolidBrush(Color.Black);
                float angle;
                int[] drawPoints;
                if (_firstDraw == true)
                    drawPoints = new int[] { 0, 1, 2, 3, 4, 5, 6, 7 };
                else
                    drawPoints = new int[] { _currentDotNumber, GetDotNumber(1), GetDotNumber(4), GetDotNumber(5) };
                foreach (int i in drawPoints)
                {
                    // Only repaint changed dots
                    angle = 360 - (360 * (i) / 8) + 90;
                    Point pt = GetPoint(_centerPoint, Radius, angle);
                    Rectangle drawRect = new Rectangle(pt.X - DotRadius, pt.Y - DotRadius, DotRadius * 2 + 1, DotRadius * 2 + 1);
                    if (e.ClipRectangle.IntersectsWith(drawRect))
                    {
                        fillBrush.Color = DotColor(i);
                        e.Graphics.DrawEllipse(pen, drawRect);
                        e.Graphics.FillEllipse(fillBrush, drawRect);
                    }
                }
                _firstDraw = false;
            }
            finally
            {
                if (pen != null)
                    pen.Dispose();
                if (fillBrush != null)
                    fillBrush.Dispose();
            }
        }

        void Timer_Tick(object sender, EventArgs e)
        {
            float angle;
            _currentDotNumber = GetDotNumber(1);
            // Only repaint changed dots
            int[] drawPoints = new int[] { _currentDotNumber, GetDotNumber(1), GetDotNumber(4), GetDotNumber(5) };
            foreach (int i in drawPoints)
            {
                angle = 360 - (360 * (i) / 8) + 90;
                Point pt = GetPoint(new Point(this.Width / 2, this.Height / 2), Radius, angle);
                base.Invalidate(new Rectangle(pt.X - DotRadius, pt.Y - DotRadius, DotRadius * 2 + 1, DotRadius * 2 + 1));
            }
            _firstDraw = false;
        }
    }
}
http://blogs.msdn.com/irenak/archive/2006/10/19/sysk-222-cpu-impact-of-animated-gifs.aspx
15 August 2012 17:35 [Source: ICIS news] LONDON (ICIS)--“With rising food costs, biofuel production can be a contributory factor to hunger in the world,” minister Dirk Niebel said. “The [E10] blending mandate means, at the end of the day, that people will have less food, and that’s why we have to rethink this,” Niebel said in a webcast interview on German television. Niebel also noted that E10 was never popular with German drivers. He added that he is not opposed to biofuels derived from non-food biomass. However, Only about 4% of In addition, the current rise in food prices was primarily the result of the extensive drought
http://www.icis.com/Articles/2012/08/15/9587378/germany-minister-wants-to-halt-e10-to-counter-soaring-food-prices.html
Tips and tricks from my Telegram-channel @pythonetc, December 2018

This is a new selection of tips and tricks about Python and programming from my Telegram-channel @pythonetc.

Multiple context managers

Sometimes you want to run a code block with multiple context managers:

with open('f') as f:
    with open('g') as g:
        with open('h') as h:
            pass

Since Python 2.7 and 3.1, you can do it with a single with expression:

o = open
with o('f') as f, o('g') as g, o('h') as h:
    pass

Before that, you could use the contextlib.nested function:

with nested(o('f'), o('g'), o('h')) as (f, g, h):
    pass

If you are working with an unknown number of context managers, a more advanced tool suits you well. contextlib.ExitStack allows you to enter any number of contexts at an arbitrary time but guarantees to exit them at the end:

with ExitStack() as stack:
    f = stack.enter_context(o('f'))
    g = stack.enter_context(o('g'))
    other = [
        stack.enter_context(o(filename))
        for filename in filenames
    ]

Objects in the interpreter memory

All objects that currently exist in the interpreter memory can be accessed via gc.get_objects():

In : class A:
...:     def __init__(self, x):
...:         self._x = x
...:
...:     def __repr__(self):
...:         class_name = type(self).__name__
...:         x = self._x
...:         return f'{class_name}({x!r})'
...:
In : A(1)
Out: A(1)
In : A(2)
Out: A(2)
In : A(3)
Out: A(3)
In : [x for x in gc.get_objects() if isinstance(x, A)]
Out: [A(1), A(2), A(3)]

Digit symbols

In : int('୧৬༣')
Out: 163

0 1 2 3 4 5 6 7 8 9 are not the only characters that are considered digits. Python follows Unicode rules and treats several hundred symbols as digits; here is the full list.
That affects functions like int, unicode.isdecimal and even re.match:

In : int('෯')
Out: 9
In : '٢'.isdecimal()
Out: True
In : bool(re.match('\d', '౫'))
Out: True

UTC midnight

>>> bool(datetime(2018, 1, 1).time())
False
>>> bool(datetime(2018, 1, 1, 13, 12, 11).time())
True

Before Python 3.5, datetime.time() objects were considered false if they represented UTC midnight. That can lead to obscure bugs. In the following example, the if not branch may run not because created_time is None, but because it's midnight.

def create(created_time=None) -> None:
    if not created_time:
        created_time = datetime.now().time()

You can fix that by explicitly testing for None: if created_time is None.

Asynchronous file operations

There is no support in Python for asynchronous file operations. To make them non-blocking, you have to use separate threads. To asynchronously run code in a thread, you should use the loop.run_in_executor method. The third party aiofiles module does all this for you, providing a nice and simple interface:

async with aiofiles.open('filename', mode='r') as f:
    contents = await f.read()

Source: habr.com/ru/company/mailru/blog/436322
https://habr.com/en/company/mailru/blog/436324/comments/?mobile=no
NavMeshDataInstance Representing the added navmesh.

Adds the specified NavMeshData to the game. This makes the NavMesh data available for agents and NavMesh queries. Returns an instance for later removing the NavMesh data from the runtime. The instance returned will be valid unless the NavMesh data could not be added - e.g. due to running out of memory or navmesh data being loaded from a corrupted file. See Also: NavMeshDataInstance, NavMesh.RemoveNavMeshData.

NavMeshDataInstance Representing the added navmesh.

Adds the specified NavMeshData to the game. This function is similar to AddNavMeshData above, but the position and rotation specified is applied in addition to the position and rotation where the NavMesh data was baked.

using UnityEngine;
using UnityEngine.AI;

class Example : MonoBehaviour
{
    public NavMeshData data;
    NavMeshDataInstance[] instances = new NavMeshDataInstance[2];

    public void OnEnable()
    {
        // Add an instance of navmesh data
        instances[0] = NavMesh.AddNavMeshData(data);

        // Add another instance of the same navmesh data - displaced and rotated
        instances[1] = NavMesh.AddNavMeshData(data, new Vector3(0, 5, 0), Quaternion.AngleAxis(90, Vector3.up));
    }

    public void OnDisable()
    {
        instances[0].Remove();
        instances[1].Remove();
    }
}
https://docs.unity3d.com/ScriptReference/AI.NavMesh.AddNavMeshData.html
There are multiple headers available for developers and ops people to manipulate cache behavior. The old spec mixes with the new, there are numerous settings to configure, and you can find multiple users reporting inconsistent behavior. In this post, I focus on explaining how different headers influence the browser cache and how they relate to proxy servers. You're going to find an example configuration for Nginx and code for Node.js running Express. In the end, I look into how popular services created in React serve their web applications.

For a single page application, I'm interested in caching JavaScript, CSS, fonts, and image files indefinitely, and in preventing caching of HTML files and a Service Worker if you have any. This strategy is viable because my asset files have unique identifiers in their file names. You can achieve the same by configuring webpack to include a [hash] or, even better, a [chunkhash] in the file name of your assets. This technique is called long-term caching.

But when you prevent re-downloading, how do you then make updates to your website? Maintaining the ability to update the website is why it's so important never to cache HTML files. Every time you visit my site, the browser fetches a fresh copy of the HTML file from the server, and only when there are new script srcs or link hrefs does the browser download a new asset from the server.

Cache-Control

    Cache-Control: no-store

The browser should not store anything about the request when it's told no-store. You can use it for HTML and the Service Worker script.

    Cache-Control: public, no-cache
    or
    Cache-Control: public, max-age=0, must-revalidate

These two are equivalent and, despite the no-cache name, allow serving cached responses, with the exception that the browser has to validate whether the cache is fresh. If you correctly set ETag or Last-Modified headers so that the browser can verify that it already has the recent version cached, you and your users are going to save on bandwidth.
You can use it for HTML and the Service Worker script.

    Cache-Control: private, no-cache
    or
    Cache-Control: private, max-age=0, must-revalidate

By analogy, these two are also equivalent. The difference between public and private is that a shared cache (e.g., a CDN) can cache public responses but not private responses. The local cache (e.g., the browser) can still cache private responses. You use private when you render your HTML on the server and the rendered HTML contains user-specific or sensitive information. In framework terms, you don't need to set private for a typical Gatsby blog, but you should consider it with Next.js for pages that require authorized access.

    Cache-Control: public, max-age=31536000, immutable

In this example, the browser is going to cache the response for a year according to the max-age directive (60*60*24*365 seconds). The immutable directive tells the browser that the content of this response (file) is not going to change, and the browser should not validate its cache by sending If-None-Match (ETag validation) or If-Modified-Since (Last-Modified validation). Use it for your static assets to support long-term caching strategies.

Pragma and Expires

    Pragma: no-cache
    Expires: <http-date>

Pragma is an old header defined in the HTTP/1.0 spec as a request header. Later, the HTTP/1.1 spec states that a Pragma: no-cache response should be handled as Cache-Control: no-cache, but it's not a reliable replacement because it's still a request header. I also keep using Pragma: no-cache per the OWASP security recommendation. Including the Pragma: no-cache header is a precaution against legacy servers that don't support newer cache control mechanisms and could cache what you don't intend to be cached. Some would argue that unless you have to support Internet Explorer 5 or Netscape, you don't need either Pragma or Expires. It comes down to supporting legacy software. Proxies universally understand the Expires header, which gives it a slight edge.
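As a quick sanity check on the max-age arithmetic above, here is a tiny Python sketch (the helper names are mine, not from the post) that builds the two Cache-Control values discussed so far:

```python
# Hypothetical helpers illustrating the two caching policies above.
ONE_YEAR = 60 * 60 * 24 * 365  # seconds in a (non-leap) year

def asset_cache_control(max_age=ONE_YEAR):
    # Hashed static assets never change, so mark them immutable.
    return f"public, max-age={max_age}, immutable"

def html_cache_control():
    # HTML must be revalidated on every request.
    return "public, no-cache"

print(ONE_YEAR)               # 31536000
print(asset_cache_control())  # public, max-age=31536000, immutable
print(html_cache_control())   # public, no-cache
```

The first value matches the long-term caching header used throughout this post; the second matches the HTML policy.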
For HTML files, I keep the Expires header disabled or set it to a past date. For static assets, I manage it together with Cache-Control's max-age via the Nginx expires directive.

ETags

    ETag: W/"5e15153d-120f"
    or
    ETag: "5e15153d-120f"

ETags are one of several methods of cache validation. An ETag must uniquely identify the resource, and most often the web server generates the fingerprint from the resource content. When the resource changes, it's going to have a different ETag value. There are two types of ETags. Weak ETag equality indicates that resources are semantically equivalent. Strong ETag validation indicates that resources are byte-to-byte identical. You can distinguish between them by the "W/" prefix set for weak ETags. Weak ETags are not suitable for byte-range requests but are easy to generate on the fly. In practice, you are not going to set ETags on your own; let your web server handle them.

    curl -I <http-address>
    curl -I -H "Accept-Encoding: gzip" <http-address>

You may see that when you request a static file from Nginx, it's going to set a strong ETag. When gzip compression is enabled but you didn't upload compressed files, the on-the-fly compression results in weak ETags.

By sending the "If-None-Match" request header with the ETag of a cached resource, the browser expects either a 200 OK response with a new resource or an empty 304 Not Modified response, which indicates that the cached resource should be used instead of downloading a new one.

Less utilized, but no less important for frontend developers, is the fact that the same optimization can apply to API GET responses and is not limited to static files. If your application receives large JSON payloads, you can configure your backend to calculate an ETag from the content of the payload (e.g., using md5) and, before sending it to the client, compare it with the "If-None-Match" request header.
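A minimal Python sketch of that API-level optimization (the function names are hypothetical, not from the post): hash the serialized payload into an ETag and compare it with the client's If-None-Match before deciding whether to send the body at all.

```python
import hashlib
import json

# Sketch of ETag-based 304 handling for a JSON API response, as
# described above. Helper names are illustrative only.
def compute_etag(payload):
    # Stable serialization so equal payloads hash identically.
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    return '"%s"' % hashlib.md5(body).hexdigest()

def respond(payload, if_none_match=None):
    etag = compute_etag(payload)
    if if_none_match == etag:
        # Client's cached copy is current: empty 304, no body sent.
        return 304, None, etag
    return 200, payload, etag

status, body, etag = respond({"items": [1, 2, 3]})
assert status == 200
# The client replays the ETag via If-None-Match on the next request.
status, body, _ = respond({"items": [1, 2, 3]}, if_none_match=etag)
assert status == 304 and body is None
```

In a real framework you would read If-None-Match from the request headers and set the ETag response header, but the comparison logic is the same.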
If there's a match, instead of sending the payload, send 304 Not Modified to save on bandwidth and improve web app performance.

Last-Modified

    Last-Modified: Tue, 07 Jan 2020 23:33:17 GMT

The Last-Modified response header is another cache control mechanism and uses the last modification date. The Last-Modified header is a fallback mechanism for the more accurate ETags. By sending the "If-Modified-Since" request header with the last modification date of a cached resource, the browser expects either a 200 OK response with a newer resource or an empty 304 Not Modified response, which indicates that the cached resource should be used instead of downloading a new one.

Debugging

When you set headers and then test the configuration, make sure you're close to your server with regard to the network. What I mean by that is: if you have your server Dockerized, run the container and test it locally. If you configure a VM, then ssh to that VM and test headers there. If you have a Kubernetes cluster, spin up a pod and call your service from within the cluster. In a production setup, you're going to work with load balancers, proxies, and CDNs. At each of those steps, your headers can get modified, so it's much easier to debug knowing your server sent correct headers in the first place.

An example of such unexpected behavior can be Cloudflare removing the ETag header if you have Email Address Obfuscation or Automatic HTTPS Rewrites enabled. Good luck trying to debug that by changing your server configuration! In Cloudflare's defense, this behavior is very well documented and makes perfect sense, so it's on you to know your tools.

    Cache-Control: max-age=31536000
    Cache-Control: public, immutable

Earlier in this post, I've put "or" in between headers in code snippets to indicate that those are two different examples. Sometimes you may notice more than one of the same header in an HTTP response. It means that both headers apply. Some proxy servers can merge headers along the way.
The above example is equal to:

    Cache-Control: max-age=31536000, public, immutable

Using curl is going to give you the most consistent results and ease of running in multiple environments. If you decide to use a web browser regardless, make sure to look at the Service Worker while debugging caching problems. Service Worker debugging is a complex topic for another post. To troubleshoot caching problems, make sure you enable bypassing service workers in the DevTools Application tab.

Nginx Configuration

Now that you understand what different types of caching headers do, it's time to focus on putting your knowledge into practice. The following Nginx configuration is going to serve a Single Page Application that was built to support long-term caching.

First of all, I enabled gzip compression for content types that benefit a Single Page Application the most. For more details on each of the available gzip settings, head to the nginx gzip module documentation.

    location ~* (\.html|\/sw\.js)$ {
        expires -1y;
        add_header Pragma "no-cache";
        add_header Cache-Control "public";
    }

I want to match all HTML files together with /sw.js, which is the Service Worker script. Neither should be cached. The Nginx expires directive set to a negative value sets a past Expires header and adds an additional Cache-Control: no-cache header.

    location ~* \.(js|css|png|jpg|jpeg|gif|ico|json)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

I want to maximize the caching of all my static assets, which are JavaScript files, CSS files, images, and static JSON files. If you host your font files, you can add them as well.

    location / {
        try_files $uri $uri/ =404;
    }

    if ($host ~* ^www\.(.*)) {
        set $host_without_www $1;
        rewrite ^(.*) https://$host_without_www$1 permanent;
    }

Those two are not related to caching but are an essential part of the Nginx configuration.
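As a quick sanity check (my own illustration, not from the post), the two location regexes above can be exercised with Python's re module to confirm which request paths get which caching policy:

```python
import re

# The two Nginx location patterns from above, as Python regexes.
NO_CACHE = re.compile(r"(\.html|\/sw\.js)$")
LONG_TERM = re.compile(r"\.(js|css|png|jpg|jpeg|gif|ico|json)$")

def policy(path):
    # Hypothetical helper mirroring the matching order used above:
    # HTML and the Service Worker first, then static assets.
    if NO_CACHE.search(path):
        return "no-cache"
    if LONG_TERM.search(path):
        return "long-term"
    return "default"

print(policy("/index.html"))          # no-cache
print(policy("/sw.js"))               # no-cache
print(policy("/static/app.4f3c.js"))  # long-term
print(policy("/api/users"))           # default
```

Note that /sw.js hits the no-cache branch even though it ends in .js, which is exactly why the Service Worker pattern is checked alongside HTML rather than left to the asset rule.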
Since modern Single Page Applications support routing for pretty URLs and my static server is not aware of them, I would like to serve a default index.html for every route that doesn't match a static file. I'm also interested in redirects from URLs with www. to URLs without www. You might not need the last one in case you host your application where your service provider already does that for you.

Express Configuration

Sometimes we are unable to serve static files using a reverse proxy server like Nginx. It might be the case that your serverless setup or service provider limits you to using one of the popular programming languages, and performance is not your primary concern. In such a case, you might want to use a server like Express to serve your static files.

    import express, { Response } from "express";
    import compression from "compression";
    import path from "path";

    const PORT = process.env.PORT || 3000;
    const BUILD_PATH = "public";

    const app = express();

    function setNoCache(res: Response) {
      const date = new Date();
      date.setFullYear(date.getFullYear() - 1);
      res.setHeader("Expires", date.toUTCString());
      res.setHeader("Pragma", "no-cache");
      res.setHeader("Cache-Control", "public, no-cache");
    }

    function setLongTermCache(res: Response) {
      const date = new Date();
      date.setFullYear(date.getFullYear() + 1);
      res.setHeader("Expires", date.toUTCString());
      res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
    }

    app.use(compression());

    app.use(
      express.static(BUILD_PATH, {
        extensions: ["html"],
        setHeaders(res, path) {
          if (path.match(/(\.html|\/sw\.js)$/)) {
            setNoCache(res);
            return;
          }
          if (path.match(/\.(js|css|png|jpg|jpeg|gif|ico|json)$/)) {
            setLongTermCache(res);
          }
        },
      }),
    );

    app.get("*", (req, res) => {
      setNoCache(res);
      res.sendFile(path.resolve(BUILD_PATH, "index.html"));
    });

    app.listen(PORT, () => {
      console.log(`Server is running on port ${PORT}`);
    });

This script mimics what our Nginx configuration is doing. It enables gzip using the compression middleware.
Express's static middleware sets the ETag and Last-Modified headers for you. We have to handle sending index.html on our own in case the request doesn't match any known static file.

Examples

Finally, I wanted to explore how popular services utilize caching headers. I checked headers separately for HTML and CSS or JavaScript files. I also looked at the Server header (if any), as it might give us an interesting insight into the underlying infrastructure.

Twitter tries very hard for their HTML files not to end up in your browser cache. It looks like Twitter is using Express to serve us the <div id="react-root"> entry point for the React app. For whatever reason, Twitter uses an Expiry header, and the Expires header is missing. I've looked it up but didn't find anything interesting. Might it be a typo? If you know, please leave a comment.

    cache-control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0
    expiry: Tue, 31 Mar 1981 05:00:00 GMT
    last-modified: Wed, 08 Jan 2020 22:16:19 GMT (current date)
    pragma: no-cache
    server: tsa_o
    x-powered-by: Express

Twitter doesn't have CSS files and is probably using some CSS-in-JS solution. It looks like a containerized application running on Amazon ECS is serving the static files.

    etag: "fXSAIt9bnXh6KGXnV0ABwQ=="
    expires: Thu, 07 Jan 2021 22:19:54 GMT
    last-modified: Sat, 07 Dec 2019 22:27:21 GMT
    server: ECS (via/F339)

Instagram doesn't want your browser to cache HTML either and uses a valid Expires header set to the beginning of the year 2000; any past date is as good as any other.

    last-modified: Wed, 08 Jan 2020 21:45:45 GMT
    cache-control: private, no-cache, no-store, must-revalidate
    pragma: no-cache
    expires: Sat, 01 Jan 2000 00:00:00 GMT

Both CSS and JavaScript files served by Instagram support long-term caching and also have an ETag.

    etag: "3d0c27ff077a"
    cache-control: public,max-age=31536000,immutable

New York Times

The New York Times is also using React and serves its articles as server-side rendered pages.
The last modification date seems to be a real date that doesn't change with every request.

    cache-control: no-cache
    last-modified: Wed, 08 Jan 2020 21:54:09 GMT
    server: nginx

New York Times assets are also cached for a long time, with both ETag and Last-Modified date provided.

    cache-control: public,max-age=31536000
    etag: "42db6c8821fec0e2b3837b2ea2ece8fe"
    expires: Wed, 24 Jun 2020 23:27:22 GMT
    last-modified: Tue, 25 Jun 2019 22:51:52 GMT
    server: UploadServer

Wrap up

I've created this post partially to organize my knowledge, but I also intend to use it as a cheat sheet for configuring current and future projects. I hope you enjoyed reading and found it useful! If you have any questions or would like to suggest an improvement, please leave a comment below, and I'll be happy to answer!

This article was originally posted on LogRocket's blog: Caching headers: A practical guide for frontend developers

Photo by JOSHUA COLEMAN on Unsplash.
https://michalzalecki.com/caching-headers/
In Glenn Block's excellent article "Prism: Patterns for Building Composite Applications with WPF" in the September 2008 issue, he explains the Microsoft Composite Application Guidance for WPF. In that guidance, the term Presentation Model is used to describe the abstraction of a view; the term ViewModel is never used. I find the ViewModel terminology much more prevalent in the WPF and Silverlight communities, so throughout this article I'll refer to the pattern as MVVM and the abstraction of a view as a ViewModel. Unlike the Presenter in MVP, a ViewModel does not need a reference to a view. Instead, the view binds to properties on a ViewModel, which, in turn, exposes data contained in model objects and other state specific to the view. The bindings between view and ViewModel are simple to construct because a ViewModel object is set as the DataContext of a view. If property values in the ViewModel change, those new values automatically propagate to the view via data binding. When the user clicks a button in the View, a command on the ViewModel executes to perform the requested action. The ViewModel, never the View, performs all modifications made to the model data. The view classes have no idea that the model classes exist, while the ViewModel and model are unaware of the view. In fact, the model is completely oblivious to the fact that the ViewModel and view exist. This is a very loosely coupled design, which pays dividends in many ways, as you will soon see.

Why WPF Developers Love MVVM

Once a developer becomes comfortable with WPF and MVVM, it can be difficult to differentiate the two. MVVM is the lingua franca of WPF developers because it is well suited to the WPF platform, and WPF was designed to make it easy to build applications using the MVVM pattern (amongst others). In fact, Microsoft was using MVVM internally to develop WPF applications, such as Microsoft Expression Blend, while the core WPF platform was under construction. Many aspects of WPF, such as the look-less control model and data templates, utilize the strong separation of display from state and behavior promoted by MVVM.

The single most important aspect of WPF that makes MVVM a great pattern to use is the data binding infrastructure. By binding properties of a view to a ViewModel, you get loose coupling between the two and entirely remove the need for writing code in a ViewModel that directly updates a view. The data binding system also supports input validation, which provides a standardized way of transmitting validation errors to a view.

Two other features of WPF that make this pattern so usable are data templates and the resource system. Data templates apply Views to ViewModel objects shown in the user interface. You can declare templates in XAML and let the resource system automatically locate and apply those templates for you at run time. You can learn more about binding and data templates in my July 2008 article, "Data and WPF: Customize Data Display with Data Binding and WPF."

If it were not for the support for commands in WPF, the MVVM pattern would be much less powerful. In this article, I will show you how a ViewModel can expose commands to a View, thus allowing the view to consume its functionality. If you aren't familiar with commanding, I recommend that you read Brian Noyes's comprehensive article, "Advanced WPF: Understanding Routed Events and Commands in WPF," from the September 2008 issue.

In addition to the WPF (and Silverlight 2) features that make MVVM a natural way to structure an application, the pattern is also popular because ViewModel classes are easy to unit test. When an application's interaction logic lives in a set of ViewModel classes, you can easily write code that tests it. In a sense, Views and unit tests are just two different types of ViewModel consumers. Having a suite of tests for an application's ViewModels provides free and fast regression testing, which helps reduce the cost of maintaining an application over time.

In addition to promoting the creation of automated regression tests, the testability of ViewModel classes can assist in properly designing user interfaces that are easy to skin. When you are designing an application, you can often decide whether something should be in the view or the ViewModel by imagining that you want to write a unit test to consume the ViewModel. If you can write unit tests for the ViewModel without creating any UI objects, you can also completely skin the ViewModel because it has no dependencies on specific visual elements.

Lastly, for developers who work with visual designers, using MVVM makes it much easier to create a smooth designer/developer workflow. Since a view is just an arbitrary consumer of a ViewModel, it is easy to just rip one view out and drop in a new view to render a ViewModel. This simple step allows for rapid prototyping and evaluation of user interfaces made by the designers. The development team can focus on creating robust ViewModel classes, and the design team can focus on making user-friendly Views. Connecting the output of both teams can involve little more than ensuring that the correct bindings exist in a view's XAML file.

The Demo Application

At this point, I have reviewed MVVM's history and theory of operation. I also examined why it is so popular amongst WPF developers. Now it is time to roll up your sleeves and see the pattern in action. The demo application that accompanies this article uses MVVM in a variety of ways, and it provides a fertile source of examples to help put the concepts into a meaningful context. I created the demo application in Visual Studio 2008 SP1, against the Microsoft .NET Framework 3.5 SP1. The unit tests run in the Visual Studio unit testing system.

The application can contain any number of "workspaces," each of which the user can open by clicking on a command link in the navigation area on the left. All workspaces live in a TabControl on the main content area. The user can close a workspace by clicking the Close button on that workspace's tab item. The application has two available workspaces: "All Customers" and "New Customer." After running the application and opening some workspaces, the UI looks something like Figure 1.

Figure 1 Workspaces

Only one instance of the "All Customers" workspace can be open at a time, but any number of "New Customer" workspaces can be open at once. When the user decides to create a new customer, she must fill in the data entry form in Figure 2.

Figure 2 New Customer Data Entry Form

After filling in the data entry form with valid values and clicking the Save button, the new customer's name appears in the tab item and that customer is added to the list of all customers. The application does not have support for deleting or editing an existing customer, but that functionality, and many other features similar to it, are easy to implement by building on top of the existing application architecture. Now that you have a high-level understanding of what the demo application does, let's investigate how it was designed and implemented.

Relaying Command Logic
you could remove the views' codebehind files from the project and the application would still compile and run correctly. when the user clicks on buttons. the application reacts and satisfies the user's requests. null) { } public RelayCommand(Action<object> execute. } #endregion // Constructors #region ICommand Members [DebuggerStepThrough] public bool CanExecute(object parameter) { return _canExecute == null ? true : _canExecute(parameter). You can think of the command object as an adapter that makes it easy to consume a ViewModel's functionality from a view declared in XAML. Figure 3 The RelayCommand Class public class RelayCommand : ICommand { #region Fields readonly Action<object> _execute. concise command implementation in ViewModel classes. When a ViewModel exposes an instance property of type ICommand. } public event EventHandler CanExecuteChanged { add { CommandManager. Predicate<object> canExecute) { if (execute == null) throw new ArgumentNullException("execute"). creating a nested class that implements ICommand for each command exposed by a ViewModel can bloat the size of the ViewModel class. In the demo application.Every view in the app has an empty codebehind file. RelayCommand is a simplified variation of the DelegateCommand found in the Microsoft Composite Application Library . _canExecute = canExecute. except for the standard boilerplate code that calls InitializeComponent in the class's constructor. Button. That nested class implements the ICommand interface. This works because of bindings that were established on the Command property of Hyperlink. } } public void Execute(object parameter) { . This approach allows for terse. the RelayCommand class solves this problem. and MenuItem controls displayed in the UI. The RelayCommand class is shown in Figure 3. the command object typically uses that ViewModel object to get its job done. Despite the lack of event handling methods in the views. More code means a greater potential for bugs. 
Those bindings ensure that when the user clicks on the controls. which is part of the ICommand interface implementation. } return _saveCommand. which I will examine in-depth later. The ViewModel classes form the inheritance hierarchy seen in Figure 4. This ensures that the WPF commanding infrastructure asks all RelayCommand objects if they can execute whenever it asks the built-in commands. This problem naturally lends itself to the creations of a ViewModel base class or two. They often need to implement the INotifyPropertyChanged interface. and. so that new ViewModel classes can inherit all of the common functionality from a base class.Save(). be removed from the UI). } #endregion // ICommand Members } The CanExecuteChanged event.RequerySuggested event. in the case of workspaces. public ICommand SaveCommand { get { if (_saveCommand == null) { _saveCommand = new RelayCommand(param => this. param => this.CanSave )._execute(parameter). shows how to configure a RelayCommand with lambda expressions: RelayCommand _saveCommand. they need the ability to close (that is. Figure 4 Inheritance Hierarchy . they usually need to have a user-friendly display name. has some interesting features. It delegates the event subscription to the CommandManager. The following code from the CustomerViewModel class. } } ViewModel Class Hierarchy Most ViewModel classes need the same features. that is not a problem.Having a base class for all of your ViewModels is by no means a requirement. If you prefer to gain features in your classes by composing many smaller classes together. Figure 5 Verifying a Property // In ViewModelBase. such as "View all customers" and "Create new customer. and the bound property on some UI element receives the new value.ThrowOnInvalidPropertyName) throw new Exception(msg). instance property on this object.VerifyPropertyName(propertyName). PropertyChangedEventHandler handler = this. In order for WPF to know which property on the ViewModel object has changed. 
which is why it implements the commonly used INotifyPropertyChanged interface and has a DisplayName property. ICommand command) . The navigation area on the left-hand side of the main window displays a link for each CommandViewModel exposed by MainWindowViewModel. } } [Conditional("DEBUG")] [DebuggerStepThrough] public void VerifyPropertyName(string propertyName) { // Verify that the property name matches a real. so this little feature can be a huge timesaver. Just like any other design pattern. Upon receiving that notification." When the user clicks on a link. the binding system queries the property. One interesting aspect of ViewModelBase is that it provides the ability to verify that a property with a given name actually exists on the ViewModel object. else Debug. You must be careful to pass the correct property name into that event argument. because changing a property's name via the Visual Studio 2008 refactoring feature will not update strings in your source code that happen to contain that property's name (nor should it). The CommandViewModel class definition is shown here: public class CommandViewModel : ViewModelBase { public CommandViewModel(string displayName. This is very useful when refactoring. MainWindowViewModel exposes a collection of these objects through its Commands property.GetProperties(this)[propertyName] == null) { string msg = "Invalid property name: " + propertyName. Raising the PropertyChanged event with an incorrect property name in the event argument can lead to subtle bugs that are difficult to track down. instead of using inheritance. otherwise.PropertyChanged. handler(this. It exposes a property called Command of type ICommand. if (this. The code from ViewModelBase that adds this useful support is shown in Figure 5. if (TypeDescriptor. e). // public. it can raise the PropertyChanged event to notify the WPF binding system of the new value. the PropertyChangedEventArgs class exposes a PropertyName property of type String. 
ViewModelBase Class ViewModelBase is the root class in the hierarchy. } } CommandViewModel Class The simplest concrete ViewModelBase subclass is CommandViewModel. if (handler != null) { var e = new PropertyChangedEventArgs(propertyName). The INotifyPropertyChanged interface contains an event called PropertyChanged. WPF will end up querying the wrong property for a new value.Fail(msg).cs public event PropertyChangedEventHandler PropertyChanged. not rules. Whenever a property on a ViewModel object has a new value. protected virtual void OnPropertyChanged(string propertyName) { this. a workspace opens in the TabControl on the main window. MVVM is a set of guidelines. thus executing one of those commands. MainWindow window = new MainWindow(). // When the ViewModel asks to be closed. string <ItemsControl ItemsSource="{Binding Path=Commands}"> <ItemsControl.OnStartup(e). // Allow all controls in the window to // bind to the ViewModel by setting the ." I mean that something removes the workspace from the user interface at run time.xaml. }. The template simply renders each CommandViewModel object as a link in an ItemsControl. } } In the MainWindowResources. That XAML is shown in Figure 6.RequestClose += delegate { window.DisplayName = displayName.In MainWindowResources. private set. Three classes derive from WorkspaceViewModel: MainWindowViewModel. the WorkspaceViewModel class derives from ViewModelBase and adds the ability to close.ItemTemplate> <DataTemplate> <TextBlock Margin="2.xml". } public ICommand Command { get. and CustomerViewModel. base. By "close. var viewModel = new MainWindowViewModel(path).cs protected override void OnStartup(StartupEventArgs e) { base.6"> <Hyperlink Command="{Binding Path=Command}"> <TextBlock Text="{Binding Path=DisplayName}" /> </Hyperlink> </TextBlock> </DataTemplate> </ItemsControl. MainWindow uses that template to render the collection of CommandViewModels mentioned earlier. NotifyCollectionChangedEventArgs e) { if (e. 
like so: <!-.xaml.OnWorkspacesChanged.cs ObservableCollection<WorkspaceViewModel> _workspaces.// DataContext. Each tab item has a Close button whose Command property is bound to the CloseCommand of its corresponding WorkspaceViewModel instance. window.DataContext = viewModel. MainWindowViewModel monitors the RequestClose event of its workspaces and removes the workspace from the Workspaces collection upon request. _workspaces. the App class responds by calling the window's Close method.In MainWindow. called Workspaces.Count != 0) foreach (WorkspaceViewModel workspace in e.NewItems. } } void OnWorkspacesChanged(object sender. } MainWindow contains a menu item whose Command property is bound to the MainWindowViewModel's CloseCommand property.NewItems) . } return _workspaces. The main window contains a TabControl whose ItemsSource property is bound to that collection. public ObservableCollection<WorkspaceViewModel> Workspaces { get { if (_workspaces == null) { _workspaces = new ObservableCollection<WorkspaceViewModel>(). which propagates down // the element tree. Since the MainWindow's TabControl has its ItemsSource property bound to the observable collection of WorkspaceViewModels. <ContentPresenter Content="{Binding Path=DisplayName}" /> </DockPanel> </DataTemplate> When the user clicks the Close button in a tab item. removing an item from the collection causes the corresponding workspace to be removed from the TabControl.Show(). window. When the user clicks on that menu item.NewItems != null && e. that WorkspaceViewModel's CloseCommand executes. An abridged version of the template that configures each tab item is shown in the code that follows. Figure 8 Removing Workspace from the UI // In MainWindowViewModel. and the template explains how to render a tab item with a Close button: <DataTemplate x: <DockPanel Width="120"> <Button Command="{Binding Path=CloseCommand}" Content="X" DockPanel. That logic from MainWindowViewModel is shown in Figure 8. 
The code is found in MainWindowResources.CollectionChanged += this. causing its RequestClose event to fire. it uses that template to render the ViewModel object referenced by the tab item's Content property.cs [TestMethod] public void TestCloseAllCustomersWorkspace() { // Create the MainWindowViewModel.Count.Remove(sender as WorkspaceViewModel).OldItems) workspace. target. } In the UnitTests project. in WPF a non-visual object is rendered by displaying the results of a call to its ToString method in a TextBlock. "Wrong viewmodel type created.OnWorkspaceRequestClose. // Ensure the correct type of workspace was created. } Applying a View to a ViewModel MainWindowViewModel indirectly adds and removes WorkspaceViewModel objects to and from the main window's TabControl. which means that the resources it contains are in the window's resource scope.Workspaces. "Workspaces isn't empty.Count. By relying on data binding.DisplayName == "View all customers"). "Did not create viewmodel.AreEqual(0.Workspaces. If it finds one. if (e."). If WPF tries to render one of your ViewModel objects. The ease with which you can create unit tests for ViewModel classes is a huge selling point of the MVVM pattern. CommandViewModel commandVM = target.OldItems. // Find the command that opens the "All Customers" workspace. but not the MainWindow. Assert. MainWindowViewModel target = new MainWindowViewModel(Constants.Execute(null). a user control) to render it. allCustomersVM. A typed DataTemplate does not have an x:Key value assigned to it.Execute(null).AreEqual(0. Assert.").Commands. ViewModelBase is not a UI element.RequestClose += this. Assert. That clearly is not what you need.RequestClose -= this. That test method is shown in Figure 9.First(cvm => cvm. When a tab item's content is set to a ViewModel object.workspace. because it allows for simple testing of application functionality without writing code that touches the UI.Workspaces. The MainWindowResources. 
but it does have its DataType property set to an instance of the Type class. the Content property of a TabItem receives a ViewModelBase-derived object to display."). // Tell the "All Customers" workspace to close.Workspaces[0] as AllCustomersViewModel.Workspaces.AreEqual(1. it will check to see if the resource system has a typed DataTemplate in scope whose DataType is the same as (or a base class of) the type of your ViewModel object. That dictionary is added to the main window's resource hierarchy. the MainWindowViewModelTests. var allCustomersVM = target.xaml file has a ResourceDictionary. } void OnWorkspaceRequestClose(object sender.CUSTOMER_DATA_FILE). --> . By default. Figure 9 The Test Method // In MainWindowViewModelTests. target. target. a typed DataTemplate from this dictionary supplies a view (that is.Count.Command. EventArgs e) { this.").Count != 0) foreach (WorkspaceViewModel workspace in e.OnWorkspaceRequestClose. "Did not close viewmodel.IsNotNull(allCustomersVM.cs file contains a test method that verifies that this functionality is working properly. Assert. Figure 10 Supplying a View <!-This resource dictionary is used by the MainWindow.OldItems != null && e. unless your users have a burning desire to see the type name of our ViewModel classes! You can easily tell WPF how to render a ViewModel object by using typed DataTemplates.CloseCommand. so it has no inherent support for rendering itself. commandVM. // Open the "All Customers" workspace. as shown in Figure 10. } set { if (value == _customer.FirstName.ViewModel" xmlns: <vw:CustomerView /> </DataTemplate> <!-.. freeing you up to focus on more important things. The Customer class has nothing in it that suggests it is being used in an MVVM architecture or even in a WPF application. it is possible to programmatically select the view. 
--> <DataTemplate DataType="{x:Type vm:AllCustomersViewModel}"> <vw:AllCustomersView /> </DataTemplate> <!-This template applies a CustomerView to an instance of the CustomerViewModel class shown in the main window. Now that the general plumbing is in place. Before getting deep into the application's two workspaces.View" > <!-This template applies an AllCustomersView to an instance of the AllCustomersViewModel class shown in the main window. but that is not important. CustomerViewModel does not duplicate the state of a Customer. That class has a handful of properties that represent information about a customer of a company. this application's data model is very small. you can review implementation details more specific to the domain of the application. Note that CustomerViewModel is a wrapper around a Customer object. an instance of the CustomerRepository class loads and stores all Customer objects. The CustomerRepository class exposes a few methods that allow you to get all the available Customer objects.com/winfx/2006/xaml/presentation" xmlns:x=". through a set of properties. "All Customers" and "New Customer. a Web service. and check if a Customer is already in the repository. compared to what real business applications require. add new a Customer to the repository. As long as you have a . it simply exposes it via delegation. a file on disk. a named pipe. but in most situations that is unnecessary. such as their first name. The data could come from a database. and other state used by the CustomerView control. regardless of where it came from. The CustomerAdded event fires when a new Customer enters the CustomerRepository. The WPF resource system does all of the heavy lifting for you. last name. It happens to load the customer data from an XML file. but the type of external data source is irrelevant. Clearly.FirstName) return. In this application. like this: public string FirstName { get { return _customer. 
--> </ResourceDictionary> You do not need to write any code that determines which view to show for a ViewModel object.<ResourceDictionary xmlns=". or even carrier pigeons: it simply does not matter.NET object with some data in it. Since the application does not allow the user to delete a customer. displayed. It exposes the state of a Customer. and closed by the application shell. the MVVM pattern can get that data on the screen. The design of those classes has almost nothing to do with the MVVM pattern. The class could easily have come from a legacy business library. . because you can create a ViewModel class to adapt just about any data object into something friendly to WPF. The Data Model and Repository You have seen how ViewModel objects are loaded. which existed for years before WPF hit the street. via the AddCustomer method. the repository does not allow you to remove a customer. and e-mail address.. Data must come from and reside somewhere. It provides validation messages by implementing the standard IDataErrorInfo interface. What is important to understand is how the ViewModel classes make use of Customer and CustomerRepository. In more complex scenarios. The sole model class in the demo program is Customer.Other resources omitted for clarity.com/winfx/2006/xaml" xmlns:vm="clr-namespace:DemoApp. CustomerRepository acts as a synchronization mechanism between various ViewModels that deal with Customer objects. Figure 12 The Test Method // In CustomerViewModelTests. In a sense. which would allow for the "unselected" value. which lets the AllCustomersViewModel know that it should add a new CustomerViewModel to its AllCustomers collection. The test method in Figure 12 shows how this functionality works in CustomerViewModel. However. After the user types valid values into the input fields. When CustomerType is set. The Customer Type selector initially has the value "(Not Specified)". I will review more of how this works in the upcoming sections. 
MainWindowViewModel adds a new CustomerViewModel to its list of workspaces.cs [TestMethod] . it maps the String value to a Boolean value for the underlying Customer object's IsCompany property. There is nothing out of the ordinary here. but it does not meet the needs of the user interface. a well-formed e-mail address.FirstName = value. Perhaps one might think of this as using the Mediator design pattern. a last name. and. This validation logic might make sense from the Customer object's perspective. just a regular data entry form with input validation and a Save button. having a ViewModel comes to the rescue. CustomerViewModel exposes a CustomerTypeOptions property so that the Customer Type selector has three strings to display. What if there is no easy way to persist that "unselected" value because of the existing database schema? What if other applications already use the Customer class and rely on the property being a normal Boolean value? Once again. but for now refer to the diagram in Figure 11 for a high-level understanding of how all the pieces fit together. _customer. If the Customer's IsCompany property returns true. the Save button enters the enabled state so that the user can persist the new customer information. the CustomerViewModel associated with that view will add the new Customer object to the CustomerRepository. Suppose you cannot change the Customer class because it comes from a legacy library owned by a different team in your company. That validation ensures the customer has a first name. you could change the IsCompany property to be of type Nullable<bool>. base. available through its IDataErrorInfo interface implementation. Figure 13 shows the two properties.} } When the user creates a new customer and clicks the Save button in the CustomerView control. How can the UI tell the user that the customer type is unspecified if the IsCompany property of a Customer only allows for a true or false value? 
Assuming you have complete control over the entire software system. and a CustomerView control displays it. The UI requires a user to select whether a new customer is a person or a company.OnPropertyChanged("FirstName"). The Customer class has built-in validation support. if the customer is a person. Figure 11 Customer Relationships New Customer Data Entry Form When the user clicks the "Create new customer" link. the real world is not always so simple. which stores the selected String in the selector. That causes the repository's CustomerAdded event to fire. It also exposes a CustomerType property. the LastName property cannot have a value (the idea being that a company does not have a last name). } } public string CustomerType { get { return _customerType. "Error message should be returned"). } base. repos).CustomerType = "Person". target. } } The CustomerView control contains a ComboBox that is bound to those properties. target. "Company" }.IsTrue(cust. base. target.IsCompany. "Should be a person"). } else if (_customerType == "Person") { _customer. "Should be a company").IsNullOrEmpty(error).IsCompany = true. "Person".CreateNewCustomer(). CustomerRepository repos = new CustomerRepository( Constants.OnPropertyChanged("LastName"). Assert. } set { if (value == _customerType || String.IsNullOrEmpty(value)) return. CustomerViewModel target = new CustomerViewModel(cust. as seen here: .IsFalse(String.OnPropertyChanged("CustomerType").CustomerType = "(Not Specified)".public void TestCustomerType() { Customer cust = Customer. } Figure 13 CustomerType Properties // In CustomerViewModel.IsCompany.IsCompany = false.IsFalse(cust.CUSTOMER_DATA_FILE).CustomerType = "Company" Assert. } return _customerTypeOptions. _customerType = value. if (_customerType == "Company") { _customer.cs public string[] CustomerTypeOptions { get { if (_customerTypeOptions == null) { _customerTypeOptions = new string[] { "(Not Specified)". Assert. 
string error = (target as IDataErrorInfo)["CustomerType"]. the data source's IDataErrorInfo interface is queried to see if the new value is valid. } } string ValidateCustomerType() { if (this. } The key aspect of this code is that CustomerViewModel's implementation of IDataErrorInfo can handle requests for ViewModel-specific property validation and delegate the other requests to the Customer object. The ability to save a CustomerViewModel is available to a view through the SaveCommand property. the CustomerViewModel class must handle validating the new selected item in the ComboBox control. The CustomerViewModel // class handles this mapping and validation. and the CustomerViewModel must decide if it is valid. error = this. } // Dirty the commands registered with CommandManager. since Customer has no notion of having an unselected state for the IsCompany property. That occurs because the SelectedItem property binding has ValidatesOnDataErrors set to true.CustomerType == "Company" || this.cs return error. Figure 15 The Save Logic for CustomerViewModel // In CustomerViewModel. That code is seen in Figure 14. Deciding if the new customer is ready to be saved requires consent from two parties. // such as our Save command. This two-part decision is necessary because of the ViewModel-specific properties and validation examined previously.<ComboBox ItemsSource="{Binding CustomerTypeOptions}" SelectedItem="{Binding CustomerType. if (propertyName == "CustomerType") { // The IsCompany property of the Customer class // is Boolean. return "Customer type must be selected". This allows you to make use of validation logic in Model classes and have additional validation for properties that only make sense to ViewModel classes. so it has no concept of being in // an "unselected" state.ValidateCustomerType(). However. That command uses the RelayCommand class examined earlier to allow CustomerViewModel to decide if it can save itself and what to do when told to save its state. 
The Customer object must be asked if it is valid or not. the binding system asks that CustomerViewModel for a validation error on the CustomerType property. In this application. } else { error = (_customer as IDataErrorInfo)[propertyName]. The save logic for CustomerViewModel is shown in Figure 15.InvalidateRequerySuggested(). Most of the time. CustomerViewModel delegates all requests for validation errors to the Customer object it contains.CustomerType == "Person") return null.this[string propertyName] { get { string error = null.cs string IDataErrorInfo. so that they are queried // to see if they can execute now. Since the data source is a CustomerViewModel object. ValidatesOnDataErrors=True}" /> When the selected item in that ComboBox changes. saving a new customer simply means adding it to a CustomerRepository. Figure 14 Validating a CustomerViewModel Object // In CustomerViewModel. . CommandManager. at most.IsValid) throw new InvalidOperationException(". the view would require a lot of code to make this work properly. } return _saveCommand.ValidateCustomerType()) && _customer. The CustomerViewModel class has no idea what visual elements display it. only contain code that manipulates the controls and resources contained within that view. It accomplishes this by binding the ListView's ItemsSource to a CollectionViewSource configured like Figure 16. you saw how a CustomerViewModel can render as a data entry form. .CanSave ).ContainsCustomer(_customer).AddCustomer(_customer)..IsNullOrEmpty(this. Each ListViewItem represents a CustomerViewModel object in the AllCustomers collection exposed by the AllCustomerViewModel object. bool IsNewCustomer { get { return !_customerRepository.public ICommand SaveCommand { get { if (_saveCommand == null) { _saveCommand = new RelayCommand( param => this. and now the exact same CustomerViewModel object is rendered as an item in a ListView.OnPropertyChanged("DisplayName"). the codebehind for most Views should be empty. 
} } The use of a ViewModel here makes it much easier to create a view that can display a Customer object and allow for things like an "unselected" state of a Boolean property. if (this. Sometimes it is also necessary to write code in a View's codebehind that interacts with a ViewModel object.IsValid.Save(). which is why this reuse is possible.IsNewCustomer) _customerRepository. All Customers View The demo application also contains a workspace that displays all of the customers in a ListView. The user can select one or more customers at a time and view the sum of their total sales in the bottom right corner. or."). } base. param => this. In the previous section. The UI is the AllCustomersView control. In a well-designed MVVM architecture. which renders an AllCustomersViewModel object. } } bool CanSave { get { return String.. AllCustomersView creates the groups seen in the ListView. The customers in the list are grouped according to whether they are a company or a person. such as hooking an event or calling a method that would otherwise be very difficult to invoke from the ViewModel itself. If the view were bound directly to a Customer object. It also provides the ability to easily tell the customer to save its state. } } public void Save() { if (!_customer. . so that the ContentPresenter beneath the ListView can display the correct number. Mode=TwoWay}" /> </Style> When a CustomerViewModel is selected or unselected.TotalSales : 0. and does not execute in a Release build.AllCustomers. // so that it will be queried again for a new value.SortDescriptions> </CollectionViewSource> The association between a ListViewItem and a CustomerViewModel object is established by the ListView's ItemContainerStyle property. (sender as CustomerViewModel). as seen here: <Style x: <!-Stretch the content of each cell so that we can right-align text in the Total Sales column.IsSelected ? custVM. Figure 17 Monitoring for Selected or Unselected // In AllCustomersViewModel. 
we must let the // world know that the TotalSelectedSales property has changed.GroupDescriptions> <CollectionViewSource.Sum( custVM => custVM. This is a debugging // technique. PropertyChangedEventArgs e) { string <scm:SortDescription </CollectionViewSource. --> <Setter Property="HorizontalContentAlignment" Value="Stretch" /> <!-Bind the IsSelected property of a ListViewItem to the IsSelected property of a CustomerViewModel object.cs public double TotalSelectedSales { get { return this.GroupDescriptions> <PropertyGroupDescription PropertyName="IsCompany" /> </CollectionViewSource.xaml --> <CollectionViewSource x: <CollectionViewSource.SortDescriptions> <!-Sort descending by IsCompany so that the ' True' values appear first.VerifyPropertyName(IsSelected). // Make sure that the property name we're // referencing is valid. The AllCustomersViewModel class is responsible for maintaining that value. } } void OnCustomerViewModelPropertyChanged(object sender. --> <Setter Property="IsSelected" Value="{Binding Path=IsSelected. One important binding in that Style creates a link between the IsSelected property of a ListViewItem and the IsSelected property of a CustomerViewModel. which enables properties on a ListViewItem to be bound to properties on the CustomerViewModel.0). The Style assigned to that property is applied to each ListViewItem. which means that companies will always be listed before people. // When a customer is selected or unselected. Figure 17 shows how AllCustomersViewModel monitors each customer for being selected or unselected and notifies the view that it needs to update the display value.In AllCustomersView. that causes the sum of all selected customers' total sales to change. In AllCustomersView. It allows you to create a strong separation between data.OnPropertyChanged("TotalSelectedSales"). He was awarded the Microsoft MVP title for his work in the WPF community. 
The ModelView-ViewModel pattern is a simple and effective set of guidelines for designing and implementing a WPF application.xaml --> <StackPanel Orientation="Horizontal"> <TextBlock Text="Total selected sales: " /> <ContentPresenter Content="{Binding Path=TotalSelectedSales}" ContentStringFormat="c" /> </StackPanel> Wrapping Up WPF has a lot to offer application developers. and presentation. Josh Smith is passionate about using WPF to create great user experiences. and exploring New York City with his girlfriend. I would like to thank John Gossman for his help with this article. if (e. behavior.NET Framework 3.com .} The UI binds to the TotalSelectedSales property and applies currency (monetary) formatting to the value. so if you must target an older version of WPF. .PropertyName == IsSelected) this. Josh works for Infragistics in the Experience Design Group. reading about history.wordpress. The ContentStringFormat property of ContentPresenter was added in the . he enjoys playing the piano. and learning to leverage that power requires a mindset shift. You can visit Josh's blog at joshsmithonwpf. instead of the view. making it easier to control the chaos that is software development.5 SP1. you will need to apply the currency formatting in code: <!-. by returning a String instead of a Double value from the TotalSelectedSales property. The ViewModel object could apply the currency formatting. When he is not at a computer. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/84458413/WPF-Apps-With-the-Model
Ingo Molnar wrote:
> * Mike Travis <travis@sgi.com> wrote:
>
>> Hi Ingo,
>>
>> Please pull the following 'fairly lightweight' changes for tip/cpus4096.
>
>> --- a/arch/x86/kernel/apic.c
>> +++ b/arch/x86/kernel/apic.c
>> @@ -1562,8 +1562,69 @@ void __init init_apic_mappings(void)
>>   * This initializes the IO-APIC and APIC hardware if this is
>>   * a UP kernel.
>>   */
>> +
>> +#if MAX_APICS < 256
>>  int apic_version[MAX_APICS];
>>
>> +#else
>> +struct apic_version_info {
>> +	unsigned int apicid;
>> +	int version;
>> +};
>> +
>> +struct apic_version_info _apic_version_info[CONFIG_NR_CPUS] __initdata;
>> +struct apic_version_info *apic_version_info __refdata = _apic_version_info;
>> +int nr_apic_version_info;
>> +
>> +/* can be called either during init or cpu hotplug add */
>> +int __cpuinit add_apic_version(unsigned int apicid, int version)
>> +{
>> +	int i;
>> +
>> +	for (i = 0; i < nr_apic_version_info; i++)
>> +		if (apicid == apic_version_info[i].apicid) {
>> +			apic_version_info[i].version = version;
>> +			return 0;
>> +		}
>> +
>> +	if (likely(nr_apic_version_info < nr_cpu_ids)) {
>> +		i = nr_apic_version_info++;
>> +		apic_version_info[i].apicid = apicid;
>> +		apic_version_info[i].version = version;
>> +		return 0;
>> +	}
>> +	return -ENOMEM;
>> +}
>> +
>> +/* lookup version for apic, usually first one (boot cpu) */
>> +int get_apic_version(unsigned int apicid)
>> +{
>> +	int i;
>> +
>> +	for (i = 0; i < nr_apic_version_info; i++)
>> +		if (apicid == apic_version_info[i].apicid)
>> +			return apic_version_info[i].version;
>> +
>> +	return 0;
>> +}
>> +
>> +/* allocate permanent apic_version structure */
>> +void __init cleanup_apic_version(void)
>> +{
>> +	size_t size;
>> +	int i;
>> +
>> +	/* allows disabled_cpus to be brought online */
>> +	size = nr_cpu_ids * sizeof(*apic_version_info);
>> +	apic_version_info = alloc_bootmem(size);
>> +
>> +	/* copy version info from initial array to permanent array */
>> +	for (i = 0; i < nr_apic_version_info; i++)
>> +		apic_version_info[i] = _apic_version_info[i];
>> +}
>> +
>> +#endif /* MAX_APICS >= 256 */
>
> this is all but 'lightweight'. A 'lightweight' patch is that which either
> is less than say a dozen lines or one that changes a provably trivial
> aspect of the kernel. This patch goes to the guts of the APIC code and is
> not only complex but also very ugly:
>
> - it's riddled with #ifdefs
>
> - it splits the testing space between <256 apics and large systems - guess
>   which one will get 99% of the testing?
>
> And why is it all done, a hundred lines of very ugly code in a fragile
> area of the x86 architecture? Because you want to shrink this array:
>
>   int apic_version[MAX_APICS];
>
> which can take 128K of RAM if MAX_APICs is 32K ...
>
> Firstly, if you want to shrink the APIC version array, you might want to
> shrink it the most obvious way: by changing the 'int' to 'char' via a
> oneliner patch. APIC version values tend to be significantly below 256.
> That gives 75% of space savings already.
>
> Secondly, you might even observe the fact that the set of systems with
> assymetric APIC versions approximates that of the nil set. Assymetric SMP
> never took off and probably never will - vendors have hard enough time
> getting the very same type of CPU working across the board.
>
> And _even_ if there existed such systems, we already have a lot of other
> places where symmetry is assumed and _material_ APIC version assymetry to
> the boot CPU's APIC version will very likely not work anyway, on the
> physical signalling level.
>
> So the simplest approach might as well be to turn apic_version into a
> single __read_mostly boot_apic_version variable. Maybe also a
> WARN_ONCE("whee") message if an APIC is seen with a version different from
> that of the boot CPU.
>
> _Please_ think these changes through because these kinds of mindless
> complication patches are not acceptable at all.
>
> 	Ingo

Hi Ingo,

I did notice that the versions all came up the same, and that the checks
were very specific. I was trying to be as transparent and unintrusive as
possible. Since there's so few calls, I thought this was a good approach,
but apparently I was wrong.

I like the idea of collapsing the array down to one and checking to
see if all apic's have the same version, but is this really the case?
Must all apics be the same?

Thanks,
Mike
http://lkml.org/lkml/2009/1/16/326
Thanks to the power of code, your pets are never more than a POST request away. Shannon Turner built a Twilio MMS, Raspberry Pi + Django hack so she can see what her winged companion, a lovely parrot, is up to when she's out. "Any time I miss my pet, a photo of him playing with his toy is only a text message away," says Shannon. "I've had BudgieCam running for just over a month and I've already taken over 200 photos and nearly 100 videos."

Building BudgieCam

this is what greets me whenever I walk into the room pic.twitter.com/HMIQwsAWsk
— Shannon Turner (@svthmc) August 5, 2016

When Shannon is on the road, she sends a text to her Twilio-powered number. Twilio receives that request and fires off a POST request to Shannon's Django site, which triggers her Raspberry Pi (complete with Raspberry Pi Camera) to take a photo. The photo is stored on Shannon's Pi server. Then Django instructs Twilio to send an MMS with the photo of Shannon's pretty bird back to Shannon. This all takes place in the span of a few seconds.

Shannon usually doesn't do this. She normally ships civic-minded hacks: hacks that help women learn to code, recommend movies that pass the Bechdel test, or out politicians whose ideals are quite behind the times. "This hack is just for fun," says Shannon.

If you want to engineer a little fun and a whole lot more pet pictures into your life, here's the code Shannon used to build BudgieCam on GitHub. Take a look at the Twilio integration below, and the MMS docs will give you a good foothold in tackling a project like this.
from django.shortcuts import render
from django.views.generic.base import TemplateView

import subprocess
import time

from twilio.rest import TwilioRestClient

from budgie_settings import BUDGIE_PASSPHRASE, BUDGIE_FILE_PATH, BUDGIE_WEB_PATH, RASPI_IP
from twilio_credentials import ACCOUNT_SID, AUTH_TOKEN


class BudgieCamView(TemplateView):

    def get(self, request, **kwargs):
        """ Response Code 418: I'm a teapot """
        template = 'response.html'
        context = {'response': '418'}
        return render(request, template, context)

    def post(self, request, **kwargs):
        """
        Twilio is configured to POST to this URL when a text message is received.
        1. Receive text message
        2. Verify text message and continue if verified
        3. Snap photo (use subprocess module)
        4. Photo needs to be accessible via a URL
        5. Use Twilio API to attach photo to SMS
        """
        text_message = request.POST.get('Body')
        requesting_phone_number = request.POST.get('From')
        budgiecam_phone_number = request.POST.get('To')
        context = {}

        if text_message:
            if BUDGIE_PASSPHRASE in text_message.lower():
                if 'video' in text_message.lower():
                    try:
                        budgie_filename = '{0}.h264'.format(''.join(['{0:02d}'.format(x) for x in time.localtime()[:6]]))
                        # raspivid -o video.h264 -t 10000
                        subprocess.call(['raspivid', '--nopreview', '-t', '30000', '-o', '{0}{1}'.format(BUDGIE_FILE_PATH, budgie_filename)])

                        # This would convert the h264 video to mp4 but unfortunately it
                        # doesn't run quickly enough on the Raspberry Pi.
                        # Maybe later versions of the Pi would be able to handle it,
                        # but this one can't.
                        # try:
                        #     print "\t Converting {0} to mp4".format(budgie_filename)
                        #     subprocess.call([
                        #         'ffmpeg',
                        #         '-i',
                        #         '{0}{1}'.format(BUDGIE_FILE_PATH, budgie_filename),
                        #         "{0}{1}.mp4".format(BUDGIE_FILE_PATH, budgie_filename[:-5])
                        #     ])
                        # except Exception:
                        #     print "[ERROR] Failed to convert {0} to mp4".format(budgie_filename)
                        # else:
                        #     subprocess.call([
                        #         'rm',
                        #         '{0}{1}'.format(BUDGIE_FILE_PATH, budgie_filename)
                        #     ])
                        #     budgie_filename = "{0}.mp4".format(budgie_filename[:-5])
                    except Exception, e:
                        print "[ERROR] Call to raspivid failed; could not take video ({0}: {1}{2})".format(e, BUDGIE_FILE_PATH, budgie_filename)
                    else:
                        client = TwilioRestClient(ACCOUNT_SID, AUTH_TOKEN)
                        client.messages.create(
                            to=requesting_phone_number,
                            from_=budgiecam_phone_number,
                            body="Video ready here: {0}{1}{2}".format(RASPI_IP, BUDGIE_WEB_PATH, budgie_filename)
                        )
                        context['response'] = '200'
                else:
                    try:
                        budgie_filename = '{0}.jpg'.format(''.join(['{0:02d}'.format(x) for x in time.localtime()[:6]]))
                        subprocess.call(['raspistill', '--nopreview', '-t', '5000', '-o', "{0}{1}".format(BUDGIE_FILE_PATH, budgie_filename)])
                    except Exception, e:
                        print "[ERROR] Call to raspistill failed; could not take photo ({0}: {1}{2})".format(e, BUDGIE_FILE_PATH, budgie_filename)
                        context['response'] = '500'
                    else:
                        client = TwilioRestClient(ACCOUNT_SID, AUTH_TOKEN)
                        client.messages.create(
                            to=requesting_phone_number,
                            from_=budgiecam_phone_number,
                            body="{0}".format(budgie_filename),
                            media_url="{0}{1}{2}".format(RASPI_IP, BUDGIE_WEB_PATH, budgie_filename),
                        )
                        context['response'] = '200'
            else:
                context['response'] = '401'
        else:
            context['response'] = '400'

        template = 'response.html'
        return render(request, template, context)
https://www.twilio.com/blog/2016/08/shannon-turner-builds-a-pet-cam-using-django-raspberry-pi-and-twilio.html
Hey I just started computer programming and our teacher wants us to make a mud game using if and else but I am completely lost. I was wondering if anyone had any suggestions to help me get started? Thanks

Lesson 2: If statements C++. You may also want to read some of the other tutorials there as well for actually making a mud type game.

Okay, thanks so much. Also do you have any tips on keeping the ifs and elses in order while I do it? Thanks.

Hm. I was just wondering, why do some codes have case and others have if and else? What does case do that is different?

This is what I have so far (not much at all... sort of the structure a little bit, I guess... still trying to figure out where things go, and how to fit answers in after the ifs and elses):

Code:
#include <cstdlib>
#include <iostream>
#include <string>

using namespace std;

int main(int argc, char *argv[])
{
    string name;
    int a, b, c;

    cout<<"Welcome to the Dungeon of Doom."<<endl<<endl;
    cout<<"Please enter your name."<<endl<<endl;
    cin>>name;
    cout<<endl;
    cout<<"Welcome, "<<name<<" to the Dungeon of Doom."<<endl<<endl;
    cout<<"You are walking down a dusty, dark, forbidding walkway when you reach a set of two doors. You hear wailing from one door and silence from the other. Which one do you choose? Enter 1 for the wailing door, and 2 for the silent one."<<endl;
    cin>>a;
    if (a==1)
    {
        cout<<"You chose the right door! You see a guard punishing a wailing prisoner, but because he is busy he does not notice you, which allows you to sneak by."<<endl;
        cout<<"Once you leave the room, you walk down another dark hallway with cages on both sides of you. You reach a choice to enter two cages; which one do you choose? The one with the skinny old man sleeping, or the one with a forbidding six-foot-four man? Choose 1 for the old man and 2 for the forbidding one."<<endl;
        cin>>a;
        if (a==1)
        {
            // if
            // else
        }
        else
        {
            // if
            // else
        }
    }
    else
    {
        cout<<" "<<endl;
        cin>>a;
        if (a==1)
        {
            // if
            // else
        }
        else
        {
            // if
            // else
        }
    }

    system("PAUSE");
    return EXIT_SUCCESS;
}

Lesson 5 - Switch Case

As for this project, how much C++ have you been taught? Do you know about file operations, arrays, functions, anything about the STL besides std::string?
https://cboard.cprogramming.com/cplusplus-programming/141084-need-some-help-dev-cplusplus-if-elses.html
New York Life, by Sasha
<div dir="ltr" style="text-align: left;" trbidi="on">Dear all,<br /><br />Most of you have noticed that my blog has been acting out (technically). The only reason I have is that it was visited by some kind of world web virus. I hope it's not the US government snooping on me :)<br /><br />Anyway, unfortunately, I need to re-direct my blogging activity to another blog of mine.<br /><br />Please follow me at my other blog New York Life:<br /><br />Thank you all, and meet you at my other home!</div>
Sasha R'nt Us
<div dir="ltr" style="text-align: left;" trbidi="on">Cheers all,<br /><br />So just as soon as I embarked on a journey to find my true path I was cornered by my mother and sister about having a child. They pushed me to the wall, pressed the gun to my uterus and demanded I have a baby right Now.<br /><br />Apparently I am running out of time to join the most exclusive club of motherhood. With every minute celebrating my glorious 30s, I am wasting my life away (according to my mom and my sister).<br /><br />Well that just pisses me off. Isn't it every woman's right to decide what she wants to do with her life? Why are we still marginalized by society into believing that the only way a woman makes a difference in this world is by procreating? Why are those of us who don't participate in increasing the already overcrowded planet looked at with pity at best?<br /><br />Why are we still alienated by our own lot (women) for not joining them in what sometimes seems to be a very disappointing and stressful experience?<br /><br />Just because some women find their purpose in having children doesn't mean others do.<br /><br />What about those women who are more conscious about the responsibilities motherhood entails? We understand that bringing a child to this world isn't just a bow to our feminine nature.
We actually think about the world we would have to bring a new life into, and how it's not the ideal world for a new life. We think about how most food these days is processed, toxic or genetically engineered, and obesity among kids is growing. We think about the polluted air and water, and melting arctic ice. We think about how corporations are controlling our lives. We think of all the civil wars taking place in the world. We think how corrupted our government is. We don't trust our society anymore.<br />Then we stress about our jobs that only give 60 days of maternity leave, and there is no reliable and affordable day care available. We get anxious just thinking about leaving our child with some stranger at a day care, and run to work to be able to pay for it. Then we worry that having a child will put our career at risk, just because it does.<br />We realize that we don't have "the whole village" to raise a child, we only have ourselves, and if we are lucky a reliable partner.<br /><br />If anything we are more responsible and practical about motherhood, and are fully aware whether we are ready to bring a new life into this world or not.<br /><br />I am not saying that having a child is completely out of the question for me and other women of my generation. But we are not driven by primal instincts, and when or if we decide to have a child it will be a deliberate decision based on weighted options and solid reasons, and God's will of course.</div>
Sasha I - Let it go
<div dir="ltr" style="text-align: left;" trbidi="on">Cheers all,<br /><br />Being somewhat an extremist, I tend to get carried away with things.
And although being this way pays off when a course of action is obvious, what happens if it's not?<br /><br />Since I started feeling unfulfilled working in the corporate world (about a year ago), I focused mostly on my dissatisfaction with the way things were. Now, I made a big mistake by focusing on what I didn't want in my life, what was driving me crazy, and seeing what was wrong with the life I created. What do you think happens when we focus on the negative?<br />That's right, we get more and more negative. As a result we shut down all our creative impulses, and get even more disconnected from our soul.<br /><br />When this happens, it's virtually impossible to connect to your inner wisdom, and hear that magical inner voice that knows it all! Well it definitely knows what's best for you.<br /><br />I started feeling the weight on my shoulders (literally - as it led me to getting chiropractic adjustments), and a negative outlook on my life resulted in physical pain. I started having stomach problems and severe back pain.<br /><br />I knew I went too far. I knew it wasn't the way I would find my authentic path, and live my passions. I knew I had to re-direct my focus again.<br /><br />And I did. First thing was to stop beating myself up. You are here now, and there is a reason for that. Just being aware that there is something more than having a job is truly amazing. It's like that quietness before the storm, that's impregnated with wild creative forces that are getting ready to be unleashed. It's magical.<br /><br />Second, I needed to focus on what is good in my life, which was plenty. I needed to remind myself how far I'd come. Acknowledging your own accomplishments is extremely important. Not only do we focus on the positive, we empower our ability to make big changes.
Once I looked back and truly reflected on everything I'd done so far, I realized how fearless and strong I was. That boosted my confidence level once again, and propelled me to set out a whole new set of goals.<br /><br />And that's where I am right now. Still not sure what my next step will be but at peace with where I am, and that is all I need. I know there will come a moment when all the stars will get aligned for me, and my inner ears will open, and my soul will speak clearly to me, and then I'll be guided to my right path. In the meantime, I'll do my best to stay at peace, and radiate my light no matter where I am.<br /><br />Namaste</div>
Sasha II
<div dir="ltr" style="text-align: left;" trbidi="on">Three years ago, as I was going through the second biggest love loss in my life thus far, I turned to writing. Shortly after, "New York Love" was born. Unbeknownst to myself, the following years became the most important to my personal and spiritual growth. I found what I was looking for all along. It was the power of love I already had inside me. I opened my heart and let love surround me from inside out. The results were amazing: I started attracting more positive and loving people into my life, the world was becoming friendlier by the day, and most importantly, a loving relationship I was craving finally entered my life.<br /><br />I thought myself the luckiest gal in this whole NY galaxy.<br />However, after a year of basking in loving bliss, I started feeling restless again. Now, I am not saying something was wrong. In fact, my life was never such a smooth sail before. However, I started looking beyond my own well-being. I started asking myself: What am I doing to make a difference in this world? What purpose do I have?<br /><br />I truly believe when we are in a healthy relationship we are encouraged to look beyond ourselves, to expand our reach.
So here I am, in a loving relationship with a partner who inspires me to look beyond myself, spread my love around, and find my true purpose.<br /><br />First, it comes as a shock to those of us who've spent all our twenties building a career in the corporate world, only to discover later that it was all wrong, completely off path. I do feel grateful for certain things that my career in the corporate world gave me: financial independence, wonderful people I met along the way, camaraderie, and the security of a monthly check. However, increasingly I start feeling withdrawn from its culture, realizing that there is more to life than working for someone else, longing to make a real difference in this world.<br /><br />I am sure in this age, a lot of people start feeling disconnected from their jobs. We are the most evolved society, and longing for authenticity is not a surprise or a rare occurrence these days. Yes, initially I was shocked to discover that after all these years of working on my career, I was actually drifting further away from my true purpose. I wasn't living my passions, I wasn't living my own life. Yet, I was grateful for this awareness. I had a glimpse of what my life would be like if I never worked a day in my life because I LOVED what I did. I started craving my true path.<br /><br />So here I am, embarking on a new mission to find my true calling, and inviting you to join me.
We all deserve to create our own lives, to find our own truth.<br /><br /><b>Summary</b><br />Mission: Find true calling, follow my bliss, make a difference;<br />Agent: Determined female in her early thirties, tired of working for someone else, ready to become her own boss;<br />Current situation: working in the corporate world; not being able to quit before another stable source of income materializes;<br />Resources: my own hunger for knowledge; inspiration from others; personal motivation;<br />Method: exploring various passions and methods to find a personally, spiritually and financially rewarding career. Action-oriented but still connected to the wisdom within.<br />Test control: documenting my endeavors on the blog, and drawing logical and relevant conclusions.<br />Start: start where I am and keep going.</div>
Sasha
<div dir="ltr" style="text-align: left;" trbidi="on">Cheers lovers,<br /><br />Today, I've taken an inventory of my blogging activity, which quite frankly shocked me. I hardly wrote anything this year.<br /><br />The slowdown was inevitable in some way, since I wasn't dating anymore. But the blog became a part of me in the last 3 years, it was my creative outlet, my spiritual outburst. Writing it and expressing myself helped me find myself and most importantly, accept myself completely. In the last year, I tried to convince myself that the mission was accomplished, and now I could move on to the next project. I would occupy myself with many other things, keeping myself busy. But in the end, I had to accept the truth - I get lost without writing, I get off track.
It's as if I am shutting down a very important part of myself, the part that is responsible for my creations, the one that connects me with my soul.<br /><br />So I am back.<br /><br />I am back to New York Love, back to you, and to myself.<br /><br />I am planning to set out a new mission for myself that will bring me back here over and over again.<br />I am planning a search for my true calling. For now that's all I am going to share, but tune in, all my fellow New Yorkers, those who are searching, those who are curious, for we are on a mission to find our true path. To connect to our soul, to find our passions, to embark on our own life journey.<br /><br />Namaste</div>
Sasha If...
<div dir="ltr" style="text-align: left;" trbidi="on">We stop wasting time when we stop asking ourselves "What if" - Sasha D.<br /><br />Cheers all,<br /><br />Recently I found myself breaking one of the cardinal "happiness" rules: asking myself "What if"?<br />What if I didn't come to NY? What if I stayed closer to my family? What if I went to Journalism school like I wanted? What if I pursued a "dream" career? What if, what if, what if....<br /><br />I am deeply aware that dwelling on this question not only makes one unhappy, it also makes one unproductive, unfruitful if you will. Yet sometimes the force is too strong to handle. I am a thinker after all.<br /><br />I've spent months searching my soul; although torturing my soul would probably be a more suitable reference here. I was trying to look back and imagine what would have happened, would I have felt more fulfilled, would I have felt more purpose then? I was relentless, I even got upset that it was probably too late to change anything now...
I was too hard on myself, without even realizing it. I wasn't my own best friend.<br /><br />What happens as a result of this?<br />Besides feeling dissatisfied with your life, and mounting stress, you start losing vision of your current life, vision of your future goals. You become detached from the flow of life, you stop listening to your soul. It's as if you gave your soul a time-out, blaming it for not being in the place you think you should be. The soul doesn't take it well, it gets sad, it cries, it eventually goes into a dark place.<br />Just writing these words makes me emotional for having been so disregarding and even awful to the most important essence of my whole being, to my soul.<br />But I finally was able to raise my eyes, to open my heart and notice what I was doing. Not a minute will be wasted on this useless questioning.<br />The truth is we'll never know what would have been if we had chosen one way over the other. It could have been better, or it could have been worse. We could have been more accomplished, or not. We could have been happier, or not. This is the question we'll never be able to answer, just like the question of life. It's not meant to be answered, for it is the essence of life.<br /><br />I stop, I change my course, I beckon to my soul, I realize that I've done the best I could, I accept myself. And then beyond, I thank myself for being where I am, for being strong, for being present. I thank myself for being alive, and being the light. I let my "what if's" float away on a non-returnable ship. I choose to live my life gracefully, even though I might never find the answer. I dare to look at the present, and perhaps the future. I let my soul tell me what IS.<br /><br />And I end up asking myself "What is"?
I am still a thinker...</div>
Sasha: Happy Day or a Wake-up Call?
<div dir="ltr" style="text-align: left;" trbidi="on">Cheers All,<br /><br />It's been a while since I came here, and it's been too long since I wrote. I've been letting my NYL blog quietly retire. Pause. I know I just dropped a big word there, the one that dooms the near ending. But life is about endings and beginnings, and everything has its own cycle. I feel, I know that NYL is nearing its end...<br /><br />Anyway, when something stirs in me, I have to release it, and I run back here.<br />Today, Friday in New York City (as in many other places I suspect), I've heard this phrase too many times to ignore it any longer. It seems that today, Friday, has become somewhat of a holiday based on all the "Happy Fridays" flying out of almost everyone's mouth at work and around me. I admit "Happy Friday" is nothing new, but did I ever hear it so many times before?! It seemed more festive today than usual.<br /><br />Anyway, it got me thinking. From time to time, my thoughts drift back to this "illusionary holiday", and it's had a considerable evolution over time. I must say it made sense when I was in school, as it seemed like I had no choice but to attend it. However, as soon as I left my parents' house (around 19), and started supporting myself completely, I could never quite grasp the meaning of a "Happy Friday".<br />I don't exclude the fact that working on Friday nights for a year while in college had something to do with it. But it's a mere moment of my life.<br />Ok, as any professional New Yorker, I had my 60 (sometimes 80) hour weeks. Now, however, Friday is officially my last day of work.
The reason I am sharing all this information with you is because I want to show that I've been on both sides of the table. And still I've always been deeply disturbed by the "Happy Friday".<br /><br />Why? Well doesn't it seem somewhat pathetic? It's as if we haven't lived for the first 5 days of the week, and only Saturday and Sunday hold salvation. Only during those 2 days can we finally enjoy our lives.<br />Too discriminating to Mon-Fri, and too much pressure on Sat-Sun, if you ask me.<br /><br />But seriously, isn't it too much of our time that we are just getting through? 5 days out of 7 is 71% of our time. Why are we spending 71% of our time in such a way that all we have is to look forward to the remaining 29%? And it's all over again, week after week, and so on.<br /><br />If it's true that we can't wait to get through the week, living from the weekend to the weekend, why not find ways to change it?<br />Is it our complacency that won't let us break this destructive cycle? "Happy Friday" people are not particularly miserable, but they are not happy either. It's as if they are serving time. Isn't life more than that? If you don't think you're living your life while at work, why not change it? Why not look for your path, pursue your passions? It could be as simple as building relationships with colleagues and finding meaning in any work. Work in itself could have a meaning.<br /><br />I don't want to be hard on those who love their Fridays, I myself get caught up in this trap sometimes (though rarely), but I do want to make them think. Make them wonder. When I started noticing that Friday seemed more exciting to me than other days, it made me think, What can I change? How can I make every day count? 
How can I live my life Monday through Sunday, no day wasted?</div>
Sasha
<div dir="ltr" style="text-align: left;" trbidi="on">Just because I am a woman...<br /><br />1) All women have an obsession, be it shoes, purses or jewelry. For me it's lingerie. What can I say, I am very private.<br /><br />2) I learned not to trust most women. They change their minds, they are unpredictable. I am a woman, I know.<br /><br />3) The hardest relationships I had were with women. And let me add, I am not a lesbian.<br /><br />Just because I am not a man...<br /><br />4) A man could be a good friend to a woman as long as she is willing to believe that they are just friends.<br /><br />5) Men will never understand women intuitively. And thank God!<br /><br />6) In the end, it's still women who hold the cards in a relationship. Not because we have a better poker face, but because we ourselves don't know what those cards mean.<br /><br />Just because I am a New Yorker...<br /><br />7) New Yorkers are not rude. It's just that we are always short on time, and being rude/ignorant is more time-efficient.<br /><br />8) We, New Yorkers, are not afraid of commitment. Living in the city is a relationship of its own which leaves us hardly any time to explore any other relationships.<br /><br />9) It truly is a "Love-hate" relationship with New York. Fortunately, those are consistently balanced. Just when you think you can't take it anymore, something amazing happens, and vice versa.
It's high and low on fast forward.<br /><br />10) New York is bigger than life. Nothing and no one owns it. New York owns us. When you come here, it's not about you anymore, it's about New York.</div>
Sasha: friend or foe?
<div dir="ltr" style="text-align: left;" trbidi="on">If you ask me what is the silent killer of any relationship or any dream, I'll tell you it's doubt.<br />Naturally, I dislike the feeling; it's the hardest to shake off, it creeps up on you and, if not uprooted, will ultimately erode the foundation of any creation.<br />The truth is we all face doubt at some point, be it in a relationship, work or any other life area.<br /><br />The question is how to deal with it?<br /><br />I wish I had a perfect answer that would apply to all. I don't. Yet I am willing to make an effort and look for a solution. Sometimes just the search for it is already half of a solution.<br /><br />First, I want to answer: Can doubt be good at all? Could it help us see something that we don't want to see? Could it be an indicator of an issue we are trying to avoid?<br />If we are still not sure, can we turn to trust? Trust could be the best cure for doubt. But how do you know that the prescription of trust is the right one, and not just a temporary pain killer to subdue the pain? And if it's the soul that is hurting (which most likely it is) will it swallow any "prescription" to just numb the pain?<br /><br />Not a big fan of artificial sedatives in any situation, I want to find a natural cure for the soul. <br /><br />Trust is good but it's still a forced feeling that depends on outside factors, it's fragile. I want to be cured from within. I want to be healed eternally.<br /><br />What clearly comes to me is Love.
I think my soul just whispered it to my heart. Love is natural, it's the core of our being, of life itself. When love is embraced, all pain is gone. Love soothes the soul from within as an internal, inborn light. Loving yourself first, embracing your fears and doubts, letting them fade in the light of love, letting the light shine through you, loving others. Letting the healing light of love wrap you holistically, and surround you protectively but open you lovingly. It's always within, waiting to rescue. It's like emergency care that never sleeps, that rushes in when called. We just need to remember the number to call. The soul knows the number, it's dying to call it. We just need to remember. We just need to surrender.</div>
Sasha - Green or desperate?
<div dir="ltr" style="text-align: left;" trbidi="on">Cheers my sexy readers!<br /><br />This week I've been going through all of the work-in-progress posts (the ones I started but never finished). Sometimes, we have a brilliant idea, we write it down, but then abandon it due to whatever reasons. I started feeling kind of bad for them (the ideas), I imagined them being so excited to be born, almost like little sprouts, but then being halted and neglected. I know, I can be very imaginative:)<br /><br />So I've decided to give my darling "scapegoats" a well-deserved right to live.<br /><br />This one is from a year ago. It happened during my workout at the Reebok Sports Club, when I accidentally (if there is such a thing) overheard another member talking to her trainer about relationships.<br /><br />The conversation went something like this: "How did you get married? You fell madly in love with your husband?" (personal trainer asking the woman). Her response: "No! We just kinda got used to each other, and then it was time, so we got married. It'd better be married than see what my single girl-friends go through these days." Trainer: "What do you mean?"
Woman: "They can't seem to meet a nice guy, so most of them go back to dating those they dated in the past but didn't want to settle for."<br /><br />Hmmm (said both the trainer and myself). I remember my reaction a year ago. Besides my natural curiosity for the subject, I felt sad. Isn't it like lowering your standards? It's as if a woman's image of herself suffered from not finding love, and she decided that all she could do is settle for the best available option. Really sad. And pathetic. I am sorry, but I have to say it. Because having a strong sense of herself is a woman's nature. She is a goddess who brings life to this Earth. Why do women forget about their Divine, and degrade themselves due to social misconceptions of being single? It seems that it's more acceptable to be married to a completely wrong person (and ruin the lives of both, plus the partners they could have made happy) than to be alone.<br /><br />I know it's not all black and white, and there are exceptions to all situations. But in this particular case, it seemed more like a desperate need to be with someone rather than be alone.<br /><br />Why are we so terrified of being alone? Why don't we love ourselves enough to feel complete? Lastly, how can we believe that we can find someone to make us happy before we are happy alone?<br /><br />It's been said and will be said many times by me and others: We need to fall in love with ourselves first. Become our own best friend and lover, and then, only then, feel compelled to share all this love we have inside. A woman glowing in her Bliss is irresistible. She is a Goddess, she is a wolf.
She doesn't recycle men, she finds her wolf to run with.</div>
Sasha
<div dir="ltr" style="text-align: left;" trbidi="on">Cheers lovers,<br /><br />So here we are saying goodbye to 2013. I believe it was a better year than the one before, or that's what we like to believe year after year. But we do have something to be grateful for, and it's not the royal baby coming into this world. It's our families and friends, it's those who've been with us through the year, who called us when we needed them, who held our hand, who made us coffee, who said they loved us...<br /><br />I'd like to end this year with some of my quotes. And in a New Year spirit, there will be 12 of them. <br /><br />1) A secret to happiness is selective memory. Remembering only positive moments, and forgetting negative.<br /><br />2) Standards are just someone's opinions powered by strong conviction.<br /><br />3) Having big ideas for my future is what makes me feel young.<br /><br />4) Sadness is happiness in a bad mood.<br /><br />5) All feelings are beautiful, and deserve to be equally loved. Only then can we see the lessons they hold for us.<br /><br />6) I love to take care of myself. It makes me feel like a real man who found his perfect woman.<br /><br />7) If a man wants to understand a woman, he should get a cat.<br /><br />8) A woman is like a cat. Even if she plays hard to get it doesn't mean she doesn't want to be petted.<br /><br />9) Again, a woman is like a cat.
She'll come around after you've stopped chasing her.<br /><br /><br />And here's my naughty (or mean:) side, and the reason why Santa passed my house this year:(<br /><br />10) More annoying than a pregnant woman is probably a bride (sorry! and I do admit there are rare exceptions applying to both).<br /><br />11) I think grudges are the worst feelings to hold inside. They are destructive to the soul, mind and body. They hold one captive from loving others and the world. They cloud the mind and judgement. Lastly, being happy is virtually impossible if one holds grudges.<br /><br />12) I don't like excuses. I can't even swallow them, let alone digest them.<br /><br />Happy 2014 to All. Let's not hope, let's make 2014 better than 2013. Simply being nice to others will make a difference. Love to All!</div>
Sasha Is Everything
<div dir="ltr" style="text-align: left;" trbidi="on">Cheers All,<br /><br />As most of you know, I am a big fan of yoga. The benefits are endless, but when I come across a teacher who touches my soul the experience is priceless. I see it as a true blessing.<br /><br />Exactly that happened last week. I wouldn't say he was the best teacher in the world, but the wisdom coming from him made me look inside and think. He shared many wonderful thoughts, but one of them touched me the most. While in my downward-facing dog, he dropped a wisdom bomb on me. Literally. It resonated with me right away, it was a light bulb moment.
Not to keep you in suspense any longer (just testing your patience), here it is:<br /><br />"How you do anything is how you do everything."<br /><br />Sounds simple at first, but not quite. I expect to some it might not make sense at all. Yet some would want to disagree. But if you dare to see the depth of this saying, you'll be able to connect the dots.<br /><br />Let me explain. The teacher was originally referring to a yoga practice. Specifically, to simple poses, basics so to speak. And what he meant was how you do any pose (small or big) is how you do all of them. If you give your best shot with a simple mountain pose you will strive to do the best you can with any challenging pose. He went further and transcended this wisdom from the yoga mat to life.<br /><br />Really if you think about it, if you approach any small task with passion and determination, you most likely strive for that same passion in other areas of your life. And vice versa. If you don't care, and just do a half-a**ed job in some areas, most likely that is how you approach other areas of your life. People who are known to do their best with anything do it with everything.<br /><br />I know some might disagree, but those who always strive to be the best they can be know what I am talking about. They also know when they are not giving their best selves, and deep down they know they are cheating themselves. I know it too well. Having always been the one who strives to improve herself, I am painfully aware when I am faking it. The feeling is so destructive to the soul that giving your best self is not just a solution but the only answer. <br /><br />And how I do anything will ultimately lead to how I do everything (no matter what it is).
That is why I wouldn't leave work at 5:57pm, why I wouldn't use excuses to call in sick, or work from home, why I wouldn't leave a yoga class during the final Shavasana pose, why I wouldn't promise unless I was 100% sure I'd keep it, why I wouldn't lie.<br />And it's simple, because my attitude towards anything will determine my attitude towards everything. Those 3 minutes won't give me anything but a pathetic excuse for my personal weakness. If I take the low road in one area, how can I expect to ride the high road in another? It's the standard you hold yourself to that determines how you approach anything and everything. <br /><br />Go beyond small and big. Go wholesome. Take the high road every time.</div>
Sasha is not a Woman, Woman is not a Man
<div dir="ltr" style="text-align: left;" trbidi="on">I know I am stating the obvious (the title) here, but you'd be surprised how often we forget this.<br />Let me explain.<br />How often do we imagine in our minds what a man should do, say or think. We women forget that men don't think the way we do, they don't feel the way we do, and they certainly don't understand why we get upset because they didn't react the way we wanted them to. I am sure it goes the other way around too.<br />But since I am a woman, it's easier for me to elaborate from a female's perspective.<br />Here's a scenario. Let's say we want him to be more loving to us. But instead of just coming up to him and telling him directly to his face what we want, we start saying stupid things like "I don't feel like you're present", "I feel distance", etc. In the meantime, what we really mean is that we want more loving, what we really want is for him to say "Don't be silly, I love you so much. Come here". That's what we women would do/say, wouldn't we?<br /><br />But men are not us. They take whatever we say literally.
So instead of opening his arms and wrapping them around us, he starts thinking that she's not happy with him; even worse, that he doesn't make her happy. And this is probably the most terrible thing he could feel in regard to a relationship, since he takes it as a direct accusation of not being enough. If she is not happy with me, I need to go.

So you see, instead of getting what we truly wanted, we got the absolute opposite. Very ironic to say the least, and it could be fatally damaging.

By trial and error (and unnecessary tears) I learned to forgo my "female mind tricks" and just simply say what I want. And what a relief; who knew it would be so easy? If it's a good and loving relationship, a partner will respond and give, and give. He wants to make you happy, and if you tell him how, he'll do it.

Relationships are real, they need understanding. But it won't come from projecting your personal assumptions. We need to get out of our heads and start seeing the other as an individual with their own thoughts and perceptions.

Good loving to all.

Sasha

Love

Cheers lovers,

So I've been thinking and wondering what direction I should take now that I am in a relationship with a man I love. The truth is I miss my blog, I miss writing about my thoughts, sharing my endeavors. What the hell, I miss talking about love and sex.

And then I realized, I don't have to end my blog. In fact, I should come back; I need to come back. I feel it's even more of my duty now that I'm learning what love is. You see, searching for love doesn't really end once we fall in love.
Quite the opposite: love keeps revealing itself more and more as we go along. Sometimes we lose ourselves or let our egos take control, but if we run back to love and let it rule our lives, we realize that it's being discovered every day, it's being experienced every moment. Love, in a way, gets a life of its own.

Yes, that's what I am beginning to learn. In the last year or so, I made mistakes, I chickened out a few times. But I am glad I was smart enough (and lucky enough to have a patient lover) to always come back to love and let it take its course. It wasn't as easy for me as I expected; in fact, I realized I didn't really know what love was till now. It's not what most of us think it is. It's not just chemistry and excitement. It's also about compassion, acceptance and giving. And I am only scratching the surface here.

So I've decided to start a new series about everyday love. I'll write about my thoughts and experiences on what love is, what it takes to keep it alive, and how to surrender to it.

Mistakenly we believe that once we find love, there is nothing else we need to do. Somehow all our problems and issues will disappear, and we'll live in a perpetual bliss of romance. Let me tell you, we humans also have egos and minds that like to sabotage (screw up, really) whenever we feel vulnerable or are giving control away. And being in love is letting yourself be vulnerable; it's about giving up control to the relationship. For most of us, especially the strong-willed ones, giving up control is not the easiest thing. Some of us have lost trust after a few disappointments. Some of us have closed off completely.

I've been a victim of my fears and doubts, I've let my ego mess with my life. But letting love in and surrendering to it was, in the end, the best decision I made in my life.

I know most people will relate to what I am sharing here.
And as I am learning myself how to love and be loved, I want to share it with you. For believe me, there is nothing more beautiful than to love and be loved. It opens our hearts, it tames our minds, and most importantly, it reveals our souls.

Welcome to my new series "Let Love Rule".

Sasha

come-back

Cheers lovers,

I know I've been avoiding my blog (and you) for months now. In a few posts during that time, I'd drop a line promising to explain my disappearance in the near future. "Near future" is subjective, but even I agree that in my case it took a very long time.

The time has come; the secret is ready to be revealed. I've got a lover. Call it a partner, boyfriend, man. I personally prefer "lover" for obvious reasons :)

If you recall, this blog's main objective was to find love. I started it 2+ years ago as a single gal getting over a painful break-up with my 2nd love. Instead of closing my heart and throwing the key into the ocean, I decided to open it and let it guide me to find love again. For 2 years, I shared with you my experiences, good or bad, mostly fun, sometimes even frisky... I dated, I ran away to Vegas, I tried many things, but mainly I was learning to be happy on my own.

And I did. Just last summer (July 2012), I realized the most important thing in life: Love is already there, inside your heart. No need to look for it, for it will be escaping you for as long as you do. It might be strong, but it's unobtrusive; it wants you to find it for yourself. It's waiting patiently and quietly. And when you do, it will whisper from your heart, it will fill your soul with love so complete that you'll never have to look outside yourself anymore. It will be your light and guide.

So I finally came to that point in my life. I realized how loved I was, how complete I was.
And it was then that I stopped looking for love outside myself. Not long after, we met, then we fell in love, and then started a relationship. So you see, I couldn't be as devoted to my blog anymore, for the objective had changed. In a way, the mission was accomplished.

I did miss my blog, for it'd become my child, the window to my soul. And I visited it now and then. But it felt different this time. I knew I had to take a new direction. Naturally, a new blog idea came along. This is coming soon.

But saying goodbye turned out to be harder than I thought. We'd been together for 2 years after all. So I still want to come here, I want to write, but my posts will be different. I know I'll want to share new experiences, inspire others, and most importantly, connect with my soul.

Sasha

don't people fly?

"Why don't people fly? Why can't we just get up and fly high above? I am thinking... I want to get away from here, I want to fly high above. From everything that is here in this world. Why can't I fly like a butterfly from one flower to another. Never stay anywhere, but always going there. There. And why do we live? What is life at all? What are we doing here before we go to the other world?"

I wrote that when I was 13. I didn't remember I wrote it. It was my mom who read it to me this morning from the diary I wrote a long time ago. She said she was reading it all day, and besides some surprise and maybe even fascination with what she read, she also had a "light bulb" moment. Finally I started making sense to her.
It's as if after all these years, she could finally see me.

I always knew I was different. But how did I know that I was SO different? If it took my mother this long to figure me out, how long does it take others to "see" me? Most people never will. She said only now could she connect the dots and understand why I was the way I was, reading all those books when I was very young, making my own path in this world...

Am I alone? How do I relate to others who don't understand that we are only visiting here and our lives are not ours but our souls'? Have we forgotten who we are? I have forgotten myself from time to time. But I still chose to remember. I don't want to forget. I still want to know why we can't fly...

Sasha

Cheers lovers,

I know it's been awhile. You've noticed that my blogging activity has greatly subsided. I'll explain why when the timing is right. Just hang in there.

For now, here are some of my pearls of wisdom :)

1) If a man lets a woman go, he doesn't have a place in her life.

2) A weak man will disappear after the first challenge. A strong man will pass all tests. A true man will stay.

3) A weakness in a man is a disguised disinterest.

4) Bad sex should be avoided. It's a disgrace to a female body.

5) A female orgasm is not a mystery, it's a happy discovery.

6) A female body should be approached as a lifetime journey.
It's always exciting, it's always new, it's always enticing. Above all, it really is about the journey, not the destination.

7) Don't try to hold on to someone (or something) that doesn't belong to you. You are not letting the one who does enter your life.

8) Don't be afraid to be silly. Life will thank you for that.

Sasha

Mentality

Cheers All,

So this week I've been a little underwhelmed by us, humans that is. The whole world just stopped when Kate (yes, that's her name) had a baby with Prince William. People were crying, screaming, gushing, sighing, jumping, going out of their minds. And why? Really, I kept asking myself why all this excitement? Many women have babies every single day. We are 7 billion and counting... Why all this drama around this particular baby?

At some point, I even felt betrayed for all the other women and babies in the world. Seriously, WTF?

Then I was thinking (as usual :), this basically shows how shallow our society really is. How can we be so taken by one woman who married one guy and had his baby, when half of the world is fighting for life? When some people have lost their homes for good (Syria), some have been fighting for their right to live all their lives (Israel), some have no right to decide even their own destiny (India, the Arab world), some are still being controlled and manipulated (Russia, China), some are being fed food that kills them (USA, Argentina), some are paying out-of-pocket, hard-earned money for a ridiculously overpriced World Cup stadium and getting nothing in return (Brazil), some are dying from malnourishment (Africa), etc.

I could go on for hours, and list every single country in the world that has a pressing issue (or issues) that deserves our undivided attention, if not action. Not to mention what we do to our Mother Earth.

I know I tend to go way deep in my thinking.
But seriously, is this how far we've evolved as a human race, that the whole world had to stop just because some woman (whom no one knew about a couple of years ago) had a baby? Or is it just the US that peed in its pants from joy? Well, according to many, many other sources, Canada and Europe have lost it too...

Just in time, the Economist (my main source) released a fascinating article last week on herd mentality. As I was reading it, it all began to make more sense. The article (the research conducted by scientists) argues that most people are conformists. As a human race, evolved into a society, we've embraced herd mentality more than any other quality. The research showed that we go to the restaurant that has the most people in it (not necessarily the one that serves better quality food), and we hire the person with more experience (as opposed to the one without, but more talented). I went further still: we do what others do (not what our souls desire), we buy things that are in vogue (versus what speaks to us most), we desire people who are popular (not the ones who light up our souls). Again, I could go on for hours.

So my deepest intention for the people of this world: Think for yourself, make your own decisions, react to your own passions, be yourself. Stop following everyone else. And especially in the world we live in now (when media and broadcasting get more and more useless and plain dumb), select wisely what you're watching and believing. Choose your own sources, think your own thoughts. Set an example for others to do the same.

Maybe then will we finally be able to accept our differences and let everyone thrive in this world.

Sasha

Finale

If you recall, Sandy (the super storm that hit the NY area back in November 2012) and I came close, almost dangerously close.
It's all described in my posts about Sandy (read under the label Sandy).

Although I wrote a lot about this experience, it was never quite finalized. I just couldn't bring myself to give it a final word, to let this dream go. It became symbolic. Going back there, seeing the damage Sandy caused, packing my things in a cold dark apartment, crying my heart out, seeing how my visions about this place and my life by the beach were passing me by, like big white birds in the sky. I am not going to lie, I was devastated.

So as soon as I settled back in the city, I tried not to think much about it. I focused on the positive: how lucky I was to find a place I loved, the place that made me and Josephine feel at home. We (especially my J) were happy. But a ghost of a shattered dream would haunt me now and then. I knew I hadn't let it go completely.

The truth is I didn't want to let it go. Dreams are like big loves to me, they are grand, they are deeply rooted. That's just the way I am.

Needless to say, I moved on with my life and almost never shared with anyone (almost anyone :) my internal struggle of letting it go.

Living close to Columbia University (the first place I stayed at when coming to NYC in 2001), I certainly sensed a deja vu moment. After 12 years of living in the city, I was back to where it all began. The circle was complete. Though I was different, I was home now. This sense of belonging helped me let most of the Sandy experience go. Going to Long Beach this weekend, however, helped me let it go completely. For the first time, I didn't feel sorrow; I was able to see it once again as a cool place I go to on the weekends to get some sun and see the ocean.

I learned many lessons from Sandy, and shared them with you in other posts. So here is my last one: Sometimes dreams don't come true.
But life goes on, and we will always be where we need to be, even if we don't see it yet.

The most important moment was when I realized and accepted that it wasn't my time yet to part with New York. We had more things we needed to do, great things. And New York had never been so clear and open with me as this time. It took me back with open arms, and gave me the best gift yet.

Sasha

Happy rainy Monday, my New Yorkers.

I was visited by my beloved quote muse this morning. Always a pleasant surprise. Here's what she whispered in my ear, or should I say, to my soul :)

1) How can we believe we know someone, when it takes a lifetime to learn about ourselves?

2) First we need to understand ourselves to be able to understand the other.

3) It's accepting uncertainty and our ever-changing nature that will help us go through life peacefully.

4) There is no such thing as right or wrong. Everything is just an attempt to find ourselves.

5) Love starts with oneself, and the more we become the person we love, the more possible it is to truly love another.

6) The biggest problem of all relationships is Ego.
And that's what we need to tame first to have a happy one.

7) Ego is one of the few things (if not the only one) that make us richer when we lose it.

Love to all, and as always, wishing you find love within first.

Sasha

Most Important Thing

Nite lovers,

So it just happened that today I did some heavy thinking. Yes, it happens more than I'd like to admit, but let me get away with this one as a rare occurrence :) just for once, please...

Well, I was thinking: what is the most important thing to me in this life? What is the only thing I'd need if the world turned upside down and there was nothing else to hold on to? What would be the only thing without which life no longer made sense? And in my mind, the answer was so clear, so obvious. I just want to always stay true to myself. Yes, that simple.

As I looked back at my life, I realized that the most important thing all my life was to be able to be authentic. It was only in those moments when I wasn't that I'd lose my way, I'd become doubtful, I'd make mistakes.

Then I was thinking: whatever happened in my life, as long as I was being true to myself, I'd always get through it. It'd all work out for the best, it'd all make sense in the end. And I don't mean just challenging times, although those require our authenticity the most. I also mean the best times, when we are happy and everything seems to be going our way. 'Cause even then, we know that it's all fleeting, and there are hard times lying ahead of us. Everything in life is really just a phase we are going through.

Life is not easy. Don't believe whoever says it is.
But we can make it worthwhile, and the only way to do it is to stay true to ourselves. Just think about it: from day 1 on this Earth to the very last day, the only person who'll always be there for you is really You. People come and go, things happen, but you always stay. By being true to yourself you choose the highest road. You choose what's best for you, and ultimately best for those around you. And the opposite: by not being true to yourself, you let things happen to you, you let others take control over your life. And life is nothing if it's not one's own life.

I know it's some heavy thinking for a late Friday nite. But don't disregard it, 'cause this is probably the single most important thing we need to learn in this life.

Sasha

character

Cheers All!

Although I am generally happy with my personal development and the way it's going, there are still some areas where I have challenges. Not that I couldn't deal with them, I just never really wanted to or tried to. For some reason, I gave myself some slack on them. Kind of like using my "I'm a human, I make mistakes too" card.

And honestly, it seemed all normal to me to flip out or lose my cool over certain situations. They would pass, and I would be back to my "highly spiritual" self. So they didn't seem like a big deal, except for the annoyance they caused in my life.

So since I got back from Hawaii (where I was in complete bliss with myself), I had been doing great. But it was easy. Everything was going smoothly, I had no problems. However, a couple of days ago, I was presented with one of those situations that would flip me out. Nothing serious, just inconvenience or annoyance with others (let's leave it at that). On autopilot, I lost my cool. And you know what happens when you lose your cool? I think it goes the same way for everyone.
Other things fall out of order, more irritating issues come up, and you get angrier and angrier, to the point you are ready for a week-long yoga retreat... So for a day and a half, I was a madman (well, a madwoman in this case, but who wants to associate madness with a woman? :).

And then I started thinking crazy stuff about other things in my life, 'cause that's what happens when you're not at peace inside. And let me tell you, our minds will jump on it and go on a wild ride that is very risky and hard to stop. I had to do something quickly. So yesterday, after yoga, meditation, reflection, etc., I realized I didn't have to react to those irritating situations this way. Basically, I'd look at them from the outside and keep my emotions completely at bay. Just observe them, if you will.

As soon as I realized it, my mind stopped racing, my heart slowed down, and I was able to smile again. But that's not all; it gets better. Then I realized I didn't actually do anything wrong, so there was no reason for me to be so hard on myself and go through unnecessary stress. I don't have to please everyone, especially because pleasing someone is usually driven by our ego. If we are true to ourselves and respectful of others, there should not be a situation where we need to go out of our way to please someone. It serves no one.

So, happy to admit that this time I finally got it. And what a relief, for those situations will present themselves now and then. It's life.
But it takes real character to remain calm and collected, and most importantly, stay true to yourself.

Love to All!

Sasha

I learned in Hawaii, II

Nite lovers,

My trip to Hawaii was so amazing that one post about all the lessons learned just wasn't enough... So here is part deux!

I've been back in NYC for about a week now, and even to my own surprise, haven't lost my vacay zen yet. Seriously! I haven't yet got mad at people who suddenly decide to stop on a busy Manhattan street, or those who move at a snail's pace and don't let you sprint at your normal New York speed, or the morning subway ride... Yes, I am very tolerant of "mere mortals" these days.

Based on what's been said, here is the 1st lesson:

1) Sometimes we just need to get away to find ourselves again.

What I realize every time I get out of NYC is how misleading (if not deceiving) my "NYC persona" can be. Let me explain. Being a happy and trusting, and loving (blah, blah, blah) person at heart, I do tend to toughen up in New York. It's just the way you live and survive in the city. Yes, I can even be mean (believe me!) if I have to. Though let me assure you, never bitchy, not a drop! But inside I am sweet, super sweet, and with all honesty, hate being mean. So once I am out of the "concrete jungle" and don't have a need for survival, I let my true self be. It's really magic: it takes about 3 days out of the city, and I am the happiest, most lovable person you'll ever meet. Being in Hawaii, a place of lovers and friends, I was able to let my loving authentic self come out and shine!
And God, how nice and easy it is. Once again, I realized how important it is to get out of the city just to let your most authentic self be.

2) Don't be afraid to travel alone. Be open, be yourself, and let events unfold naturally.

The 2nd part of my trip to Maui was solo. Never having traveled alone before, I was a little anxious in the beginning. I felt lonely and vulnerable the first night, even cried like a little child on her first day of school. After a long phone conversation with my man, I was able to calm down and set my mind on making the best of it. In a matter of 24 hours, I met 2 of the most amazing ladies from Seattle, with whom I connected on so many levels, and had the most amazing time for the following 3 days. If I hadn't been alone and open to meeting new people, I would have never met them.

Traveling by yourself is exciting. That's when your true inner self feels free and safe to come out. On top of that, you learn more about yourself than in any other situation; you experience yourself on a whole new level. There is nothing like it, believe me. I was happy to learn that my true inner self attracts people who inspire me. In this case, strong loving women who are true to themselves and lead fulfilling lives. Terri and Jen (the ladies I met) run marathons (yes, 26 miles and all), do triathlons, travel around the world, have successful careers, have great friends, eat healthy diets, inspire others, and so much more.

I would advise anyone to experience traveling alone.

3) Being away will give you a clear perspective on where you're standing in your life.

The goal of this trip was to get away from it all and find inspiration. I had started feeling unfulfilled and wanted to find some new directions on what to do. As if something was missing and keeping me away from fulfilling my destiny.

Being away, I realized what a good place I am at in my life right now.
Even if it's not the most creative and fulfilling life just yet, I am on my way! But right now, I am blessed to have a loving circle of people in my life: parents, lover, friends, Josephine, colleagues. I am blessed to be independent and free to make my own choices. And the blessings go on and on. So in the end, I was relieved to realize it's all good, life is good. Coming back to NYC was easy this time. I was happy and excited to come back to someone I deeply love.

As you see, it was one amazing trip for me. The trip that keeps on giving, and will stay with me for the rest of my life. I would encourage all of you to travel with an open mind and heart.

Aloha to all!

Sasha

Paddling

Aloha!

As some of you know, I'm in Hawaii right now. That's right, finally I am taking my well-deserved, extremely needed vacation. I've been here for about a week, and have been able to relax but also learn many different lessons that I know will carry me through life going forward.

Today, I want to share some of the lessons I learned from surfing. Sadly, I didn't have a chance to surf a lot (next time for sure!). Still, the experience was so amazing that only a few in life could be compared.

Lessons from surfing:

1) Don't let fear stop you, keep paddling.

At first, I was a little scared. I don't believe whoever says it's the easiest thing in the world. It is not. It takes courage and faith. Courage to get in the cool deep water with waves. Faith in nature and the Universe to be cooperative. I had butterflies in my tummy, but I went in. The butterflies turned into a healthy adrenaline and I got excited. Same thing with life. We let fear stop us from living a full, enriching life. And what for?
Fear stops us from having the most amazing experiences in life. From now on, I'll do what I did with surfing: I'll go in the water despite any fear.

2) Catch a wave, take chances.

Catching a wave is the most challenging part, and not every attempt will come to fruition. But that's no reason not to try. Again, same thing in life: what is life if we don't take chances? Go for it, make your move!

3) You will fall. This is part of learning.

To surf is to learn how to fall. Let me add, how to gracefully fall. It's inevitable. But once you fall a few times, you look at it differently. You get comfortable falling. Then catching a wave is so much easier, for you know how to fall if you have to. In life, so what if we fail or don't succeed on the first or 2nd, or 3rd try? We are just building resilience, we are learning. It gets easier as we gain experience.

4) Even if you fall, the board is there to get you up again.

As long as you're attached to the board, it's right there for you to climb on. In life, no matter how lost or desperate we might feel at some moments, there is always something that gives us support, that gives us strength. Always. Even if it's just faith. But we are never left alone.

5) You will succeed if you stand strong and find your balance.

Once you're riding a wave, you need to get up and find your balance. Couldn't be any more relevant to life. To have a fulfilling life we need to stand strong, we need to have balance.

And last but most fun :) Surfing is the closest thing to an orgasm. 'Nuff said!

Aloha to all!

Sasha

Cheers all,

I have some updates I wanted to share (just in case you were wondering what the hell is going on with me again :) It's been busy, and busy!
But tonite I will kiss the city goodbye and head to a tropical island till next weekend. I will surely write from there, as I see a lot of free time ahead. No seriously, it promises to be a nice and leisurely vacay. Exactly what my mind and body ordered!

Stay tuned, I have so much to share...

Love to all.

Sasha
*This email was sent Feb 5 2011 to both the contributors and fw-general mailing lists.*

Howdy!

Since 2007 the Zend Framework project has evolved based on proposals. The proposal process allows each community member to contribute to the Zend Framework, while ensuring that everybody from the community has a say in what a proposed component should look like, and whether or not they think it can be included in the framework.

We, the Zend Framework Community Review Team (just call it the cr-team), however, notice that there are a lot of old, abandoned proposals, and recognize that this diverts attention from the proposals that are actually worth a look. Therefore we have decided to archive all the seemingly abandoned proposals, and have set the cutoff at four months.

If your proposal is archived while you're still planning on putting it forward soon, feel free to restore it. This can be done by logging in to the wiki, editing the page (under 'page operations'), and dragging the respective page to the category it should be in. This will usually be 'Ready for review' or 'New'.

Just as a reminder, the proposal process works as follows (guess what, it's just four simple steps :)):
1. You think of a great, new, shiny component.
2. You write a proposal; as long as you aren't done with it, you can save it in the category 'new' (the default).
3. Once the proposal is ready, you can move it to 'ready for review' and announce it to the community on the mailing lists and IRC (#zftalk.dev).
4. When that is done, you can move the proposal to 'ready for recommendation', and contact us. We will then review it.
After that's<br /> done, it can usually be moved to the Zend Framework Incubator,<br /> followed by trunk (don't worry, we'll guide you through that!).</p> <p>For more details see <a class="external-link" href=""></a></p> <p>Please be aware that your component, if accepted, will have to meet<br /> the ZF 2 coding standards before it can be moved to trunk. This<br /> primarily means you should use namespaces, and may have to rename some<br /> classes.</p> <p>If you have any questions based on this email or otherwise intended<br /> for the CR-Team, please send an email to zf-crteam@zend.com .</p> <p>Kind regards,</p> <p>On behalf of the CR-Team,</p> <p>Dolf Schimmel<br /> – Freeaqingme</p> <p>P.s. this e-mail was cc'ed to fw-general because not everybody may be<br /> subscribed to zf-contributors.</p>
http://framework.zend.com/wiki/display/ZFDEV/Archiving+of+abandoned+proposals+(Feb+5+2011)
How to set the MouseArea for a certain part of the image?

For example, I have an image of 400x400 but the rectangle in that image is only 100x100. I just want to set a mouse area for that rectangle. How can I get it done in Qt/QML?

Just have the MouseArea as a child of the Image, and give the MouseArea a width and height of 100. If you need to place the MouseArea somewhere else in the image besides the top-left corner, use the x and y properties of the MouseArea. The example below draws a 400x400 blue rectangle. I have placed a 100x100 MouseArea in the top right corner. When you click on it, it changes the color of the rectangle. In your case, just replace the Rectangle with an Image:

    Rectangle {
        width: 400
        height: 400
        color: "blue"

        MouseArea {
            x: 300
            y: 0
            width: 100
            height: 100
            onClicked: parent.color = Qt.hsva(Math.random(), 1, 0.7, 1)
        }
    }

Thank you @stcorp, but what if I have an arc in an image and want to set the mouse area for just the arc? The requirement is to arrange the arcs to form a circle, and on clicking an individual arc I should perform a certain action; but on arranging the arcs, the images overlap.

- p3c0 (Moderators), last edited by @Nisha_R: Maybe what you are looking for is the masked mouse area. It is under <QtDir>/Examples/Qt-5.7/quick/customitems/maskedmousearea

I'm not sure how to do this myself, but take a look at that example, it seems like it could be a solution for you. (Edit: Oops, seems p3c0 already posted that solution while I was still searching for it :P)

You'll need a little bit of C++ to get this working. First, drop the maskedmousearea.cpp and maskedmousearea.h files into your project directory. Then in Qt Creator, right click on your project, click "Add Existing Files..." and add these two files to your project. Next you need to make some edits to your main.cpp file.
In the includes, add the following:

    #include "maskedmousearea.h"

Now register this new type with QML so you can use it in your QML code. This is done by pasting the following line into the main function of main.cpp (near the beginning of the function; it has to be placed before your QML is loaded):

    qmlRegisterType<MaskedMouseArea>("Example", 1, 0, "MaskedMouseArea");

Now it is ready to be used in your QML. Import it using:

    import Example 1.0

And then create the MaskedMouseArea like this:

    MaskedMouseArea {
        id: maskedmousearea
        anchors.fill: parent
        alphaThreshold: 0.4
        maskSource: parentimage.source
    }

But you should really have a look at the example if you want to see more about how it is implemented.
https://forum.qt.io/topic/75960/how-to-set-the-mouse-area-for-certain-part-of-the-image
Talk:Key:crossing

Contents
- 1 Old "Current usage"
- 2 Complex intersections
- 3 traffic_signals and uncontrolled vs island
- 4 Crossing a dual carriage way
- 5 How to represent crossings that go under or over a road?
- 6 Puffin Crossing
- 7 UK specific
- 8 button operated traffic signals
- 9 bicycle oneway
- 10 Tactile / audible signals
- 11 Island *or* Traffic signals ?
- 12 highway=traffic_signals instead of highway=crossing?
- 13 Crossing Ways
- 14 Use British English
- 15 crossing=no
- 16 Proposal: make "crossing_ref=zebra" less UK centric.
- 17 Proposal: crossing_ref=driveway
- 18 Add an example with cyclist only crossing
- 19 Node or line
- 20 Proposal: signed=no (was =yes)
- 21 Levels of sidewalk (and perhaps cycleway) and carriageway
- 22 Do not use crossing=unmarked
- 23 Island?
- 24 "traffic_signals"
- 25 Key:crossing
- 26 "crossing=no" confusion
- 27 Crossings at intersections
- 28 No unmarked crosswalk

Old "Current usage"

(Pasted here from the main page with modifications while we migrate)

Former/interim tag usage

While this proposal was being discussed, a hybrid scheme was employed whereby crossing=<uk-specific-name> was used in parallel with the new values. Because UK-specific names are unintelligible to non-UK-based mappers, the UK-specific 'shortcuts' are now deprecated. The table below is retained while we migrate to the new docs on the main page. The table below and its commentary were based on the values from the planet file, snapshotted during the evolution of the current proposal.
- Crossing=puffin and crossing=pegasus are missing from the list. Crossing=traffic_signals can be used where the precise type is not known. Crossing=uncontrolled to me would be a crossing of an unspecified type due to the lack of features. Crossing=island is rubbish. If there's an island, then there are at least 2 crossings. One crosses highways, not islands.
- However, it might be better for editors (like iD) to use the type merely to add correct subtags to a global standard from the UK-specific values. Toucan would add traffic_signals=yes and cycle=yes; Pegasus would add horse=yes and traffic_signals=yes, etc.

There were a small number of other values making up around 2% of the total at the time of writing. These were not, however, widespread enough to be documented. Please note that there is nothing stopping you from using a different tag value if it suits your tagging problem. However, please consider using tag values from the current approved proposal first, and always document any custom tags you use in accordance with OSM Good practice.

I'd propose a discussion as to whether type detail is really necessary. As has been mentioned elsewhere: is the fact that a footway crosses a highway sufficient to denote a crossing? (= crossing=unmarked). If the crossing has traffic lights, why not simply mark highway=traffic_signals at the crossing rather than specifying crossing=traffic_signals? The fact that a footway crosses a highway is again sufficient to denote a crossing without replicating the fact. I don't really think we need to distinguish between all the different types of crossing. For the driver, the traffic lights are significant, that's all. If it's a pegasus crossing, it'd be shown as a bridleway crossing a highway. Ditto Toucans being shown by a cycleway crossing a highway. Pelicans and puffins need no distinction. If 'uncontrolled' = no traffic signals, then specifying no signals is all that's needed. There may be value in crossing=marked for a Zebra. We do need to decide which details are necessary/useful and which are pure geekism. --pmailkeey 2016:3:28:16:46
- I'm not that familiar with all these UK animal types. However, it could be important for a routing engine to distinguish between a traffic light at a road junction and one at a pedestrian crossing, as they might have different average waiting times.
Zebra stripes are quite universal worldwide, and in many countries give special rights to the pedestrian; I'm quite happy OsmAnd announces them for me. Often a use for a tag comes once enough have been mapped, so it does not hurt to record such details. --Polarbear w (talk) 19:09, 28 March 2016 (UTC)
- Waiting times is an interesting one. I don't know whether routers take this into consideration, although I doubt it, as the overall delays to journeys will be small compared with other influences. Certainly in the UK light timings are set on a per-set basis, and one type of crossing in one place may not be the same as the same type elsewhere. Also, many traffic lights in the UK now have variable timings, so the timing at one set will vary depending on how busy the roads are at the time. The idea of a global system to try to work on this data accurately is going to be very difficult and not of a great deal of value. And that's back to the question of whether such detail as crossing type is actually of suitable value. --pmailkeey 2016:3:28:21:19

Complex intersections

What about an intersection that has traffic signals to control vehicles, uses the same traffic signals to control two crossings, has another crossing with a separate pedestrian traffic signal, and has no crossing on the fourth incoming road? Can (and should) this be properly tagged? --Richlv 12:14, 27 August 2008 (UTC)
- Maybe use more than one node for the cluster of crossings. If you're talking about a crossroads, I might do that as a central highway=traffic_signals node where the roads join, surrounded by three highway=crossing | crossing=traffic_signals nodes, one on each incoming way which has a crossing. Another way would be to use four highway=traffic_signals nodes, some of which have crossings and some of which do not. IIRC there's a proposal out there for using relations to link up fancy synchronised sets of traffic lights.
--achadwick 01:38, 4 September 2008 (UTC)

traffic_signals and uncontrolled vs island

According to the page, traffic_signals, uncontrolled and island are all values of a single 'crossing' tag. So how are we supposed to mark a crossing with traffic lights and an island? The road itself is not separated for driving directions. OK, it's in the FAQ (but mentioned virtually nowhere else...) - multiple values for a single tag.
- This proposal should help: Proposed_features/Traffic_island

Crossing a dual carriage way

Where there is a road with two carriage ways, how should one join the two crossing nodes? This should be done to ensure that routers can link the sidewalks of the two ways. --Mungewell 05:40, 25 September 2008 (UTC)
- Link both nodes with a footway/path. Example: --Cbm 06:13, 27 September 2008 (UTC)

How to represent crossings that go under or over a road?

A "crossing" is when two ways meet at the same level. If they do not have the same level, it's not a crossing but a bridge or a tunnel.

Puffin Crossing

It appears that most crossings round here are now puffin crossings. I think we should start marking them up as different from pelican crossings, using crossing_ref=puffin. --Pobice 23:09, 7 March 2009 (UTC)
- An additional tag like push_activation=yes should capture this much better.
- Also see: Talk:Key:crossing#New_Value_for_.22button-activated.22_crossings --Cbm 14:47, 9 March 2009 (UTC)
- The recommended tag is button_operated=yes/no Lulu-Ann

UK specific

Can anybody explain why some crossing types are marked as UK specific? I know all these types of crossings in Germany too, including the pegasus crossing. Maybe we can clarify that the terms are UK English, not the crossings. --Lulu-Ann 11:56, 28 April 2009 (UTC)
- You are right, so I removed the "UK specific" in the descriptions. (The shortcuts and names are still UK specific, of course, which should be clear from the table headings and the "('traditional' UK crossing name)" info.)
--Tordanik 12:34, 28 April 2009 (UTC)

button operated traffic signals

I'd like a tag to express that pedestrians have buttons to request crossing the street at a crossing=traffic_lights crossing. Spontaneously, I'd suggest button_operated=yes/[no] for this. Opinions? --Tordanik 13:55, 8 August 2009 (UTC)
- There is an "on_demand=yes" used somewhere else, as far as I remember. We can reuse it. --Lulu-Ann 14:27, 11 August 2009 (UTC)
- on_demand is a bit too vague, imo. It could refer to other technologies, such as inductive loops for cars or cameras observing (pedestrian) traffic. It should be possible to distinguish between the different types to identify "demand". --Tordanik 16:15, 11 August 2009 (UTC)
- Sure, then let's use on_demand=yes/button/induction/camera/whatever. --Lulu-Ann 17:26, 11 August 2009 (UTC)
- There aren't many uses of it yet (tagstat:on_demand). I'd prefer crossing:activation=<whatever>, but that's just me.
- Please can we establish some implications such as crossing=uncontrolled → crossing:activation=no or crossing=traffic_signals → crossing:activation=button while we're about it? --achadwick 11:39, 7 October 2009 (UTC)
- Implications should be avoided unless we can be sure that almost everyone takes them for granted without looking them up. "crossing=traffic_signals → crossing:activation=button" is therefore a bad idea; there are lots of traffic-signal-controlled crossings without buttons, and I don't see why the version with buttons should somehow be the default. crossing=uncontrolled → crossing:activation=no is obvious, so I don't mind if it is documented explicitly. --Tordanik 12:52, 7 October 2009 (UTC)
- Do we need to distinguish between pedestrian activation (crossing:activation=*) and road-traffic activation (traffic_signals:activation=*), or am I just overthinking it? --achadwick 11:39, 7 October 2009 (UTC)
- Yes, we do !
--User:Skyper 12:19, 6 November 2009 (UTC)
- There are crossings around here on busy streets which are purely timer-activated, but which still have a placebo button to help keep pedestrians in order. How to represent that? --achadwick 11:39, 7 October 2009 (UTC)
- We have traffic signal crossings with buttons, where the button only works (roughly) in the night time - at other times the indicator light for that button stays lit all the time (or at least lights up the moment the pedestrian signals turn red). button_operated=Mo-Su 22:00-06:00? Alv 14:10, 17 October 2010 (BST)

bicycle oneway

How can I tag a crossing where bicycles are only allowed in one direction? In Germany there are 3 different lights (one for pedestrians, one for bicycles and one for both). Sometimes you have different lights for different directions. By law you are only allowed to push your bike while walking on a footpath, but not to ride it. --Skyper 12:34, 23 September 2009 (UTC)
- You could use a oneway tag for the cycleway or path. --KartoGrapHiti 20:31, 3 November 2009 (UTC)
- Works for a line of crossing but not for a single one ! --Skyper 11:28, 5 November 2009 (UTC)
- If you have a crossing, you have a cycleway connecting to/departing from the road it crosses. Split that way if necessary, and apply the oneway tag. If that cycleway hasn't been drawn, then draw it. Alv 11:44, 5 November 2009 (UTC)
- I think this is a bit more complicated ! I have a highway=secondary with footway=both and cycleway=track. It is crossing a highway=residential with traffic_signals. There is no separate way ! --Skyper 13:09, 5 November 2009 (UTC)

Tactile / audible signals

Lulu-Ann would like to introduce:
- traffic_signals:sound=Yes/No - To indicate whether a sound signal is available to help blind persons detect the green light.
- traffic_signals:vibration=Yes/No - To indicate whether a vibration signal is available to help blind persons detect the green light.
achadwick would like it instead as:
- crossing:sound=yes - Audible crossing signal. The crossing emits a sound to indicate to blind or partially-sighted persons that it is safe to cross. The exact details are necessarily country-specific. Examples: US "Walk" recordings, UK-style beeping pelican crossings, German clicking or beeping traffic lights, Japanese folk melodies!
- crossing:vibration=yes - Tactile crossing signal. The crossing emits vibrations to indicate to deaf-blind persons or others with poor sight that it is safe to cross. The exact details are necessarily country-specific. Examples: vibrating cones near the activation button of some UK pelican crossings (picture), vibrating buttons (Ireland, according to Wikipedia).

My primary reason for changing the suggested namespace is that the signal is intended for highway=crossing users, not highway=traffic_signals users; thus it makes sense to extend crossing and not traffic_signals. Note that the crossing tagging scheme can start with either; nevertheless, for something intended to benefit road users crossing the road, crossing:* makes more sense. It's also shorter to type. We don't have consensus yet, so state your views! --achadwick 11:01, 7 October 2009 (UTC)
- My proposal refers to properties a set of traffic lights can have, not a crossing. You need a "walk"/"greenlight" period, which a zebra crossing just doesn't have. It fits with traffic_signals:on_demand=yes/no. See [2] (beta status) for rendering. --Lulu-Ann 11:18, 7 October 2009 (UTC)
- A crossing tagged highway=crossing crossing=traffic_signals isn't a zebra crossing. --Wynndale 12:13, 7 October 2009 (UTC)
- I think these tags refer to the crossing, because it is crossing=traffic_signals and not highway=traffic_signals --Skyper 12:28, 6 November 2009 (UTC)
- I think that a zebra crossing does not have signalling, I think an island does not have signalling, I think an uncontrolled crossing does not have signalling.
I do think a toucan crossing and pelican crossings are pedestrian traffic signals. So if it only applies to traffic signals, why have sound and vibration information at "crossing"? Having sound and vibration is simply an attribute of a traffic signal pole, not of a crossing. --Lulu-Ann 18:23, 9 February 2010 (UTC)

Island *or* Traffic signals ?

crossing=island and crossing=traffic_signals is not a contradiction. I don't want to tag crossing=traffic_signals;island or similar for each bigger crossing. --Lulu-Ann 13:44, 7 October 2009 (UTC)
- I completely agree - I've encountered that problem, too. My suggestion is defining a new yes/(no) tag for islands; crossing:island=yes might be a good choice. Of course, crossing=island would become rather useless, as it would be synonymous with crossing=uncontrolled + crossing:island=yes. --Tordanik 18:55, 7 October 2009 (UTC)
- I started Proposed_features/Traffic_island to fix this issue. --Lulu-Ann 18:32, 8 April 2010 (UTC)
- I have come to the conclusion that the best way to tag a traffic island is splitting the street. That gives you the chance to tag things where they are: sloped curbs, tactile paving, pedestrian traffic signals that are button operated... and also routing software will not try to have a car do a u-turn over a long traffic island. Lulu-Ann
- I admit that splitting the street is the only way to tag really complicated situations with lots of detail. I still think that crossing=traffic_signals + traffic_island=yes is better than crossing=traffic_signals;island, though. This isn't really affected by the fact that there is a more flexible (but more complex) solution - if mappers do use one of the two one-node solutions, they should pick the one without semicolons. --Tordanik 14:29, 27 August 2010 (BST)
- IMHO this is still an issue. In StreetComplete we (speaking for the community here) tried to include the crossing island in a quest.
However, the tagging does not work reliably in the way it is currently done, because you have no way to specify "there is no traffic island". When the "island" tag is not included, you can only guess whether a mapper did not check it yet or whether there is indeed no island. And an app can, of course, not do this…
- Probably a separate key (or crossing:island=yes/no) or so is needed to address this problem. If that could be made to happen, it would be nice. See for details/discussion. --rugk (talk) 22:24, 18 August 2017 (UTC)
- I added a description of this issue on the wiki Mateusz Konieczny (talk) 09:09, 29 October 2017 (UTC)

highway=traffic_signals instead of highway=crossing?

I've always mapped crossings as highway=crossing regardless of what different features they have. I think the current documented examples are quite clear, which I guess is good, but since when was a crossing not a highway=crossing? Some time back in August 2008 the examples on this page got changed to list most of them as highway=traffic_signals instead. -- Harry Wood 11:37, 26 October 2009 (UTC)
- I've used traffic signals when combining the crossing with the marking of traffic lights (controlling the road traffic) on said crossing (or if traffic lights at junctions also include crossings). I suppose an alternative is real micro mapping and mapping both. --Pobice 15:45, 26 October 2009 (UTC)

There are different combinations possible:
- highway=traffic_signals + crossing=no
- highway=traffic_signals + crossing=traffic_signals
- highway=crossing + crossing=traffic_signals
- highway=crossing + crossing=yes/island/zebra...

So it's the combination of highway and crossing that can specify each crossing best. highway is for the user "on the road"; crossing is for the user "crossing the road". --Cbm 15:45, 6 November 2009 (UTC)

Crossing Ways

What's the reasoning behind restricting crossings to being nodes? Any reasons you shouldn't use, say, a way and a node when working from the footpath perspective?
It feels misleading to draw and label a regular footpath over 99% of the crossing and label just the node as the crossing.
- Oh God. What a mess OSM is - which is understandable. OSM comprises two maps, a geographic one and a navigational one. A crossing appears on the navigational map - and navigational map features have no size - so to say 99% of anything on the navigational map is an indication of a failure to understand the mapping. The geographical map features are all areas (I think !) while the navigational map uses lines ('ways') and points (nodes). So crossings are a navigational feature and are nodes. -- pmailkeey

Use British English

"Curb" is AE; in BE use "kerb". Please change this in the whole wiki where you can find it, for consistency reasons. -- Dieterdreist 12:17, 21 September 2010 (BST)
- Thanks for that. I've updated the page to remove the discussion and add some annotations. As for "the whole wiki" - well, I'm not fussed about which variety of English is used on the English pages, but if it bothers you, please feel free to update it yourself. This en_GB speaker can cope with the occasional en_US word; the two varieties are very similar. Tag values should stay as they are though, since software may already use them and would also have to be updated. --achadwick 20:32, 11 November 2010 (UTC)

crossing=no

Is it intended that the 'crossing=no' tag should be applied to a way, rather than to a node? If so, then this needs to be stated clearly in the article. This is certainly the approach that makes most sense to me and would indicate that pedestrians should be guided to only cross the road at points where there are formal crossings, as tagged on appropriate nodes along the way. Pedestrian routing software could then route people along the pavement on one side of the road to the nearest crossing point and then back along the far pavement to the destination.
The inability to cross the road at other points may be due to the presence of barriers between the pavement and the road, or by signage, or possibly just due to the volume of traffic and number of lanes along the road (although that then becomes subject to the judgement of the mapper, which is never ideal). -- PeterIto 02:43, 13 June 2012 (BST)
- Both should make sense, nodes and ways, somewhat depending on whether the sidewalks are already drawn as separate ways. What I've come across several times (but can't find one right now) is a case where a footway (or even a highway=service) comes from an apartment building lot and connects with the road, and there's a roughly similar way that departs from the other side of the road at the same spot. However, there's a signposted crossing "nearby", so pedestrians must use the crossing and could, in theory anyway, be fined for crossing at the point where the ways meet - at least if that were to lead to a traffic accident. (For "nearby", here we only know that 30 meters or less is legally nearby, but we don't know how far away a crossing starts being "not nearby".) Alv 05:54, 13 June 2012 (BST)

Proposal: make "crossing_ref=zebra" less UK centric.

Zebra crossings appear not only in the UK. Therefore I propose to change "A crossing with no traffic lights, but with flashing amber globes on poles and distinctive white stripes on the road for pedestrians. (not button operated)" to "A crossing marked with distinctive white stripes on the road". A new UK-specific tag may be introduced, but it makes no sense to use a term popular worldwide for a narrow UK-specific meaning. Bulwersator (talk) 20:29, 29 October 2013 (UTC)
- I think that we shouldn't change the current meaning of a heavily used tag. crossing_ref=zebra is a quite well defined, high level tag describing the type of crossing (for pedestrians, stripes, no lights). If there's a need to tag just the marking of a crossing (existence of stripes) we should introduce a new tag for that. E.g.
traffic_sign=zebra or traffic_sign=stripes or traffic_sign=PL:P-10. Royas (talk) 21:22, 6 November 2013 (UTC)
- The problem is that it is also frequently used outside of the UK, and it takes a name that does not indicate that it is something UK-specific. Bulwersator (talk) 20:37, 8 November 2013 (UTC)

Proposal: crossing_ref=driveway

- Isn't that "crossing" just a part of the sidewalk, where motor vehicles occasionally happen to cross the sidewalk? It has even stronger protection for the pedestrian than what a marked pedestrian crossing provides. I wouldn't tag it as a highway=crossing, even, but only with the crossing_ref=driveway. Alv (talk) 07:14, 30 October 2013 (UTC)
- I am unsure about dropping highway=crossing - this one may seem obvious, but the border is quite fluid (some very similar driveways lead to factories/warehouses and carry significant traffic) Bulwersator (talk) 12:02, 30 October 2013 (UTC)
- "I wouldn't tag it as a highway=crossing" - you may want to mention this on - without highway=crossing or crossing=no, any road crossing with a footway will be reported by the validator as a problem in the next version of JOSM Bulwersator (talk) 12:02, 30 October 2013 (UTC)
- This isn't a crossing at all. The driveway forms a T intersection with the highway with a sidewalk. It's the same at any other intersection where sidewalks exist. Some such situations do have crossings, such as Zebra, and some don't. Those that don't are not marked at all, and this is how it should be with a driveway - not marked. --pmailkeey

Add an example with cyclist only crossing

I have no idea for a good value here, any ideas? Bulwersator (talk) 21:43, 29 October 2013 (UTC)
- IMO it's no different from an uncontrolled pedestrian crossing in the sense of identifying attributes, except that it allows cyclists. Thus crossing=uncontrolled, or actually in the photo that's crossing=traffic_signals.
Alv (talk) 07:20, 30 October 2013 (UTC)
- OK, I drop the "Proposal: new crossing_ref value for type of bicycle crossing in Poland" idea. But as there is confusion about how to tag a cyclist-only crossing, I still suggest adding this as an example Bulwersator (talk) 18:55, 10 November 2013 (UTC)

Node or line

coord = 53.071303 N, 8.801245 E

The infobox says that the crossing tag ought to be used for nodes only. On the other hand, it is recommended to draw the bicycle or foot passage of the crossing as a line. If it is a line, there is no reason to use the tags differently from those for bridges.
- I guess bridges and tunnels should also be nodes ! However, that would make it unclear which way went over the bridge and which went in the tunnel. Instead of tagging the line cycleway=crossing and then two, three or even four nodes highway=crossing + bicycle=yes + crossing=traffic_signals, it is more convincing to tag the line from the cycleway on one side of the road to that one on the other side highway=cycleway + crossing=traffic_signals --Ulamm (talk) 14:21, 28 August 2014 (UTC)
- I don't think "bridge-like" tagging should be used. When there is a physical separation between parts of a road (the requirement for having multiple ways), then there are usually also islands for the crossing pedestrians that make the situation more similar to multiple small crossings. This seems to be the case for the "Bismarckstr x Bennigsenstr" crossroads, for example. --Tordanik 22:49, 1 September 2014 (UTC)
- For the "Bismarckstr x Bennigsenstr" crossing that is true.
- But in the "Concordia" crossroads, some cycleway crossings and footway crossings cross one carriageway line and two tram tracks without an intermediate island.
- In Bremen, there are several traffic signal crossings for pedestrians only that pass two one-lane carriageways and, between them, a tramline with two tracks elevated by 20 cm with kerbs, without any island. --Ulamm (talk) 10:56, 2 September 2014 (UTC)
- Now I have tried to write a comprehensive concept, see Ulamm/Regulations and smoothness of crossings --Ulamm (talk) 08:46, 3 September 2014 (UTC) + Ulamm (talk) 10:06, 27 September 2014 (UTC)

Proposal: signed=no (was =yes)

In many situations, for example with snow-covered or badly maintained streets, the markings on the street ("zebra stripes") are invisible. However, crossings that are marked with a pedestrian crossing sign are still obvious. It seems to me that the presence of traffic signs is an important indication of how safe an unmarked crossing is. I propose the addition of a tag signed=yes|no for crossing=uncontrolled. --Pbb (talk) 19:50, 7 September 2014 (UTC)
- I think, if a traffic sign shows that there is a zebra crossing, it is a zebra crossing, legally, even if the zebra markings on the asphalt are faded out. --Ulamm (talk) 22:23, 5 October 2014 (UTC)

I approve this proposal. In Pittsburgh, most crosswalks don't have signs; however, many have signs, and some even have warning lights and signs (especially when not near intersections). Some way to tag signed crosswalks may provide an idea of how important/busy/dangerous the crosswalk is. --Abbafei (talk) 07:31, 15 January 2015 (UTC)

As most are signed, this should be assumed; hence we only need an extra tag for those that are not signed.
- Why not use crossing=unmarked in this case (because when there's no sign, there's no crossing), and add something like crossing=unofficial or =emergent when there are neither markings nor signs? --Zverik (talk) 11:31, 27 June 2016 (UTC)
- What would you do with a crossing that is marked but has no sign? What about a crossing that has a warning sign well in advance of the crossing but nothing at the crossing?
-- pmailkeey 2016:6:28

Levels of sidewalk (and perhaps cycleway) and carriageway

Advanced (modern) traffic authorities tend to raise the level of entrances of minor streets to the level of the sidewalk and (if there is one on the same level) of the cycletrack. In some places even whole intersections are raised.
- This way there is kerb=no.
- In Germany, cycletracks of major roads crossing entrances of minor roads nowadays have to be marked, but sidewalks at the same sites are not marked.

Where the minor road is recorded with one single line (as it mostly is) and the cycleway of the major road is drawn separately, it would be useful to put a highway=crossing on the crossing point of both lines. If neither the cycleway nor the sidewalk has its own line, the node may be set on the minor road near to the main road. Tags on this node should describe the standard layouts with as few tags as possible:
- Cycletrack + sidewalk on sidewalk level, cycle crossing with priority="right of way" for the marked cycleway, foot crossing unmarked: highway=crossing + crossing=priority ?? + ??
- Sidewalk only, crossing on its level: highway=crossing + ??
- Cycletrack + sidewalk on carriageway level, cycle crossing with priority="right of way" for the marked cycleway, foot crossing unmarked: highway=crossing + crossing=priority ?? + ramp=steep vs. intermed vs. smooth ?? + kerb=* (normally different values for bicycle and foot crossing) + ??
- Sidewalk only, crossing on carriageway level: highway=crossing + ??
--Ulamm (talk) 22:20, 5 October 2014 (UTC)

Do not use crossing=unmarked

A footway crossing a road without further markings does not require a crossing tag. That the footway crosses is sufficiently described by sharing a node with the road. Adding crossing tags here only confuses data consumers, such as satnavs warning motorists that pedestrians have special rights at particular crossing types, which is not the case here.
--Polarbear w (talk) 23:44, 6 February 2016 (UTC)

- But the point of mapping things is so that people know things are there. Unmarked crossings do exist and it's therefore valid that they are marked on the map. A footway may cross a road where there isn't an unmarked crossing. I do see your point though. However, from a mapper's POV, if the path just shares a node with the highway, it looks like there's a crossing that's undescribed (no detail); marking it as 'unmarked' clarifies this. Also, a shared node with tactile paving without specifying a crossing may look like a mapping error. Is it not valuable to motorists to be warned there may be pedestrians crossing the road when you generally don't expect it? Example -- pmailkeey 2016:2:7
- Sometimes, a crossing may be visible on the ground (due to lowered kerbs, tactile paving etc.), but still have no painted markings. In my opinion, that's a good use case for crossing=unmarked. --Tordanik 17:09, 7 February 2016 (UTC)
- In my example above, it's a 60 mph road without footways on either side, yet a footpath crosses the road. While I agree the path sharing the highway node would appear sufficient, due to the speed drivers do not expect pedestrians crossing, so I feel it's worth marking - as for that road, even one pedestrian crossing there would make it a relatively high pedestrian-use section of the road! -- pmailkeey 2016:2:7
- The main page recommends putting tactile paving on the node representing the kerb, and the dropped kerb belongs there as well. That's fine with me; both are aids for the blind and the wheelchair user to find or pass the kerb. This does not affect motorists. What bothers me is the inflationary use of crossing=unmarked in such situations, namely at every corner of a block when people are micromapping sidewalks. It causes my satnav, namely OsmAnd, to shout the crossings continuously, which annihilates the effect of a warning.
--Polarbear w (talk) 19:42, 7 February 2016 (UTC)

- I agree that tactile and sloped-kerb info would be better at the kerb node, as that would give clarity where one side differs from the other. With the editor iD, the crossing key automatically asks for tactile/slope information - and this perhaps ought to be removed. I also agree that crossing=unmarked should not be used where roads cross each other, as that is common sense! -- pmailkeey 2016:2:7
- If lanes of a road are shown as separate ways when a kerb barrier or two lies between them, why shouldn't sidewalks? TBH I'd prefer to map every lane as a way, as it'd add clarity and convenience at junctions. Maybe we'd need a new node type - a 'group node' comprising n linked nodes, one per lane. As it is, it's not difficult already to find sidewalks separately mapped - especially near crossings. -- pmailkeey 2016:2:8
- Please, do not. There is a strong consensus in OSM that all lanes are tagged on one way unless the road is physically divided. Changing that will not only create chaos at junctions, it will also break routing. Finally, it is off topic on this discussion page. --Polarbear w (talk) 17:36, 8 February 2016 (UTC)

Island?

When is a crossing considered to have an island? According to the wiki page, a crossing tagged with crossing=island is "a crossing with a small traffic-island for pedestrians in the middle of the road." If a crossing crosses two oneway roads, which some bigger roads are sometimes modelled as, with an island in between the roads, should the crossing still be tagged with crossing=island considering the fact that the island is now in between the two roads? —Kri (talk) 23:10, 25 September 2015 (UTC)

- Crossings don't have islands. It's either a single crossing or there's more than one crossing to cross all the highways. --pmailkeey
- I don't think OSM changes the rules just for this occasion - I certainly don't.
If you can't drive across the island, then two ways is the way it has to be according to OSM. --pmailkeey 2016:3:28:16:30

- Typically roads in OSM are not divided for such small features as islands. Nodes like 2284295589 are perfectly fine. --Klumbumbus (talk) 11:50, 9 July 2016 (UTC)
- The OSM database is not only used to draw a graphical map, it is also used for routing and navigation. Splitting a road for a minor feature makes mapping other things more complicated, such as mapping turn:lanes, thus the community has moved away from minor splittings. Similarly, a cycle track is added as an attribute to a highway and not drawn separately, despite having a kerb. --Polarbear w (talk) 14:53, 9 July 2016 (UTC)
- Getting back to the original question, I would not expect crossing=island to refer to an island area that has been explicitly mapped with geometry. It seems to imply a "mini-island" in the highway (in the same sense as how a mini-roundabout differs from a roundabout; see Traversable). The doc should be more clear about this. The tagging could be better as well. Mrwojo (talk) 19:33, 9 July 2016 (UTC)

"traffic_signals"

I've been editing crossings with =traffic_signals for quite a while now but have only just found out OSM is using "traffic_signals" to mean pedestrian signals. No one else in the UK does this, and traffic_signals means signals for motor traffic only. If you want to tag a crossing with pedestrian signals, use that terminology; don't use 'traffic_signals', as clearly you will confuse everyone. pmailkeey 2016-07-08

- highway=traffic_signals is used for the main traffic lights for motor traffic, typically on a road junction. The combination of highway=crossing + crossing=traffic_signals is used for a pedestrian-only crossing of a road that is regulated with traffic lights, i.e. red and green lights.
OSM needs to unify tags independent of local terminology because it needs to be understood worldwide, and it needs some structure for efficient processing. --Polarbear w (talk) 21:40, 8 July 2016 (UTC)

- If that means a typical crossroads with traffic lights where the pedestrian crossings have simply been added to the junction arms, such that the pedestrian crossings are neither Puffin nor Pelican, then I agree - but this doesn't appear to be how the wiki explains it. pmailkeey 2016:7:8:22:6 GMT

Key:crossing

This topic needs a proper discussion as it is a complete mess.

- Why do we have it at all when other way crossings are not tagged specifically?
- Terminology is all over the place
- Unclear what purpose it is supposed to have
- Kerb data issues by having all data on one node
- Throws up multi-node issues that are currently badly/not handled
- Appears to contradict OSM mapping wrt 'islands'
- Editors not in agreement with 'OSM' in tagging conventions

pmailkeey 2016:7:8

"crossing=no" confusion

It appears some people are finding it difficult to accept tagging both highway=crossing + crossing=no, despite the fact that's the way it's explained in the wiki. It is a 'positive negative' to remove doubt in cases where something would be expected but is not there. crossing=no is, by definition, a sub-tag of highway=crossing or railway=crossing - so these tags need to be included. (according to history by user Pmailkeey001 10:58, 9 July 2016)

- Adding crossing=no to highway=crossing is a trolltag. The tag highway=crossing implies that there is a crossing and data readers should not be expected to have to read anything else. --Andrew (talk) 11:09, 9 July 2016 (UTC)
- "despite the fact that's the way it's explained in the wiki" - Before your edits it was explained that this combination does not make sense. (Please sign your comments with --~~~~) --Klumbumbus (talk) 12:06, 9 July 2016 (UTC)
- Stop edit wars such as undoing reverts from experienced users.
The wiki documents how people use particular tags. You cannot turn an agreed definition on its head just based on your personal opinion, or just because you understand something differently. You have already been banned from the Tagging list a year ago; this can happen here as well. --Polarbear w (talk) 15:06, 9 July 2016 (UTC)

- Currently the relevant section reads "As crossing=no excludes the existence of a crossing, the combination of highway=crossing and crossing=no is invalid." What it doesn't say is "whether you are allowed to cross the road at a crossing=no". It'd be great if someone who claims to understand this page could edit it to say what it is supposed to mean. Should it cover (where there appears to be infrastructure, but no crossing as such) or (where there may be no infrastructure at all)? --SomeoneElse (talk) 21:33, 1 August 2016 (UTC)

Crossings at intersections

Parts of this page don't seem to have in view the case of marked pedestrian crossings at intersections. For example, for crossing=traffic_signals, the page says "Position this tag where the crossing-traffic (pedestrian, bicycles) have their own traffic lights," which seems to exclude the case when pedestrians use the same signals as cars. And the page doesn't say how to map crossing=no; do you put it on the intersection node or add another node at the location where the no-crossing sign is located, just past the intersection? And it doesn't seem to have in view the use here of putting crossing=no at an intersection where crossing is allowed in order to direct pedestrians to the separately marked crosswalks. Germyb (talk) 19:25, 27 August 2017 (UTC)

No unmarked crosswalk

In some US states, if two roads intersect at an angle (i.e., a significant angle), then there is generally an unmarked crosswalk, but if you have two one-lane roads that merge into one one-way two-lane road, then there's no "intersection" and no unmarked crosswalk.
Pedestrians can still cross if they want to in the same way they would normally cross a street away from an intersection, but cars do not have to yield the right of way to them. Is there a way to mark such meetings of roads? crossing=no would indicate crossing is not legal. If I don't mark it at all, it may be assumed that there is an unmarked crosswalk. Does crossing=unmarked imply that cars must yield? Germyb (talk) 19:47, 15 September 2017 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Key:crossing
Hello,

I need to write code for JPEG encoding. I got the sample source code. To write my client app, I need to provide the parameters. I am struggling with providing one parameter, i.e. the ImageDataPtr class member:

UIC::ImageDataPtr dataPtr;
dataPtr.p8u = // what do I need to provide here?

Could you please suggest? Regards, sathish.

Hi Sathish, as far as I see it's a kind of "mutable" pointer to an image data buffer. Basically, you need to typecast your buffer pointer to the requested data type, for example:

dataPtr.p8u = (Ipp8u*)dataPtr.m_myBufferPointer;

I cannot say more without a clue of your implementation specifics.

Hi Sergey, thank you for the reply. What is the image data buffer here? What information do we need to store? Do we need to store the pixel values in the buffer? Actually I am providing it like this:

CStdFileInput inFile;
UIC::BaseStream::TStatus tStatus;
UIC::BaseStream::TSize nCnt;

tStatus = inFile.Open(m_cpIfnName);
if (UIC::BaseStream::StatusOk != tStatus)
    return false;

const int nSize = m_nInWidth * m_nInHeight * m_nInBands;
ptr1 = new unsigned char[nSize];
tStatus = inFile.Read((Ipp8u*)ptr1, nSize, nCnt);
if (UIC::BaseStream::StatusOk != tStatus)
    return false;

But this code is not working for all the input files. I have different file formats like .tiff, .img and so on. Will this Read() method work for all the file formats? Or what info do we need to store in that buffer? Could you please comment. Regards, sathish.

The list of image formats supported by UIC depends on the version of IPP samples used for prototyping. In IPP 7.0.x samples the list was as follows: bmp, dds, jpg, j2k, jxr, png, pnm, tiff. For IPP 7.1.x samples only bmp, jpg, j2k, jxr and png are supported. If you want, you may take IPP 7.0.7 samples from for the TIFF codec. You need the uic/src/codec/image/tiff/* source files from that package. Regarding other image file formats, UIC doesn't support them. You may create your own codecs based on the BaseImageDecoder abstract class.
The BMP file decoder can be taken as an example of a simple image decoder. Look into the UIC::OwnBMPDecoder::ReadHeader and UIC::OwnBMPDecoder::ReadData functions of the BMP decoder.

Yes, IPP supports only a limited set of formats. Here I am doing an encoder (writing any raster file's info into JPEG). In this case how do I provide the input buffer? What information will it have?

dataPtr.p8u = (Ipp8u*)dataPtr.m_myBufferPointer;

What information will m_myBufferPointer have here? Regards, sathish.

Hi Sergey, I don't have any experience with Intel IPP. I am struggling with providing the input buffer. Could you please help me in this regard. Regards, sathish.

OK, let's look at how it's done in the BMP encoder, as the simplest image encoder. In fact, you don't need UIC::ImageDataPtr objects to do image encoding. There is too much unnecessary C++ code in UIC, which can be avoided in real applications. In BMP's UIC::OwnBMPEncoder::WriteData() function the usage is:

Ipp8u* ptr;
...
const ImageDataPtr* data = m_image->Buffer().DataPtr();
...
ptr = (Ipp8u*)data[0].p8u + step * (height - 1);
...
for(i = 0; i < height; i++)
{
    status = m_stream->Write(ptr - i * step, bmpwidth, cnt);
    ...
}

I think, for example, that there is no need for the local "ImageDataPtr *data" object, if using:

Ipp8u* ptr;
...
ptr = m_image->Buffer().DataPtr()->p8u + step * (height - 1);
...
for(i = 0; i < height; i++)
{
    status = m_stream->Write(ptr - i * step, bmpwidth, cnt);
    ...
}

Then, the ImageDataPtr object comes from the UIC::OwnBMPEncode object as the return type of the DataPtr() function. This is just to make the DataPtr output universal for all possible image data types. If your image codec deals with "unsigned char" pixels only (for example, "unsigned char" RGB - one byte for R, one for G and one for B), you could define DataPtr as "unsigned char *DataPtr()" and would have no need for the ImageDataPtr structure (union). Actually, I don't understand how you got stuck on ImageDataPtr in your code. Suppose you have your image raster data in a buffer of "float" pixels.
You can create an ImageDataPtr object at any time with:

ImageDataPtr myDataPtr;
...
myDataPtr.p32f = myFloatRasterImageBuffer;

If you like, we can continue the discussion in a private thread ("Send Author A Message" button) to keep the details of your code confidential and to not clutter the forum threads.

Now I am understanding a little better. Here m_image->Buffer().DataPtr() means pixel data. Please confirm. If that is the case, how do I fill the pixel info into the buffer? Actually I have pixel values row by row. I would like to continue the discussion in the private thread, but when I click on the "Send Author A Message" button it shows "No messages available". Regards, sathish.

Sathish S. wrote:
    here m_image->Buffer().DataPtr() means pixel data. Please confirm. If that is the case how to fill the pixel info into buffer. Actually I have pixel values row by row.

Exactly.

Sathish S. wrote:
    I would like to continue the discussion in the private thread but when I click on "Send Author A Message" button it showing it as "No messages available".

Oops, this is a forum engine problem. Sorry.

Thank you for the confirmation. Actually I am using an input image with the following properties: width = height = 9240, and a grayscale image, i.e. nOfBands = 1, so:

unsigned char* dest_buff = new unsigned char[width * height * nOfBands];

In our code we have some functions that will return the pixel data for each row. How do I fill dest_buff row by row? Could you help me in this regard. Regards, sathish

I did the development for 8-bit, single-band image data. Now it is working fine. But I am struggling with 3-band data, because I get the info layer by layer. How should I provide this info to the ImageDataPtr? Could you please guide me in this regard. Regards, sathish.

In fact, there is no difference in ImageDataPtr usage between 1 byte-per-pixel and 3 bytes-per-pixel formats. In both cases, with the .p8u union data member you point to the first byte of your image data buffer.
If you get image color planes in different buffers, each of them of [width*height*1] size, you can combine them into a single image data buffer of [width*height*3] size using the "ippiCopy_8u_P3C3R" function before encoding.

Thank you so much for the information. How do I use the ippiCopy_8u_P3C3R function? I didn't find the help for this function. Could you please provide sample code for that function's usage. Regards, sathish

If I am not wrong, the usage should look like this:

const int width = 1920, height = 1080;
Ipp8u red[width * height * 1];
Ipp8u green[width * height * 1];
Ipp8u blue[width * height * 1];
Ipp8u combImage[width * height * 3];
Ipp8u* pColorPlanes[3] = { red, green, blue };
IppiSize roi = { 1920, 1080 };
ippiCopy_8u_P3C3R(pColorPlanes, width, combImage, width * 3, roi);

Hi Sergey, here is my code. This code is working correctly for single-band uint8 image data, but I need to make it work for 3-band uint8 data. I used the function ippiCopy_8u_P3C3R() to combine the three layers' data into a single buffer, but my output image does not match the input. Could you please look at the code and let me know my faults?
UINT64 numberOfPixels = numberOfBands * width * height;
UINT8* data = emsc_New(numberOfPixels, UINT8);
int cnt = width * height;

if (numberOfBands == 1)
{
    for (UINT32 band = 0; band < numberOfBands; ++band)
    {
        data = (UINT8*)(prs->datalayer[g_vbands[band]]->data);
    }
}
else
{
    test1 = (UINT8*)(prs->datalayer[g_vbands[0]]->data);
    test2 = (UINT8*)(prs->datalayer[g_vbands[1]]->data);
    ...
}

UIC::BaseStream::TStatus tStatus;
UIC::BaseStream::TSize nCnt;
if (UIC::BaseStream::StatusOk != tStatus)
    return false;

int du = (((PRECESION & 255) + 7) / 8);  // here #define PRECESION 8
int LineStep = m_nInWidth * m_nInBands * du;

// construct and init the jpeg encoder
UIC::JPEGEncoder uicEncoder;
UIC::Image srcImg;
ExcStatus resStatus = ExcStatusOk;
uicEncoder.Init();

// attach the output stream with output buffer to the encoder
CStdFileOutput outFile;
tStatus = outFile.Open(m_cpOfnName);
if (!BaseStream::IsOk(tStatus))
    return false;
resStatus = uicEncoder.AttachStream(outFile);
if (resStatus != ExcStatusOk)
    return false;

JCOLOR dstClr = JC_UNKNOWN;
switch (m_nInBands)
{
case 1:  dstClr = JC_GRAY;    break;
case 3:  dstClr = JC_RGB;     break;
case 4:  dstClr = JC_CMYK;    break;
default: dstClr = JC_UNKNOWN; break;
}

// attach the input source image with input buffer
UIC::ImageSamplingGeometry samplingGeometry;
samplingGeometry.SetRefGridRect(UIC::Rect(UIC::Point(0, 0), RectSize(m_nInWidth, m_nInHeight)));
samplingGeometry.ReAlloc(m_nInBands);
samplingGeometry.SetEnumSampling(UIC::S444);

UIC::ImageDataOrder imageDataOrder;
imageDataOrder.ReAlloc(UIC::Interleaved, m_nInBands);
imageDataOrder.PixelStep()[0] = m_nInBands;
imageDataOrder.LineStep()[0] = LineStep;
imageDataOrder.SetDataType(T8u);

UIC::ImageDataPtr dataPtr;
dataPtr.p8u = data;
srcImg.Buffer().Attach(&dataPtr, imageDataOrder, samplingGeometry);

srcImg.ColorSpec().ReAlloc(m_nInBands);
UIC::ImageEnumColorSpace srcClrSpace = m_bGrayScale ? UIC::Grayscale : UIC::BGR;
srcImg.ColorSpec().SetColorSpecMethod(UIC::Enumerated);
srcImg.ColorSpec().SetComponentToColorMap(UIC::Direct);
srcImg.ColorSpec().SetEnumColorSpace(srcClrSpace);
for (int i = 0; i < m_nInBands; i++)
{
    srcImg.ColorSpec().DataRange().SetAsRange8u(255);
}

resStatus = uicEncoder.AttachImage(srcImg);
if (resStatus != ExcStatusOk)
    return false;

JSS sampling = m_bGrayScale ? JS_444 : JS_422;
resStatus = uicEncoder.SetParams(JPEG_BASELINE, dstClr, sampling, 0, 0, 75);
if (resStatus != ExcStatusOk)
    return false;

// encode the image to the output file
resStatus = uicEncoder.WriteHeader();
if (resStatus != ExcStatusOk)
    return false;
resStatus = uicEncoder.WriteData();
if (resStatus != ExcStatusOk)
    return false;

outFile.Close();
return (true);

Could you please help me. Regards, sathish.

Hi Sergey, could you please comment on my above email. Regards, sathish.

... {
    pSrc8u[0] /*test1*/ = (UINT8*)(prs->datalayer[g_vbands[0]]->data);
    pSrc8u[1] /*test2*/ = (UINT8*)(prs->datalayer[g_vbands[1]]->data);
    pSrc8u[2] ...
}

What do you mean by "output image is not matching with input"? After decoding you get an incorrect image? A distorted picture? Wrong colors? Could you go step by step through the "uic_transcoder_con" sample, tracking the same operations you want to implement?

Actually, I am getting wrong colors and also incorrect pixel values for layer 3. I have hardcoded the three layers' data with the following:

memset(pSrc8u[0], unsigned char(0), cnt);
memset(pSrc8u[1], unsigned char(0), cnt);
memset(pSrc8u[2], unsigned char(0), cnt);
status = ippiCopy_8u_P3C3R(pSrc8u, width, data, width * 3, roi);

In the output file, I am getting the following pixel values:

Layer 1: all pixel values are zeroes
Layer 2: all pixel values are 135
Layer 3: all pixel values are zeroes

Actually, it should display all the layers' pixel values as zero, if I am not wrong. But why is layer 2 showing 135?
Here I provided the hardcoded values and provided the buffer to the ImageDataPtr. Any idea why it's behaving like that? Regards, sathish.

Forgot to add the parameters for the above test:

resStatus = uicEncoder.SetParams(JPEG_BASELINE, JC_YCBCR, JS_444, 0, 0, 75);

Could you help us in this regard. Regards, sathish.
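The planar-to-interleaved copy at the heart of this thread is easy to illustrate outside of IPP. Here is a minimal Python sketch of the semantics (not the IPP API itself); the tiny 2×2 planes and their values are made up for demonstration:

```python
# Three separate color planes, the way the thread's per-layer buffers hold them.
width, height = 2, 2
red   = [10, 11, 12, 13]
green = [20, 21, 22, 23]
blue  = [30, 31, 32, 33]

# Interleave into one R,G,B,R,G,B,... buffer, which is what a 3-band
# interleaved image (PixelStep = 3, LineStep = width * 3) expects.
interleaved = []
for i in range(width * height):
    interleaved.extend([red[i], green[i], blue[i]])

print(interleaved)
# [10, 20, 30, 11, 21, 31, 12, 22, 32, 13, 23, 33]
```

If the interleaved buffer built this way already looks wrong before encoding, the problem is in the plane copy; if it looks right, the problem is in the encoder parameters (for instance, the source color space versus the JC_* destination color passed to SetParams).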
https://community.intel.com/t5/Intel-Integrated-Performance/What-is-the-ImageDataPtr-class/td-p/1046885
Fabulous Adventures In Coding

Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.

A while back I was discussing the differences between VBScript's For-Each and JScript's for-in loops. A coworker asked me today whether there was any way to control the order in which the for-in loop enumerates the properties. He wanted to get the list in alphabetical order for some reason. Unfortunately, we don't support that. The specification says (ECMA 262 Revision 3, section 12.6.4) that the mechanics of enumerating the properties is implementation dependent. Our implementation enumerates the properties in the order that they were added. (This also implies that properties added during the enumeration will be enumerated.)

If you want to sort the keys then you'll have to do it the hard way -- which, fortunately, is not that hard. Enumerate them in by-added order, add each to an array, and sort that array.

var myTable = new Object();
myTable["blah"] = 123;
myTable["abc"] = 456;
myTable[1234] = 789;
myTable["def"] = 346;
myTable[345] = 566;

var keyList = new Array();
for(var prop in myTable)
    keyList.push(prop);
keyList.sort();

for(var index = 0; index < keyList.length; ++index)
    print(keyList[index] + " : " + myTable[keyList[index]]);

This has the perhaps unfortunate property that it sorts numbers in alphabetical, not numeric order. If that bothers you, then you can always pass a comparator to the sort method that sorts numbers however you'd like.

Strange. Seems like all the keys are converted to strings. Is that because they become field names on the object?
Here's what I mean (sorry about the verbosity):

//-------------------------------------------------------------------------
// TestSorting()
//-------------------------------------------------------------------------
function TestSorting()
{
    print();
    print('TestSorting()');
    print('-------------');

    var myTable = new Object();
    myTable["Blah"] = 123;
    myTable["abc"] = 456;
    myTable[1234] = 789;
    myTable["def"] = 346;
    myTable[345] = 566;

    var keyList = new Array();
    for(var prop in myTable)
        keyList.push(prop);

    keyList.sort( Comparer );

    for( var index in keyList )
        print( 'myTable[{0}] = {1}', keyList[index], myTable[keyList[index]] );
}

//-------------------------------------------------------------------------
// Comparer()
// Silly sort method of the collection seems to convert all the indices to
// strings, so that it is not easy to sort the numbers separately from the
// strings.
//-------------------------------------------------------------------------
function Comparer(a,b)
{
    var result = -1;
    print('Comparer(): {0} is a {1}, {2} is a {3}', a, typeof(a), b, typeof(b) );
    if( typeof(a) == 'number' )
    {
        print('{0} is a number.',a);
        if( typeof(b) == 'number' )
            result = a - b;
        else
            result = -1;
    }
    else
    {
        if( typeof(b) == 'number' )
            result = 1;
        else
        {
            if( a.toLowerCase() == b.toLowerCase() )
                result = 0;
            else if( a.toLowerCase() > b.toLowerCase() )
                result = 1;
            else
                result = -1;
        }
    }

    if( result == 0 )
        print( '{0} and {1} are equal.', a, b );
    else if( result < 0 )
        print( '{0} is less than {1}.', a, b );
    else
        print( '{0} is greater than {1}.', a, b );

    return result;
}

//-------------------------------------------------------------------------
// print()
//
// Shorthand for WScript.echo() that also does positional parameter
// substitution (as in C#).
//-------------------------------------------------------------------------
function print(msg)
{
    var args = print.arguments;
    if(args.length==0)
        var msg = '';   // Allows for "print();"
    else
    {
        // Parameter substitution à la C#:
        for( var i = 1; i < args.length; i++ )
            while( msg.indexOf('{' + (i-1) + '}') >= 0 )
                msg = msg.replace( '{' + (i-1) + '}', '' + args[i] );
    }
    WScript.echo(msg);
}

TestSorting();

The results printed out are:

TestSorting()
-------------
Comparer(): def is a string, 345 is a string
def is greater than 345.
Comparer(): def is a string, abc is a string
def is greater than abc.
Comparer(): def is a string, 1234 is a string
def is greater than 1234.
Comparer(): def is a string, Blah is a string
def is greater than Blah.
Comparer(): abc is a string, 345 is a string
abc is greater than 345.
Comparer(): abc is a string, Blah is a string
abc is less than Blah.
Comparer(): Blah is a string, 1234 is a string
Blah is greater than 1234.
Comparer(): Blah is a string, 345 is a string
Blah is greater than 345.
Comparer(): abc is a string, 345 is a string
abc is greater than 345.
Comparer(): abc is a string, 1234 is a string
abc is greater than 1234.
Comparer(): abc is a string, 345 is a string
abc is greater than 345.
Comparer(): 345 is a string, 1234 is a string
345 is greater than 1234.
myTable[1234] = 789
myTable[345] = 566
myTable[abc] = 456
myTable[Blah] = 123
myTable[def] = 346

So it seems like it would be quite tedious to write all the code necessary to sort by keys such that the numeric keys are sorted numerically...

Yes, that was what I intended to imply by the last two sentences in the post. All object property slot names are strings, whether they are initially numbers or not. However, it is not _particularly_ tedious to write a comparator which compares numbers numerically. Just check both strings to see if they are numbers; if they are, compare them as numbers, otherwise compare them as strings.

Ah, I thought you were implying that the default sort would be alphabetical because the keys would be converted to string for sorting purposes, not that the keys were converted to string in the first place. The difference is that the comparer would be simpler in that you could simply check the type.
Since the comparer is getting all strings no matter what, you have to look at the string and assume what data type it was in the first place: "123" may have been an int or it may have been the string "123". I was thinking that the key object type was maintained and would work like it does in Python, for example:

myTable = {}
myTable["Blah"] = 123
myTable["abc"] = 456
myTable[1234] = 789
myTable["def"] = 346
myTable[345] = 566
myTable['345'] = 999   # This '345' key is distinct from the 345 entry.

keys = myTable.keys()

def KeyComparer( a, b ):
    result = 0
    if type(a) == type(b):
        if type(a) == type(''):
            result = cmp(a.lower(), b.lower())
        else:
            result = cmp( a, b )
    elif type(a) == type(''):
        result = -1
    else:
        result = 1
    description = 'equal to'
    if result > 0:
        description = 'greater than'
    elif result < 0:
        description = 'less than'
    print 'KeyComparer: %s is %s %s.' % (str(a), description, str(b))
    return result

keys.sort(KeyComparer)
print 'Keys: %s' % keys

So I guess in JScript you would need to use parseFloat or a regular expression and assume all numerical-looking keys were originally numbers.

While we're on the subject of semantic differences between seemingly similar syntaxes, let me just take this opportunity to quickly answer a frequently asked question: why doesn't for-in enumerate a collection?

Is there any similar code that could enumerate properties for ActiveXObjects, or would I have to use IUnknown in C++? ... code such as this just doesn't work!?

var objCadImage = new ActiveXObject("TurboCAD.Drawing")
for( n in objCadImage)
{
    document.write(n+"<br>");
}
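The numeric-aware ordering discussed in these comments can also be expressed with a sort key rather than a comparator. Here is a minimal sketch in modern Python (the key list and the sort_key helper are illustrative, not taken from the comments above; numeric-looking keys are placed first, in numeric order):

```python
keys = ["Blah", "abc", "1234", "def", "345"]

def sort_key(k):
    # Keys that parse as numbers sort first, in numeric order;
    # everything else sorts afterwards, case-insensitively.
    try:
        return (0, float(k), "")
    except ValueError:
        return (1, 0.0, k.lower())

print(sorted(keys, key=sort_key))
# ['345', '1234', 'abc', 'Blah', 'def']
```

A key function is computed once per element, which also avoids the O(n log n) repeated parsing a comparator would do.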
http://blogs.msdn.com/b/ericlippert/archive/2003/10/01/53134.aspx
How can I speed up symbolic function evaluation?

I have a few hundred lines of code that calculate a system of ODEs. The resulting system of several hundred to several thousand equations takes a long time to integrate. (I'm using SciPy's integrate interface; testing on a small case suggested it's several times faster than GSL's ode_solver for my problem.) Of course, most of the time is spent in evaluating my equations. I'm already using fast_callable to speed up the calculations. It made a wonderful difference. But it's still taking hours or even days for the larger systems. I want to put this integration inside an optimizer, so any speed gain is great.

Stealing the example from the reference manual (), I'm currently doing something like the following.

import scipy
from scipy import integrate

var('x, y')
mu = 10.0
dy = (y, -x + mu*y*(1-x**2))
dy_fc = tuple(fast_callable(expr, domain=float, vars=(x,y)) for expr in dy)

def f_1(y, t):
    return [f(*y) for f in dy_fc]

xx = scipy.arange(0, 100, .1)
yy = integrate.odeint(f_1, [1, 0], xx)

I don't think I can speed up the integrate routine. Can I do anything to speed up f_1? Thanks!
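One low-effort thing worth trying is reducing the per-call overhead of f_1 itself: the list comprehension makes one Python call (plus argument unpacking) per equation per solver step, so for a generated system, emitting the whole right-hand side as a single hand-inlined function can help. A sketch, with plain lambdas standing in for the fast_callable objects (the names here are illustrative, not from the question):

```python
mu = 10.0

# Stand-ins for the fast_callable objects built in the question.
dy_fc = (lambda x, y: y,
         lambda x, y: -x + mu * y * (1 - x**2))

def f_list(y, t):
    # Original style: a fresh list and one call + unpack per equation, per step.
    return [f(*y) for f in dy_fc]

def f_inline(y, t):
    # Hand-inlined right-hand side: one function call, no per-equation dispatch.
    x, v = y
    return [v, -x + mu * v * (1 - x**2)]

print(f_list([1.0, 0.0], 0.0))    # [0.0, -1.0]
print(f_inline([1.0, 0.0], 0.0))  # [0.0, -1.0]
```

Either function can be passed to odeint unchanged; for hundreds of equations, the generated-code version of f_inline (or a Cython/fast_float variant of it) is where the per-step savings add up.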
http://ask.sagemath.org/question/7993/how-can-i-speed-up-symbolic-function-evaluation/
I think it would fall into the practical joke section.

Whatever... never said I knew everything; that was your ill-guided mind thinking technicalities. Cheap shot.

.. yep, yeah, just a bunch of "raw data" and text. I thought I was doing something wrong until I cat the file. Normally one could do something like this, but WE have no real idea of the format in the file ... from that other program I used it is not a valid tcpdump file, and still, if binary, one still needs to know what order to get the information in.

Yes/?/ Linux tcpdump file.

$ sudo tcpdump -c 3 -w dumpTCP -i wlan0
tcpdump: listening on wlan0, link-type EN10MB (Ethernet), capture size 262144 bytes
3 packets captured
272 packets received by...

I have no idea how it works under the hood. I only know it is now acting like a 1d array because one no longer has to reference both whatever them [ ] <<--- are called. #include <iostream>...

Appearances can be deceiving. :D :D You need to know what you're looking for, in the order you're looking for it, so you can scan it in the same order using the proper data types to get it. Or use a delimiter with strtok to find key words...

yeah < > I had to switch gears, #include <iostream> #include <vector> using namespace std; int main() { ...........

Yeah, and I am getting major warnings for your pointers, and what is the k++ for??? Just take out the rewind; I was fiddling with it and forgot that one. I do not know what your dumpfile looks like inside, but make this change to see what you're getting. You were trying to print out str when it has nothing in it. printf("\nftell: %d\n",...

Printing out a 2d char array: in C++ it just whacks the hell out of it, and comes up with its own abbreviations for what is in the array, and in C it chops off part of the last word in the first...

To get rid of the zero array? You mean the element? Just do not use it; start with 1 instead. (keyboard broke, no w, got a vv instead?)
First think of what information you need for a store to...

I've always run into issues using direct assignment of a char array. Let me go check that over and give it a try (again). scanf is not one of my favorite functions. Before time runs out to fix it in...

#include <stdio.h>
#include <stdlib.h>
/*
 * 1. Declare a structure which contains minimum two elements
 * 2. in main function declare a local variable of the structure
 * 3. read in the...
 */

It does not really have anything to do with Code::Blocks; Code::Blocks is just a platform to write your code in, and it gives you a button to "push" to make compiling your code a little easier. You just...

Take a good hard look at how fscanf is being used without the 'file' included in it. #include <stdio.h> #include <stdlib.h> #include <string.h> struct storagemanipulation { char...

You know, that really takes the fun out of hacking, and mods to the game. Look at Doom and its other games. You could even set it up long term to write another package one needs so they can hack...

C Language Keywords

That is what is called pseudo code. It is used to give the other person an idea of what is to be written — for a loop, in this case, to work. What you make the "do something" be is entirely up to the programmer. ...

If you take a good look at this you may see what laserlight is speaking about in the usage of fscanf: it has a specific set of rules for the format of the function's parameters, for...
https://cboard.cprogramming.com/search.php?s=e5495471b72bf1832ae22b6ef343098e&searchid=7428277
CC-MAIN-2021-43
refinedweb
707
82.54
Enterprises value continuous integration and testing since it allows for seamless, parallel testing. Companies can use Jenkins pipelines to automate testing for developers. If you’re an enterprise developer looking to maximize your company’s DevOps, this tutorial can help you quickly become knowledgeable on building pipelines as code. In this tutorial, see how to install and configure Jenkins on the IBM Cloud Kubernetes Service. Learn how to use Helm to install Jenkins and then set it up to run continuous integration (CI) pipelines on the same Kubernetes cluster. The pipeline that you create runs a test stage. If the test is successful, the pipeline creates a build stage next. After both of those stages successfully complete, the container that’s built in the build stage is pushed to the IBM Container Registry. By using the Container Registry, you have the added benefit of a built-in Vulnerability Advisor that scans for potential security issues and vulnerabilities (and it’s completely free). Prerequisites To complete this tutorial, you need to do the following: - Create a Kubernetes cluster via the IBM Cloud Kubernetes Service installation instructions. - After the Kubernetes cluster is ready, follow these instructions to install the Helm client. - Note: You need the paid tier of IBM Cloud Kubernetes Service, because you need load balancing capabilities to complete this tutorial. See the catalog for more information. - Get access to the Container Registry. Estimated time Completing this tutorial should take about 15 minutes. Steps 1. Install Tiller You should already have the Helm client installed, per the prerequisites, so now you need to install the server-side component of Helm called Tiller. Tiller is what the Helm client talks to, and it runs inside your cluster, managing the chart installations. (For more information on Helm you can check out this Helm 101 repo.) $ helm init Running helm ls should execute without error.
If you see an error that says something like Error: could not find a ready tiller pod, wait a little longer and try again. 2. Install Jenkins Prior to installing Jenkins, you need to first create a persistent volume: $ kubectl apply -f volume.yaml Now you can install Jenkins by using the Helm chart in the stable repository. This is the default Helm repository, specifically this chart, which will be installed. This chart has a number of configurable parameters. For this installation, the following parameters need to be configured: rbac.install– Setting this to truecreates a service account and ClusterRoleBinding, which is necessary for Jenkins to create pods. Persistence.Enabled– Enables persistence of Jenkins data by using a PVC. Persistence.StorageClass– When the PVC is created it will request a volume of the specified class. In this case, it is set to jenkins-pv, which is the storageClassNameof the volume that was created previously. Setting this to the same value as the class name from volume.yamlensures that Jenkins will use the persistent volume already created. Here’s how to set the same value: $ helm install --name jenkins stable/jenkins --set rbac.install=true \ --set Persistence.Enabled=true \ --set Persistence.StorageClass=jenkins-pv 3. Log in to Jenkins As part of the chart installation, a random password is generated and a Kubernetes secret is created. A Kubernetes secret is an object that contains sensitive data such as a password or a token. Each item in a secret must be base64 encoded. This secret contains a data item named ‘jenkins-admin-password’, which must be decoded. The following command gets the value of that data item from the secret named ‘jenkins’ and decodes the result. $ printf $(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo A load balancer is then created for Jenkins. When that’s ready, you can log in with the username admin and the password from the previous step. 
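Backing up a step: the tutorial applies volume.yaml without ever showing it. A minimal sketch of what it might contain follows — the name, capacity, and hostPath are my own assumptions; the only value that must line up with the tutorial is storageClassName: jenkins-pv, which matches the Persistence.StorageClass chart parameter used above.

```yaml
# Hypothetical volume.yaml: a PersistentVolume whose storageClassName
# matches the Persistence.StorageClass value passed to the Helm chart
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home
spec:
  storageClassName: jenkins-pv
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/jenkins-home
```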
Run the following commands to determine the login URL for Jenkins. $ export SERVICE_IP=$(kubectl get svc --namespace default jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}") $ echo 4. Configure Jenkins Configure credentials In order for Jenkins to be able to launch pods for running jobs, you have to configure the service account credentials. Navigate to Manage Jenkins > Configure Jenkins > Cloud > Credentials, then select “Add.” Configure containers By default, the agent pod contains just one container. However, you are not restricted to running everything on that container. You can add any number of containers to the agent pod. For this pipeline example, you’ll need to add a NodeJS container. This is configured in Manage Jenkins > Configure Jenkins > Cloud. Create a pipeline Pipelines are used to model a build process. Agents are dynamically created for each pipeline run. Each agent is a pod, and a pod is just a collection of one or more containers; the containers that comprise the agent pod are what you configured in the previous step. Steps in the pipeline can be run on any of the containers in the agent pod. First, create a new pipeline item by selecting “Pipeline” and click “OK.” The image below shows the new item screen with “Pipeline” selected. “Test” is the name of our pipeline. The text that follows is the actual pipeline definition that gets entered after you hit “OK” on the page in the screenshot. Then use the following pipeline: pipeline { agent any stages { stage('Test') { steps { container('nodejs') { sh "node --version" } } } } } This simple pipeline only has one stage named ‘Test’. This stage has only one step that executes just one single command. The key part of the step definition is the container('nodejs') statement, which tells Jenkins to run the step on the container named ‘nodejs’ that was configured in the previous step.
Each item that is prefixed with sh will be executed in a new shell. In a real pipeline, this is where you’d do actual work, such as running unit tests. Add additional containers Different steps of the pipeline can run on different containers. This allows you to do things like run tests for different parts of a codebase in language-specific containers. Because the stages execute sequentially, you could also have a ‘deploy’ stage that runs after tests pass to deploy your application. It could be a kubectl pod to deploy to Kubernetes or a helm pod to update an existing chart. 5. Build Docker images Docker images can be built as part of the pipeline. Create another container named build by using the alpine image. The Docker socket from the host also needs to be shared with the agent containers by creating a host path volume. Volume configuration is under Manage Jenkins > Configure Jenkins > Cloud. A new stage can then be added to the pipeline: stage('Build') { steps { container('build') { sh 'apk update && apk add docker' sh 'docker build -t application .' } } } Push images to the IBM Container Registry Finally, you can push your images to the IBM Container Registry. Automatically pushing images requires an API key: $ ibmcloud iam api-key-create This API key can be passed into the pipeline via environment variables. In the container configuration, add a new environment variable named REGISTRY_TOKEN. Update the build stage to log in to the registry, tag the image, and push it. stage('Build') { steps { container('build') { sh 'apk update && apk add docker' sh 'docker login -u token -p ${REGISTRY_TOKEN} ng.registry.bluemix.net' sh 'docker build -t application .' sh 'docker tag application ${IMAGE_REPO}/application' sh 'docker push ${IMAGE_REPO}/application' } } } Next steps Congratulations – you completed the tutorial! So what’s next for you, now that you know how to create a Jenkins pipeline? You can extend your Jenkins knowledge by learning how to create a Canary deployment with Jenkins and Istio.
https://developer.ibm.com/technologies/devops/tutorials/deploy-and-run-jenkins-on-kubernetes-in-the-cloud
CC-MAIN-2020-50
refinedweb
1,283
56.55
Silverlight, with its powerful text and graphics manipulation capabilities and strong interaction with the scripting DOM, seems to be the perfect engine for a Captcha challenge. Download the Source (18.3KB) Captcha is a challenge-response test used to determine to a degree of confidence that the end user is really human and not a bot. You see Captcha on things like forums, sign-up forms, comment posts, and other places that may be susceptible to attacks by scripted bots. Usually, a Captcha involves playing a sound or displaying a distorted image that humans should be able to recognize but would be extremely hard for pattern-matching and/or optical character recognition (OCR) technology to decipher. Because the test is issued by a computer to test for a human, it is often referred to as a reverse-Turing test (a Turing test is designed by humans to test computers). The key to Captcha is that it makes it difficult, if not impossible, for scripted software to determine the answer to the challenge. Asirra is an example of a Captcha challenge that displays random images of cats and dogs. The user is asked to identify only the cats by clicking on them. While easy for humans, this test is extremely difficult for computer algorithms. As I was coding the other day, it occurred to me that Silverlight would be perfect for issuing Captcha challenges. It is very easy and straightforward to manipulate text and graphics to obscure the image, and furthermore, the output is NOT a simple bitmap that a robot could parse. Instead, it is an interactive plugin, so for a script to recognize the image, it would have to have its own Silverlight engine and be able to scan and recognize what Silverlight renders. I set out to produce a working example. I purposefully kept to the basics so those of you reading this who are interested have the opportunity to extend and add features. The first step was to create a simple Captcha challenge class to use.
namespace SilverCaptcha.MVVM { [ScriptableType] public class CaptchaViewModel { private const string CAPTCHA_KEY = "SilverCaptcha"; private static readonly char[] _charArray = "ABCEFGHJKLMNPRSTUVWXYZ2346789".ToCharArray(); public string CaptchaText { get; set; } public CaptchaViewModel() { char[] captcha = new char[8]; Random random = new Random(); for (int x = 0; x < captcha.Length; x++) { captcha[x] = _charArray[random.Next(_charArray.Length)]; } CaptchaText = new string(captcha); HtmlPage.RegisterScriptableObject(CAPTCHA_KEY, this); } [ScriptableMember] public bool IsHuman(string challengeResponse) { return challengeResponse.Trim().ToUpper().Equals(CaptchaText); } } } The class simply generates a random 8-character sequence. We supply a list of allowed values to avoid some of the common characters, like the number one and letter "I", that could be easily mistaken for one another. The property CaptchaText exposes this value. The IsHuman method is decorated with the ScriptableMember attribute. This makes it available to the HTML DOM so that you can call it directly from JavaScript. To call it, you must register a "key" or handle to the object. This is done in the constructor through the RegisterScriptableObject call. We are giving it a handle in the JavaScript DOM of "SilverCaptcha." We'll see later how this is used. Points of Extension - Add a method to "refresh" the challenge, i.e. generate a new one (hint: to do this, you'll also need to implement INotifyPropertyChanged) - Add parameters to control the challenge (length, use of alpha or numeric, etc) Next, we'll need to show the challenge. Because I chose to use the Model-View-ViewModel pattern (MVVM) we won't need any code behind for the XAML. Instead, everything will be bound in the XAML itself.
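(As an aside before the XAML: the challenge-generation logic above is language-neutral. Here is the same idea sketched in Python — purely my own illustration, with the alphabet copied verbatim from the C# class.)

```python
import secrets

# Alphabet copied from the C# class above: easily-confused glyphs
# such as 0/O and 1/I have been left out.
ALPHABET = "ABCEFGHJKLMNPRSTUVWXYZ2346789"

def make_challenge(length=8):
    """Pick `length` characters at random from the ambiguity-free alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_challenge())
```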
The XAML I came up with looks like this: <UserControl x: <UserControl.Resources> <MVVM:CaptchaViewModel x: </UserControl.Resources> <Grid Width="100" Height="25" Margin="2" DataContext="{StaticResource CaptchaVM}"> <Grid.Background> <LinearGradientBrush x: <GradientStop Color="LightBlue" Offset="1" /> <GradientStop Color="LightSalmon" /> </LinearGradientBrush> </Grid.Background> <TextBlock FontSize="14" Width="Auto" Height="Auto" HorizontalAlignment="Center" VerticalAlignment="Center" Text="{Binding CaptchaText}" RenderTransformOrigin="0.5,0.5"> <TextBlock.RenderTransform> <RotateTransform Angle="-5"/> </TextBlock.RenderTransform> <TextBlock.Foreground> <LinearGradientBrush EndPoint="0.5,0" StartPoint="0.5,1"> <GradientStop Color="Black" Offset="1" /> <GradientStop Color="Gray" /> </LinearGradientBrush> </TextBlock.Foreground> </TextBlock> </Grid> </UserControl> This is fairly straightforward. By referring to CaptchaViewModel in the resources, an instance is created that can then be referenced using the key. I bind the class to the data context of the main grid using {StaticResource CaptchaVM}. The gradient is used to obscure the image somewhat. Because the class itself is bound to the grid, we can now simply bind the CaptchaText property to the text block. We also give that a slight gradient to make it more confusing to image scanning software, then rotate it against the background. That's all there is to it! Points of Extension - Obviously, you could randomize or parameterize the angle of rotation and other attributes of the text - A more severe grid may help obscure the challenge but may also make it more difficult for human readers - Add a refresh button or icon to refresh the challenge for the user Now, let's use it in a page. The page itself is fairly straightforward: we have a form with a table, and inside the table is a reference to the Silverlight XAP file, a text box for the user to enter their response to the challenge, and a button to click. 
This section looks like this: <form id="_form1" runat="server" style="height:100%"> <table><tr><td align="center"> <div id="silverlightControlHost"> <object id="silverlightControl" data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="110" height="30"> ... etc, etc ... </object></div></td></tr><tr> <td align="center">Enter The Text You See Above:<br /><input type="text" maxlength="20" id="txtChallenge" /></td></tr><tr> <td align="center"><input type="button" onclick="validateCaptcha();return false;" value=" OK "/></td></tr></table> </form> The key here is that we've assigned the silverlight object an identifier of silverlightControl. If you use the Javascript or control method to load the silverlight, either way you just need a way to point to the object in the DOM for the Silverlight control. The JavaScript function is then very straightforward. We simply call the method we exposed in the Silverlight class that will compare the response to the challenge. That code looks like this: function validateCaptcha() { var silverlightCtrl = document.getElementById("silverlightControl"); var challenge = document.getElementById("txtChallenge").value; var valid = silverlightCtrl.Content.SilverCaptcha.IsHuman(challenge); alert(valid ? "You are human!" : "You don't appear to be human, try again!"); } This is what the final product looks like: As you can see, calling the Silverlight method is as simple as grabbing the control that hosts Silverlight, going into Content, then referencing the tag we gave it when we registered it in the Silverlight code. Then we simply call the method with the contents of the text box and it returns either true or false based on whether or not the response matches the challenge. This is obviously a fast and basic example but it demonstrates just how flexible and powerful Silverlight can be to generate "mini-apps" or controls that you can embed in your existing HTML pages. Download the Source (18.3KB)codeproject
http://csharperimage.jeremylikness.com/2009_08_01_archive.html
CC-MAIN-2017-26
refinedweb
1,204
54.63
When I tried to pass a Gradle property which has a class String to setThreadCount of the TestNG provider, I received the following error: Could not find method setThreadCount() for arguments [1] on object of type org.gradle.api.tasks.testing.testng.TestNGOptions. This error didn’t help me figure out what was wrong with my build script. I tried reading --debug and --stacktrace outputs, but neither mentioned a type conversion error, or anything similar. And so I was puzzled why `def t_threads = 1 ... setThreadCount(t_threads)` worked, but `def t_threads = test_threads ... setThreadCount(t_threads)`, where `test_threads` is a property that came from `gradle.properties`, didn’t.
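A likely explanation (my inference, not stated in the original thread) is that values loaded from gradle.properties always arrive as Strings, while the literal 1 is an Integer, so Groovy's runtime dispatch finds no setThreadCount(String) overload. Converting explicitly sidesteps this; a build.gradle sketch (the property and default value are assumptions of mine):

```groovy
// Hypothetical build.gradle fragment: gradle.properties values are Strings,
// so convert before handing them to the int-typed TestNG option.
test {
    useTestNG {
        // 'test_threads' is assumed to be defined in gradle.properties
        threadCount = (project.findProperty('test_threads') ?: '1').toInteger()
    }
}
```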
https://discuss.gradle.org/t/non-user-frendly-error-when-string-is-passed-to-testng-setthreadcount/27385
CC-MAIN-2018-51
refinedweb
102
61.46
Dec 22, 2016 · I'm creating new login in SQL Server 2008 with. SQL Server 2008 Error 233. 0 – No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233) NOTE: The issue has been confirmed with Jin Chen at Microsoft. This is on Windows 7 x64. I keep getting an error when setting up a linked server in SQL Server 2008 R2 x64 to a Microsoft Access database file (mdb or accdb file). I have. They signed up for my On Demand consulting services and we started looking at. Microsoft SQL Server – SQL Server, Microsoft®’s enterprise-level relational. then you might be familiar with this common error that occurs when connecting from WebSphere Application Server to SQL Server 2008: DSRA0010E: SQL State = S0001, Error. SQL Server 2008 / error 10054 – Developpez.net – Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Login timeout expired. Trying to connect from the server itself, but error 233. SQL Server 2005 & SQL Server 2008: you can have a maximum of nine SQL Server Agent Error Logs. There is no way you can increase this number. By default, the SQL Server Agent Error log is located in "Program Files\Microsoft. Aug 16, 2015. However, if you do this on a server that is also hosting MS SQL, you will. [298] SQLServer Error: 233, Client unable to establish connection.
No Process is on the other end of the pipe.) Microsoft SQL Server error: 233. to SQL Server 2008 R2 and. the "Microsoft SQL Server, Error: 233. namespace = Get-CimInstance -CimSession $CIMsession -NameSpace rootMicrosoftSQLServer -ClassName "__NAMESPACE. The number “10” refers to the SQL Server 2008. Microsoft SQL Server Error 233, No Process Is On The Other End Of The Pipe, if you are. Aug 8, 2012. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings. (Microsoft SQL Server, Error: 233). The schema of the SQL Server package configuration table (as shown in step 16 above) includes the following four columns: ConfigurationFilter – consider this as the.
http://visionsonore.net/microsoft-sql-server-error-233-in-sql-server-2008/
CC-MAIN-2018-30
refinedweb
594
66.54
Here are some short examples that have provided unexpected results for intermediate-level Python programmers. Add new examples by copying and editing the following sample. Edit your code so it is short and does nothing more than illustrate your problem. Often programming problems are solved by making the effort to create simple examples that reproduce them. This page is meant for problems that appear in a few lines of code after the extraneous fluff has been removed. Things that may appear normal to experienced Python programmers but weird, at least when one first encounters them. If you can answer your own question, do so. Otherwise leave the answer part for somebody else to edit. Sample (copy and edit me) print "edit this" Usage >>> edit this Question Edit this question. Answer Edit this answer. Metaclass Instantiation class A(object): def __init__(self): self.__setattr__('x',1) a = A() class M(type): def __init__(cls,cls_name,bases,cls_dict): super(M,cls).__init__(cls_name,bases,cls_dict) cls.__setattr__('y',1) class B: __metaclass__ = M pass Usage >>> import Play Traceback (most recent call last): File "<stdin>", line 1, in ? File "Play.py", line 14, in ? class B: File "Play.py", line 12, in __init__ cls.__setattr__('y',1) TypeError: Error when calling the metaclass bases expected 2 arguments, got 1 Question I thought creating the class B was analogous to creating the object a. Clearly, my use of __setattr__() has stretched the analogy too far but ... I don't know what's going on. Answer While it is true that - conceptually - a class is an instance of its metaclass, there is still a big difference between class objects and instance objects (Yes, it's confusing). A call to instance.func(a, b) gets translated into instance.__class__.func(instance, a, b) -- it's a bound method. A call to cls.func(a, b) does not get translated into cls.__class__.func(cls, a, b), because it is an unbound method.
It won't get translated into anything, and that's why __setattr__() complains: it expects the object whose attribute should get set as its first argument. You could have written M.__setattr__(cls, 'y', 1) instead. Hex values passed to external code ioctl(fd, 0xc0107307) Usage >>> fd = open("/dev/null") >>> import fcntl >>> fcntl.ioctl(fd,0xc0107307) Traceback (most recent call last): File "<stdin>", line 1, in ? OverflowError: long int too large to convert to int Question How do I pass in a hex cmd to ioctl, or some other external function that is expecting an int, not a long? Answer There doesn't seem to be any way to let Python know that you are trying to build an unsigned value, so: def convert32(number): return int(-(number ^ 0xffffffff)-1) ioctl(fd, convert32(0xc0107307))
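Note that the arithmetic trick above maps any 32-bit value n to n - 2**32, which is what you want when the high bit is set (as in 0xc0107307) but turns small values negative too. A struct-based reinterpretation (my own suggestion, not part of the original wiki page) handles both cases:

```python
import struct

def convert32(number):
    """Reinterpret an unsigned 32-bit pattern as a signed int (two's complement).

    Unlike the -(n ^ 0xffffffff) - 1 trick, values below 0x80000000
    come back unchanged instead of being forced negative.
    """
    return struct.unpack('=i', struct.pack('=I', number))[0]

print(convert32(0xc0107307))  # -1072663801
```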
http://wiki.python.org/moin/Intermediate%20Conundrums
CC-MAIN-2013-20
refinedweb
463
67.45
Introduction to Maverick By Kris Thompson 01 Aug 2003 | TheServerSide.com Introduction If you cut the fat on all the web presentation frameworks out there, what do you have left over? Maverick. I originally got into Maverick as I was researching other web frameworks for my user group,. What caught my attention was how Maverick claims that it is "Lighter than Struts". Having a strong Expresso background, I had always considered Struts to be the Duplo blocks of frameworks,that is, until I started building the Wafer example with Maverick. Maverick is a lightweight, simple to use web presentation framework based on the MVC design principle but offers a few variations of that design as well. It is agnostic about view technologies and can support JSP (JSTL), Velocity, XSLT and is operated by a simple XML file. As a member of the Wafer Project, I built the Wafer example demo weblog application (). There you can also find a .war file to see it in action. This article will cover the code and experience in building that application using the Maverick framework. This article is for those either getting started on Maverick or who need a quick understanding of how it operates; or those wondering why one would even use this framework. There are a few comparisons to other frameworks, mainly Struts. The Features So what makes Maverick unique? Below are some of the key features of the framework: - Light weight both in API and actual footprint - Support for multiple templating options - Controlled by a simple XML file - Support for Internationalization through the use of Shunts - Stable development life cycle - Extensibility and Pluggability - Performance Getting Started For nearly every framework the heart of it is its controlling mechanism. In Maverick's case it is the maverick.xml file. The file layout is simple and doesn't have a DTD to annoy you. 
<maverick version="2.0" default- <views> <view id="loginRequired" path="loginRequired.jsp"> <transform path="trimOutside.jsp"/> </view> <view id="loginFailed" path="loginFailed.jsp"> <transform path="trimOutside.jsp"/> </view> </views> <commands> <command name="welcome"> <view name="success" path="weblog/welcome.jsp"> <transform path="weblog/trimOutside.jsp"/> </view> <view name="success" mode="en" path="weblog/welcome.jsp"> <transform path="weblog/trimOutside.jsp"/> </view> <view name="success" mode="de" path="weblog/welcome_de.jsp"> <transform path="weblog/trimOutside.jsp"/> </view> </command> <command name="weblog"> <controller class="org.wafer.ctl.WebLog"/> <view name="loginRequired" ref="loginRequired"/> <view name="loginFailed" ref="loginFailed"/> <view name="success" path="weblog.jsp"> <transform path="trimInside.jsp"/> </view> </command> </commands> As you can see from the above snippet, there are two basic nodes to the XML file: <commands> and <views>. These aren't the only ones but are the most important. A command can have no more than one controller associated with it, and that controller may be reused in another command. Each command has a name, controller class (optional) and 1 to many view definitions. Each view element in the command node represents a "flow" option for that command. As shown in the weblog command example above, some views are defined inline (inside the command element) like success, and some are defined globally like loginRequired. These are simply name representations and you are free to name them whatever you like. The <views>, as mentioned earlier, are a way to globally define a flow for your site which is useful if you have common paths that most, if not all, <command> elements must follow such as those in the loginRequired example above. For Struts users these are similar to <global forwards>. Those views that are either unique or sparsely used can be defined inline with the <command>.
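One piece of plumbing the article takes for granted is how requests like addComment.m reach Maverick at all. A hedged web.xml fragment would map the dispatcher servlet to the *.m extension used by the links later in this article — the dispatcher class name below is from my memory of the Maverick 2.x distribution, so verify it against your copy:

```xml
<!-- Sketch only: wire the Maverick dispatcher to the *.m URLs used in this article -->
<servlet>
  <servlet-name>maverick</servlet-name>
  <servlet-class>org.infohazard.maverick.Dispatcher</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>maverick</servlet-name>
  <url-pattern>*.m</url-pattern>
</servlet-mapping>
```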
When defining a <view> you have the option to specify a transform element. The transform element is similar to a tile, but not as featureful; however, it is easier to use. The way it reads is the transform element wraps the view JSP. By inspecting the trimInside.jsp file you will notice the following JSTL call, <c:out value="${wrapped}"/> This basically takes the view's JSP, converts it into a String, and places it in the request attributes with the key of "wrapped". So in the maverick.xml example above, notice the command called weblog and inside it, the <view> called success. Here the weblog.jsp will be turned into a String and placed inside the trimInside.jsp file where the above JSTL call for value="${wrapped}" is found. You can have as many transforms as you like, but if you need more than 2 or 3 then you might want to consider using Tiles, or SiteMesh. Controller There are 4 types of Controllers you can extend: - ThrowawayBean2: Easiest of them to use. Uses controller-as-model pattern. - FormBeanUser: Similar to the Struts Action. Allows external beans to be populated as the model instead of the controller itself. - ThrowawayFormBeanUser: Similar to the ThrowawayBean2 but allows external beans to be populated. - ControllerWithParams: Allows you to pass parameters into the controller via the maverick.xml file. ThrowawayBean2 is the controller I chose in the Wafer project and is also the Controller used in the friendbook example that comes with Maverick. The ThrowawayBean2 controller follows the Controller-as-Model pattern, in which the controller acts both as an action class and as the model. This is achieved by placing getters and setters in the controller class which, after execution, places the entire controller in the request with the key of "model". With the help of the Apache BeanUtils package, it automatically populates the setters from the request in the controller. Typically, you have a separate class or set of classes to represent the model.
In the example below, you will notice the properties that would typically be associated with a Comment object; however, those properties are a part of the ViewComment controller. In one class you have the logic to retrieve the comment and store the values in itself, instead of storing the values in a separate object such as Comment. This approach makes it very easy to build simple applications once you get the hang of the Controller-as-Model pattern. public class ViewComment extends ControllerProtected { protected String subject = ""; protected String body = ""; protected Collection comments; protected Story story; public String getBody() { return body; } public String getSubject() { return subject; } public Collection getComments() { return this.comments; } public Story getStory() { return this.story; } public String securePerform() throws Exception { HttpServletRequest request = this.getCtx().getRequest(); // Story id String key = (String)request.getParameter("key"); if(key == null || key.equals("")) addError("gen_error", "Invalid key"); if(!this.hasErrors()) { WebLogStory weblogStory = WebLogStory.getWebLogStories(); Story story = weblogStory.getStory(key); this.story = story; this.comments = story.findAllComments(); } if (this.hasErrors()) { return ERROR; } else { return SUCCESS; } } } Note: The above class extends ControllerProtected which itself extends a few other classes (see below) which are very helpful since they provide a lot of fundamental features used on a website, like error handling and authorization. While not part of the core Maverick API, they are a part of the friendbook example bundled with Maverick. ControllerProtected → ControllerAuth → ControllerErrorable → ThrowawayBean2 The above controller, ViewComment, is a presentation state, which means its primary purpose is to set up the next state/page for the user. In the Wafer weblog example the user clicked on the link to view the comments for a story, passing along one variable called key which has the storyId in it.
The securePerform method will load the story and then the comments into the controller. If any errors exist then the method returns the String of "ERROR". This is very similar to the code in Struts: return (mapping.findForward("error")). Just make sure that the String you are returning is either defined as a global view or is defined as an inline view for that <command> in the maverick.xml file. Behind the scenes, in the ThrowawayBean2 class, ViewComment is being placed in the request under the key name of model. <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core" %> <b><c:out value="${model.story.subject}"/></b> <br /> <br /> <c:out value="${model.story.body}"/> <br /> <br /> <a href="addComment.m?key=<c:out value="${param.key}"/>">Add Comment</a> <br /> <hr/> <br /> <c:forEach var="comment" items="${model.comments}"> <b><c:out value="${comment.subject}"/></b> <br /> <br /> <c:out value="${comment.body}"/><br/> <br /> <br /> <a href="addComment.m?key=<c:out value="${param.key}"/>">Reply</a> <br/> <br/> </c:forEach> This snippet (above) is from viewComments.jsp. Notice how the controller-as-model is used on the JSP side. Here the variable called model is the ViewComment object itself, and it has getters for comments and story. Since we are using the ThrowawayBean2 Controller, which uses the controller-as-model pattern, any values you need to display on the JSP page will need to have getters associated with them, such as Story and Comments (see the above JSP snippet). Depending on your needs this can be a great strength or weakness. If this page and controller were associated with a process state (i.e., submit of a form) then the controller would need to have setter methods for any input values that you would want to collect and process in the perform method. See the example AddCommentSubmit.java. FormBeanUser is different from ThrowawayBean2 in that this does not follow the controller-as-model pattern. Here you pass to the controller the class that will be used to represent the model. This is helpful if you want to persist the model in the session or simply don't like the clutter of the controller-as-model pattern. ViewComments2 has the same functionality as ViewComment except it extends FormBeanUser.
This controller is similar to how Struts operates.

public class ViewComments2 extends FormBeanUser {
    public String perform(Object form, ControllerContext cctx) throws Exception {
        HttpServletRequest request = cctx.getRequest();
        MyFormBean2 formBean = (MyFormBean2)form;

        // Story id
        String key = (String)request.getParameter("key");
        if (key == null || key.equals("")) {
            return ERROR;
        }

        WebLogStory weblogStory = WebLogStory.getWebLogStories();
        Story story = weblogStory.getStory(key);
        formBean.setStory(story);
        formBean.setComments(story.findAllComments());
        return SUCCESS;
    }

    public Object makeFormBean(ControllerContext cctx) {
        HttpServletRequest request = cctx.getRequest();
        MyFormBean2 form = new MyFormBean2();
        return form;
    }
}

The big difference in the JSP would be the call to the form after the model. <c:out

MyFormBean2 has the getters and setters for the properties of the page.

ThrowawayFormBeanUser is a hybrid between ThrowawayBean2 and FormBeanUser in that it is instantiated like the ThrowawayBean2 controller but allows a separate class to act as the model instead of the controller.

ControllerWithParams is a controller which can have parameters passed to it. For example, you might need to do the following:

<controller class="Login">
    <param name="secure" value="true"/>
</controller>

Internationalization

Maverick has chosen a very different route from the other frameworks to tackle this problem: they call it shunting. Shunts are best viewed as switches that change the view depending on the locale set in the browser. These switches are referenced as modes. Below, I took the welcome command and added multiple views based on the mode set. If your browser is set to German, then welcome_de.jsp will be displayed to the user.
<command name="welcome">
    <view name="success" mode="en" path="welcome.jsp">
        <transform path="trimOutside.jsp"/>
    </view>
    <view name="success" mode="de" path="welcome_de.jsp">
        <transform path="trimOutside.jsp"/>
    </view>
</command>

You still have the problem of error messages being displayed in a single language, and now you have multiple pages to keep in sync when changes occur to the presentation; however, for simple solutions this is very easy. See the JavaDocs for all of the possible modes.

Maverick Options

While Maverick as a presentation framework provides all the basic needs to build web sites, it also has many optional features (available from their download page) that you can add to your project depending on your needs. This is where the extensibility of Maverick really shines through. Below is a list of each option:

- Velocity - Provides Velocity support and examples
- FOP - Create PDF files as your views
- Domify - Originally created as part of Maverick, but has since been separated out into its own project. In a nutshell, it allows you to turn your model into a DOM object and then use XSL to display it in a view
- Betwixt - An alternative to Domify
- Perl - Run Perl through a Maverick transform type
- Struts - Provides tools to help migrate a Struts application to Maverick

Many of these packages include the friendbook example to help you get a head start on using them. Given the scope of this article, I won't go into much detail on every option, but I thought that going over the Velocity and Domify options would be helpful.

Velocity

Velocity is a Java templating engine used by many frameworks like Turbine, WebWork, and JPublish, and with a few configuration changes from the default settings it can easily be used by Maverick too. According to the Maverick manual, you do not need the velocity-opt package, but this is simply not true. Download the velocity-opt package and copy velocity-tools-view-0.6.jar and velocity-1.3-rc1.jar into your lib directory.
Next, modify the web.xml file by adding the VelocityViewServlet and the .vm mapping definition.

<servlet>
    <servlet-name>velocity</servlet-name>
    <servlet-class>
        org.apache.velocity.tools.view.servlet.VelocityViewServlet
    </servlet-class>
    <load-on-startup>10</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>velocity</servlet-name>
    <url-pattern>*.vm</url-pattern>
</servlet-mapping>

Lastly, modify your maverick.xml as in the following snippet and you're ready to roll:

<command name="welcome">
    <view path="welcome.vm">
        <transform path="trimOutside.vm"/>
    </view>
</command>

The opt-velocity download also provides a good friendbook example implemented with Velocity, and a TransformFactory for the experimental DVSL technology developed by the Velocity team.

Domify

Domify is a "cheap" way to implement XSLT. Domify started out as part of the Maverick project but later broke away into its own project. In a nutshell, Domify allows you to take the Maverick model object and turn it into a DOM object that can be used on the presentation side via XSLT. The opt-domify package has the friendbook example done in Domify for further inspection.

Conclusion

Having worked on many web projects myself and with many web frameworks, I consider Maverick to be a "keeper" in my toolbox of Java apps. Its features of being easy to use, easy to extend, and quick to build with make it my number one choice for simple apps, much like the Wafer weblog application was. Besides Maverick's key features, I was also very impressed with the quality of the Maverick code itself. This stuff was not created by any amateurs, which is important if you want to extend it. While the community may not be as large as with other frameworks, it is large enough to get help online. Does Maverick mean an end to Struts? Absolutely not. Struts has many more features and capabilities than Maverick does; however, those features might not be of any interest to you.
If you want to learn only one framework, then maybe Struts is your solution; but remember, when holding a hammer, not everything is a nail.

Author

Kris Thompson leads a local Java user group in Boulder, Colorado, that focuses solely on web frameworks; he is also a contributor to the Expresso framework and the Wafer project. Email Kris at info@flyingbuttressconsulting.com.
http://www.theserverside.com/news/1364591/Introduction-to-Maverick
Fennec - A test helper providing RSPEC, Workflows, Parallelization, and Encapsulation.

Fennec started as a project to improve the state of testing in Perl. Fennec looks to existing solutions for most problems, so long as the existing solutions help meet the features listed below.

Fennec versions below 1.000 were considered experimental, and the API was subject to change. As of version 1.0 the API is considered stabilized. New versions may add functionality, but not remove or significantly alter existing functionality.

Forking in tests just plain works. You can fork, and run assertions (tests) in both processes.

Encapsulated test groups can be run individually, without running the entire file. (See Test::Workflow)

Encapsulated test groups can be run in parallel if desired. (On by default with up to 3 processes.) See "PARALLEL" for details about what runs in which processes.

Test groups can be sorted, randomized, or sorted via a custom method. (See Test::Workflow)

Fennec is compatible with Test::Builder based tools. Test::Builder2 support is in place, but experimental until Test::Builder2 is officially released. Fennec is configurable to work on alternatives to Test::Builder.

You do not need to put anything such as done_testing() at the end of your test file. You do not need to worry about test counts.

Annoyed when your test failure and the diagnostic messages about that test are decoupled?

ok 1 - foo
ok 2 - bar
not ok 3 - baz
ok 4 - bannana
ok 5 - pear
# Test failure on line 67
# expected: 'baz'
# got: 'bazz'

This happens because normal output is sent to STDOUT, while errors are sent to STDERR. This is important in a non-verbose harness so that you can still see error messages. In a verbose harness, however, it is just plain annoying. Fennec checks the verbosity of the harness, and sends diagnostic messages to STDOUT when the harness is verbose. Note: This is not IO redirection or handle manipulation; your warnings and errors will still go to STDERR.
These test groups will be run in parallel. They will also be run in random order by default. See "CONFIGURATION" for more details on controlling this behavior. Also see Test::Workflow for more useful and powerful test groups and structures.

If you use Fennec::Declare you can write tests like this:

package MyTest;
use strict;
use warnings;
use Fennec;

tests foo {
    ok( 1, 'bar' );
}

1;

That's right: no => sub and no trailing ';'.

1: package MyTest;
2: use strict;
3: use warnings;
4: use Fennec;
5:
6: tests foo => sub {
7:     ok( 1, 'bar' );
8: };
9:
10: tests another => sub {
11:     ok( 1, 'something passed' );
12: };
13:
14: 1;

In the above code there are 2 test groups, 'foo' and 'another'. If you wanted, you could run just one, without the others running. Fennec looks at the 'FENNEC_TEST' environment variable. If the variable is set to a string, then only the test groups with that string as a name will run.

$ FENNEC_TEST="foo" prove -Ilib -v t/FennecTest.t

In addition, you could provide a line number, and only the test group defined across that line will be run. For example, to run 'foo' you could give the line number 6, 7, or 8 to run that group alone.

$ FENNEC_TEST="7" prove -Ilib -v t/FennecTest.t

This will run only test 'foo'. The use of line numbers makes editor integration very easy. Most editors will let you bind a key to running the above command, replacing t/FennecTest.t with the current file and automatically inserting the current line into FENNEC_TEST.

Insert this into your .vimrc file to bind the F8 key to running the current test in the current file:

Fennec imports several helper modules by default:

Test::More - The standard Perl test library.

Test::Exception - One of the more useful test libraries, used to test code that throws exceptions (dies).

Test::Warn - Test code that issues warnings.

Test::Workflow - Provides RSPEC and several other workflow-related helpers. Also provides the test group encapsulation.

Mock::Quick - Quick and effective mocking with no action-at-a-distance side effects.

Test::Class - A Fennec class can also be a Test::Class class. If Fennec did not support this, who would use it?
There is currently experimental support for Test::Builder2. Once Test::Builder2 is officially released, support will be finalized.

When tests run in parallel (the default) the following notes should be observed.

describe something => sub { ... }

Blocks like the above run in the parent process; all will run BEFORE any test or state-building blocks.

tests foo => sub { ... };

Test blocks like this will each run in their own process.

before_each set_it => sub { ... };
tests test_it => sub { ... };

XXX_each will run in the same process as the test block, that is, after the fork() call.

before_all set_once => sub { ... };
tests test_it => sub { ... };

XXX_all will run in the parent process, that is, before fork() is called to run the test block.

case c1 { ... }
case c2 { ... }
test t1 { ... }
test t2 { ... }

This effectively builds 4 combinations to run: c1+t1, c1+t2, c2+t1, c2+t2. Each of these 4 combinations will be run in its own process. The case will run first, followed by the test block. Be aware, it is easy to define an exponential number of tests using the case+test combiner.

There are 2 ways to configure Fennec. One is to specify configuration options at import. The other is to subclass Fennec and override the defaults() method. Configuration options:

utils - Provide a list of modules to load. They will be imported as if you typed use MODULE. You can specify arguments for each class like so:

use Fennec utils => [ 'My::Util' ], 'My::Util' => [ 'Arg1', 'Arg2' ];

parallel - Specify the maximum number of processes Fennec should use to run your tests. Set to 0 to never create a new process. Depending on conditions, 1 MAY fork for test groups while still only running one at a time, but this behavior is not guaranteed. Default: 3

runner_class - Specify the runner class. Default: Fennec::Runner

with_tests - Load test groups and workflows from another class. This allows you to put test groups common to many test files into a single place for re-use.

test_sort - This sets the test sorting method for Test::Workflow test groups.
Accepts 'random', 'sort', a codeblock, or 'ordered'. This uses fuzzy matching; you can use the shorter versions 'rand' and 'ord'. Defaults to 'rand'.

This will cause a test block to abort after a specified timeout (the value is passed directly to alarm). NOTE: This uses the alarm($timeout) function. If your tests include alarms the behavior is not defined. One will certainly clobber the other; yours will most likely come out on top, but that is not guaranteed in any way. Only use this while debugging, and remove it afterwards.

'random' - Will shuffle the order. Keep in mind Fennec sets the random seed using the date, so that tests will be determinate on the day you write them, but random over time.

'sort' - Sort the test groups by name. When multiple tests are wrapped in before_all or after_all, the describe/cases block name will be used.

'ordered' - Use the order in which the test groups were defined.

codeblock - Specify a custom method of sorting. This is not the typical sort {} block; $a and $b will not be set.

use Fennec
    parallel => 5,
    utils    => [ 'My::Util' ],
    ... Other Options ...;

package My::Fennec;
use base 'Fennec';

sub defaults {(
    utils => [qw/
        Test::More
        Test::Warn
        Test::Exception
        Test::Workflow
    /],
    utils_with_args => {
        My::Util => [qw/function_x function_y/],
    },
    parallel     => 5,
    runner_class => 'Fennec::Runner',
)}

# Hook, called after import
sub init {
    my $class = shift;
    # All parameters passed to import(), as well as caller => [...] and meta => $meta
    my %params = @_;
    ...
}

1;

This is a more complete example than the one given in the synopsis. Most of this actually comes from Method::Workflow; see those docs for more details. Significant sections are in separate headers, but all examples should be considered part of the same long test file.

NOTE: All blocks, including setup/teardown, are methods; you can shift @_ to get $self.
package MyTest;
use strict;
use warnings;

use Fennec
    parallel   => 2,
    with_tests => [qw/ Test::TemplateA Test::TemplateB /],
    test_sort  => 'rand';

# Tests can be at the package level
use_ok( 'MyClass' );

# Fennec works with Test::Class
use base 'Test::Class';

sub tc_test : Test(1) {
    my $self = shift;
    ok( 1, 'This is a Test::Class test' );
}

tests loner => sub {
    my $self = shift;
    ok( 1, "1 is the loneliest number... " );
};

tests not_ready => (
    todo => "Feature not implemented",
    code => sub { ... },
);

tests very_not_ready => (
    skip => "These tests will die if run",
    code => sub { ... },
);

Here setup/teardown methods are declared in the order in which they are run, but they can really be declared anywhere within the describe block and the behavior will be identical.

Fennec adds to the RSPEC toolset with the around keyword.

describe addon => sub {
    my $self = shift;

    around_each localize_env => sub {
        my $self = shift;
        my ( $inner ) = @_;
        local %ENV = ( %ENV, foo => 'bar' );
        $inner->();
    };

    tests foo => sub {
        is( $ENV{foo}, 'bar', "in the localized environment" );
    };
};

Cases are used when you have a test that you wish to run under several conditions.

1;

Mock::Quick is imported by default. Mock::Quick is a powerful mocking library with a very friendly syntax.

use Mock::Quick;

my $obj = obj(
    foo   => 'bar',          # define attribute
    do_it => qmeth { ... },  # define method
    ...
);

is( $obj->foo, 'bar' );
$obj->foo( 'baz' );
is( $obj->foo, 'baz' );
$obj->do_it();

# define the new attribute automatically
$obj->bar( 'xxx' );

# define a new method on the fly
$obj->baz( qmeth { ... });

# remove an attribute or method
$obj->baz( qclear() );

use Mock::Quick;

my $control = qclass(
    # Insert a generic new() method (blessed hash)
    -with_new => 1,

    # Inheritance
    -subclass => 'Some::Class',
    # Can also do -subclass => [ 'Class::A', 'Class::B' ],

    # generic get/set attribute methods.
    -attributes => [ qw/a b c d/ ],

    # Method that simply returns a value.
    simple => 'value',

    # Custom method.
    method => sub { ... },
);

my $obj = $control->package->new;

# Override a method
$control->override( foo => sub { ... });

# Restore it to the original
$control->restore( 'foo' );

# Remove the anonymous namespace we created.
$control->undefine();

use Mock::Quick;

my $control = qtakeover( 'Some::Package' );

# Override a method
$control->override( foo => sub { ... });

# Restore it to the original
$control->restore( 'foo' );

# Destroy the control object and completely restore the original class Some::Package.
$control = undef;

Mock::Quick uses Exporter::Declare. This allows for exports to be prefixed or renamed. See "RENAMING IMPORTED ITEMS" in Exporter::Declare for more information.

obj() - Create an object. Every possible attribute works fine as a get/set accessor. You can define other methods using qmeth {...} and assigning that to an attribute. You can clear a method using qclear() as an argument. See Mock::Quick::Object for more.

qclass() - Define an anonymous package with the desired methods and specifications. See Mock::Quick::Class for more.

qtakeover() - Take control over an existing class. See Mock::Quick::Class for more.

qclear() - Returns a special reference that, when used as an argument, will cause Mock::Quick::Object methods to be cleared.

qmeth() - Define a method for a Mock::Quick::Object instance.

When you use Fennec, it will check to see if you called the file directly. If you directly called the file, Fennec will restart Perl and run your test through Fennec::Runner.

When running a test group by line, Fennec takes its best guess at which group the line number represents. There are 2 ways to get the line number of a codeblock: The first is to use the B module. The B module will return the line of the first statement within the codeblock. The other is to define the codeblock in a function call, such as tests foo => sub {...}; tests() can then use caller(), which will return the last line of the statement.
Combining these methods, we can get the approximate starting and ending lines for codeblocks defined through Fennec's keywords. This will break if you do something like:

tests foo => \&my_test;
sub my_test { ... }

But might work just fine if you do:

tests foo => \&my_test;

sub my_test { ... }

But might run both tests in this case when asking to run 'baz' by line number:

tests foo => \&my_test;
tests baz => sub { ... }
sub my_test { ... }

Chad Granum exodist7@gmail.com

Fennec is free software; Standard perl licence.

Fennec is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the license for more details.
http://search.cpan.org/~exodist/Fennec-1.015/lib/Fennec.pm
Linq Prepend Method in C# With an Example

In this article, I am going to discuss the LINQ Prepend Method in C# with an example. Please read our previous article, where we discussed the LINQ Append Method with an example, before proceeding.

Linq Prepend Method in C#: The LINQ Prepend Method is used to add one value to the beginning of a sequence. The Prepend method, like the Append method, does not modify the elements of the sequence. Instead, it creates a copy of the sequence with the new element. The signature of this method is given below.

public static IEnumerable<TSource> Prepend<TSource>(this IEnumerable<TSource> source, TSource element);

Type Parameters:
- TSource: The data type of the elements contained in the sequence.

Parameters:
- IEnumerable<TSource> source: A sequence of values.
- TSource element: The value to prepend at the beginning of the sequence.

Returns:
- IEnumerable<TSource>: A new sequence that begins with the element.

Exceptions: When the source is null, it will throw an ArgumentNullException.

Note: This method is supported from .NET Framework 4.7.1 or later.

Example: The following example shows how to prepend a value to the beginning of a sequence using the Prepend method. The example code is self-explanatory, so please go through the comment lines.
using System;
using System.Collections.Generic;
using System.Linq;

namespace LinqDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            // Creating a list of numbers
            List<int> numberSequence = new List<int> { 10, 20, 30, 40 };

            // Trying to prepend 50
            numberSequence.Prepend(50);

            // It will not work because the original sequence has not been changed
            Console.WriteLine(string.Join(", ", numberSequence));

            // It works now because we are using a changed copy of the original list
            Console.WriteLine(string.Join(", ", numberSequence.Prepend(50)));

            // If you prefer, you can create a new list explicitly
            List<int> newnumberSequence = numberSequence.Prepend(50).ToList();

            // And then write to the console output
            Console.WriteLine(string.Join(", ", newnumberSequence));

            Console.ReadKey();
        }
    }
}

Output:
10, 20, 30, 40
50, 10, 20, 30, 40
50, 10, 20, 30, 40

In the next article, I am going to discuss the LINQ Zip Method with an example. Here, in this article, I tried to explain the LINQ Prepend Method in C# with an example.
https://dotnettutorials.net/lesson/linq-prepend-method/
In the quest for more dynamic content, web server technologies have flourished. One particular solution for providing this dynamic content is Java Servlet technology. As a replacement for the traditional CGI script approach, servlets give developers a powerful tool to create web-enabled applications. Not only does the servlet solution give developers the ease of using the Java language, it also offers a more efficient solution in terms of CPU power. A variety of servlet engines have been implemented to take advantage of this rapidly maturing technology. However, for the majority of these products, the sheer price of the commercial servlet engines puts this technology out of the hands of developers without the cash to front for them.

Enter Apache, the internet's most popular web server. The Apache group has already proven the ability of open source to produce high-quality, mission-critical software. The Apache JServ project is an open-source implementation of Sun's servlet specification. From those of you who want to hack on this stuff at home on your Linux box, to those who want to deploy servlet technology for business-critical applications, Apache JServ delivers an easily accessible, robust solution.

Some of the benefits of servlets over CGI include:

- Faster: Because servlets are kept loaded in the servlet engine's JVM, each incoming request does not need to instantiate a new servlet class. Once the servlet environment is initialized, the servlet is loaded and the servlet's service() method handles requests. However, you should be aware that more than one instance of your servlet can be loaded in the JVM, depending on the server load and the configuration of your servlet engine.

Each servlet must either directly or indirectly implement the javax.servlet.Servlet interface. Typically, it is most convenient to extend the javax.servlet.http.HttpServlet class and override the doGet() and/or doPost() methods.
The servlet API also allows for handling other types of HTTP 1.1 requests, like DELETE, PUT, TRACE, and OPTIONS. Since these types of HTTP 1.1 requests are uncommonly used, most developers will be interested in the doGet() and doPost() methods.

Two important classes you should know about are HttpServletRequest and HttpServletResponse. The HttpServletRequest object contains methods which let you inspect it and find information on the request type. In Perl CGIs, a lot of these variables are found in the %ENV associative array; things like METHOD (getMethod()), REMOTE_ADDR (getRemoteAddr()), CONTENT_LENGTH (getContentLength()), etc. Data passed to the servlet in forms is automatically parsed from the QUERY_STRING or from what's received in a POST request. These form variables are accessed by the getParameter(String name) method, where name is the name of the variable set in the HTML form.

The HttpServletResponse's most popular method would probably be its getWriter() method. This returns a PrintWriter to which you can println() output. If your servlet is going to be returning binary data, like a dynamically created GIF or JPG, you would then need to use getOutputStream() instead.

Servlet Lifecycle

- init() Once the servlet class is loaded, the servlet engine calls its init() method, and the servlet enters a 'ready' state, waiting to handle requests. Once a request comes in which the servlet is mapped to service, the service() method is executed. Depending on the HTTP 1.1 request type (typically GET or POST), the service() method then passes the request on to the doGet() method if it is a GET request, or the doPost() method if it is a POST request. Usually POST requests are used for submitting data, while GET requests are for retrieving data.

- destroy() Servlet engines are not required to keep the servlets in memory. If they should decide to unload a servlet to conserve resources, or for whatever other reason, the servlet's destroy() method will be called. This allows the servlet to do things like save its current state and release resources before being unloaded.
Quick Example

Here's a quick example of how a simple servlet works (Hello.java).

import javax.servlet.http.*;
import javax.servlet.*;
import java.io.*;

public class Hello extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // prepare the response's content type (before getting the writer)
        response.setContentType("text/html");
        // get a PrintWriter from the response object
        PrintWriter out = response.getWriter();
        // get the IP address of the client
        String remoteAddress = request.getRemoteAddr();
        // print to the output stream!
        out.println("Hello there, web surfer from <b>" + remoteAddress + "</b>");
    }
}

The servlet is invoked when a URL mapped to the servlet (like http://[domain]/servlet/Hello) is requested. If it is not already loaded in the JVM, the servlet engine loads the servlet and calls the init() method. Now the servlet is 'ready' to handle requests, and the service(HttpServletRequest, HttpServletResponse) method is called. The service() method inspects the request object and passes the HttpServletRequest and HttpServletResponse objects to the appropriate handler method. In this case, this servlet is only equipped to handle GET requests. If a POST or some other HTTP 1.1 request were sent to the servlet, the client browser would get a "<METHOD> not supported by this URL" error message. The servlet finishes executing the doGet() method and then waits for another request to service.
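Behind the scenes, getParameter() relies on the engine having already parsed the QUERY_STRING or POST body into name/value pairs. Here is a rough, pure-JDK sketch of that parsing step — an illustration of the idea, not JServ's actual implementation:

```java
import java.net.URLDecoder;
import java.util.HashMap;
import java.util.Map;

public class QueryStringDemo {

    // Split the query string on '&' and '=', URL-decoding each piece,
    // which is roughly what must happen before getParameter(name) can work.
    static Map<String, String> parse(String query) throws Exception {
        Map<String, String> params = new HashMap<>();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            String value = kv.length > 1 ? URLDecoder.decode(kv[1], "UTF-8") : "";
            params.put(URLDecoder.decode(kv[0], "UTF-8"), value);
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        // '+' decodes to a space, just as a browser encodes form input
        Map<String, String> params = parse("name=web+surfer&lang=en");
        System.out.println(params.get("name"));
        System.out.println(params.get("lang"));
    }
}
```

In a real servlet you never write this yourself; the engine does it once per request and hands you the result through getParameter().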
http://www.devshed.com/c/a/java/getting-started-with-java-servlets-using-apache-jserv/
import some.modules

def ineSomeFunctions():
    pass

class Whatever:
    pass

def main():
    ineSomeFunctions(Whatever())

if __name__ == '__main__':
    main()

This works because the global __name__ is set to "__main__" when evaluating the code in the file invoked on the command line. This has a problem, though. It also puts all of those functions and classes into a module named "__main__". Sometimes this isn't an issue, but usually it will become one. So what should you do instead? This:

if __name__ == '__main__':
    import mymodule
    raise SystemExit(mymodule.main())

import some.modules

def ineSomeFunctions():
    pass

class Whatever:
    pass

def main():
    ineSomeFunctions(Whatever())

It's probably possible to do even better than this, but even this simple change buys a lot - suddenly no more __main__ wackiness. So, do it this way!

I'm not sure either of how this could become a problem... would you please give us an example? In the solution you offer, are there two different files? The first one would end after the 'raise' and the second one begins with the 'import', but we do not see it with this blog engine. Does this solve the problem that the buildout entry point solves?

I think he's saying you can do it with one file. That file is "mymodule.py", and that conditional at the top will trigger when mymodule.py is being evaluated as __main__, and import it as "mymodule" instead. It sounds a little weird that Python would evaluate the same module twice for an import, as mymodule and __main__, but perhaps that's one of the quirks that this is attempting to defend against.

Hi Devin, I agree completely. Mutable global state is a bad, bad thing. :) Part of my motivation for this post, though, was to suggest a very simple alteration to a common idiom which results in a slight improvement in the resulting behavior. On the other hand, it's a *big* job to educate someone enough so that they realize they should stop relying on globals. Introducing a second source file is also a good way to address this.
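The "evaluate the same module twice" observation is easy to demonstrate. This sketch writes a hypothetical self-importing script named dupe.py to a temporary directory and runs it: the file is evaluated once as __main__ and once as dupe, so each copy gets its own counter global and they silently diverge:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Hypothetical single-file script that imports itself under its module name.
script = textwrap.dedent("""\
    import dupe  # re-imports this same file as a second module object

    counter = 0

    if __name__ == '__main__':
        counter = 99
        # The imported copy still has its own, untouched global:
        print(counter, dupe.counter)
""")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "dupe.py")
    with open(path, "w") as f:
        f.write(script)
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, cwd=d
    )

print(result.stdout.strip())  # → 99 0
```

Any code that checks or mutates one copy of that global sees nothing of the other copy, which is exactly the kind of bug the restructured idiom avoids.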
But again, that's a slightly larger change than the one I proposed here. I wholeheartedly endorse that approach, but didn't cover it here because it's not quite as simple. By the way, there's also another class of problems that this change is meant to address. Even if you don't have mutable globals, defining functions and classes in the __main__ module gives them a funny name - __main__.Whatever. Using the definition imported from mymodule fixes that problem as well. This most often comes up as an issue when someone tries to pickle or otherwise serialize something from a __main__ module. A name like __main__.Whatever is much less likely to result in something that's recoverable, as compared to mymodule.Whatever. This problem is also fixed by your suggested solutions, of course. So I'll again second everything you said. :) By the way, I certainly do find this to be a common idiom. The Python standard library itself contains 674 occurrences of it! I think this is quite unfortunate. Hi Alan, I think Devin did a good job explaining one of the possible problems - two copies of some global mutable state diverging - in his comment above. In my reply to him, I mentioned another - that of naming and how this can interact poorly with serialization. Does that help? Yep, Kevin inferred my intent correctly. There is only one file in the solution I proposed. That's why it's raising SystemExit - to avoid having execution continue on through the rest of the file (plus it adds a feature - letting main specify the exit code, but that's secondary). I haven't used buildout, but I suspect that the entry point feature does solve this problem as well, and probably better than I did in my post. :) Since the buildout entry point takes responsibility for handling the script, it removes the need to have a __main__ check in your code, and I assume also removes the case where the implementation module is evaluated as the __main__ module.
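The serialization point made above is also easy to see directly. A class defined in a directly-run file lives in the module named __main__, and pickle records that module name in the stream, so nothing outside that particular entry point can recover it (Whatever here stands in for any class):

```python
import pickle

class Whatever:
    pass

payload = pickle.dumps(Whatever())

# pickle stores classes by reference: the defining module's name plus the
# class name are baked into the byte stream.
print(Whatever.__module__)                        # "__main__" when run directly
print(Whatever.__module__.encode() in payload)    # True
```

Moving the definition into mymodule makes the stream say mymodule.Whatever instead, which any program that can import mymodule can un-pickle.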
http://as.ynchrono.us/2009/07/how-to-define-main-entry-point-into_20.html?showComment=1248205518000
You Speak Cronish?<p>A while back, I spoke about writing a <a href="">cron dsl</a>. Well, some progress has been made.</p> <a name='more'></a> <h2>Invocation</h2> <p>How do you talk about cron? Do you say that a job runs <em>every day at midnight</em>? Well, if you do, then you need no further instructions on how to create your crons programmatically. Some examples of crons:</p> <p><strong>Update</strong>: I should have run these examples in the console before posting; the original output was wrong.</p> <pre class="brush: scala; toolbar: false;">"Every day at midnight".crons == "0 0 * * *"<br />"Every 14 days at midnight".crons == "0 0 */14 * *"<br />"Every day on the weekday in June, July, and August at 3:30".crons == "30 3 * 6,7,8 1-5"<br /></pre> <p>We all speak cronish to some degree!</p> <h2>The Syntax</h2> <p>One night, I decided to break down a cron statement into some determinable blocks. I came up with the following <a href="">grammar rules</a>:</p> <ol><li>Incrementing statements</li><li>Connectors</li><li>Values / Descriptors</li></ol> <h3>Incrementing Statements</h3> <p><em>Incrementals</em> is the keyword <em>every</em>. This produces the famous *. Any numeric value you punch in after <em>every</em> makes it one of these: <em>*/n</em>. A cron statement <em>must</em> start with one of these. Therefore, the shortest incremental time you can do is something like:</p> <pre class="brush: scala; toolbar: false;">// second, minute, hour, day of month, month, day of week, year<br />"Every second".cron == Cron("*", "*", "*", "*", "*", "*", "*")<br /></pre> <h3>Connectors</h3> <p>The connectors I've chosen help make an English sentence regarding a cron job clear. Here's the run down:</p> <ol><li><code>at</code> refers only to time, which includes seconds, minutes, and hours.
ie: <em>at 4am and 4pm</em></li><li><code>on</code> refers to day values, particularly day of week values. ie: <em>on Friday, Saturday, and Sunday</em></li><li><code>in</code> refers to month values. ie: <em>in January to May</em></li><li><code>on the nth</code> refers to day of month values. ie: <em>on the 1st, 2nd, 3rd, and 4th day</em></li><li><code>in the year n</code> is special syntax for a year value.</li></ol> <p>Connectors can be strung together with other connectors or incrementals. Some examples would be:</p> <pre class="brush: scala; toolbar: false;">"Every day at midnight in every month".crons ==<br />"Every month at midnight on every day".crons<br /><br />"every year in July on the weekend at 1pm-4pm".crons ==<br />"every day on the weekend at 1pm-4pm in July".crons<br /></pre> <p>Just follow the simple rules, and it's pretty easy to build your time.</p> <h3>Values / Descriptors</h3> <p>Descriptors are just more keywords we use in English to describe something. These are:</p> <ol><li><code>midnight</code> and <code>noon</code></li><li><code>n:n:n</code> for a full clock description.</li><li><code>other</code> in the incrementals instead of <code>2</code>.</li><li><code>am</code> and <code>pm</code> for hours.</li><li><code>st</code>, <code>nd</code>, <code>rd</code> and <code>th</code> for days in the month.</li><li><code>weekend</code> and <code>weekday</code></li><li><code>last</code> applies for day of week and day of month.</li></ol> <p>The values are either numeric or English values for fields. Days are Monday - Sunday, months are January - December, and days of the month are simply numbers.</p> <h2>Scala Cron Job</h2> <p>Describing a cron job in pure Scala is also done through DSL-style syntax. The syntax is heavily borrowed from sbt's task creation.</p> <pre class="brush: scala; toolbar: false;">val payroll = task {<br /> println("Getting paid today ... 
with virtual cash!")<br />} executes "every Friday on the last day in every month"<br /><br />// And we're off!<br />// Alternatively you could have written it like this<br /><br />val visitation = task {<br /> println("Greetings, my name is Mouth Mouthman.")<br />}<br /><br />visitation executes "every Wednesday at 6pm"<br />visitation executes "every Sunday at 3pm"<br /><br />// Kill a job with stop<br />payroll stop // Oh noes!<br /></pre> <h2>Where do we go from here?</h2> <p>There's a lot of work that still needs to happen before this could be released. <a href="">Cronish</a> is freely available at my github. I'm working towards a stable release.</p> <p>-- Philip Cali</p><h1>Problems</h1><p>I ran across an awesome blog post today titled <a href="">Programming Problems To Improve Your Language Skills</a>. I haven't actually written anything in Haskell yet, but I thought that solving the <a href="">Ninety-Nine Lisp Problems</a> was a great way to start. That's the point of this post. There is, however, a goofy Scala snippet that's in order first.</p> <a name='more'></a> <h2>Scala Snippet</h2> <p>It's a funny thing, logical operators. In php and python, if you want to test that <code>this</code> something is true and <code>that</code> something is true, you'd simply write <code>if this and that</code>. Very straightforward. Those of us coming from C understand that <code>&&</code> and <code>||</code> represent <code>and</code> and <code>or</code> respectively. I never actually questioned it... that's just the way it was.</p> <p>My wife is taking a beginner CSC class at university, and when she saw <code>&&</code>, she simply read it as <em>and-and</em>.</p> <p><em>What does "and-and" mean? And I've never seen those vertical things before...</em></p> <p>A very natural response. In Scala, we can change this! 
</p> <pre class="brush: scala; toolbar: false;">class BooleanExpression(origin: Boolean) {<br /> def and (expr: => Boolean) = origin && expr<br /> def or (expr: => Boolean) = origin || expr<br />}<br /><br />implicit def boolean2BooleanX(some: Boolean) =<br /> new BooleanExpression(some)<br /></pre> <p>Now your conditionals sound very natural:</p> <pre class="brush: scala; toolbar: false;">if (file.exists and something)<br /> println("Yeeeeee")<br /><br />val x = 5<br />if(x > 4 and (x < 10 or x > 100))<br /> println("We got it anyway")<br /></pre> <p>It is important to note that this is less efficient. It is, however, cool.</p> <h2>Haskell Problems</h2> <p>And here's the first 20 problems, almost. I skipped a couple, but I plan on going back to them. I tried not to use the pre-defined list functions from <code>Prelude</code>, though using them doesn't hurt when the goal is to learn a language.</p> <pre class="brush: scala; toolbar: false;">-- problem 1<br />my_last :: [a] -> a<br />my_last (x:[]) = x<br />my_last (x:xs) = my_last xs<br /><br />-- problem 2<br />my_but_last :: [a] -> [a]<br />my_but_last (x:y:[]) = [x, y]<br />my_but_last (x:rest) = my_but_last rest<br /><br />-- problem 3<br />element_at :: [a] -> Int -> a<br />element_at xs i = head [y | (x, y) <- zip [0..] 
xs, x == i]<br /><br />-- problem 4<br />count :: [a] -> Int<br />count xs = foldl (\acc x -> acc + 1) 0 xs<br /><br />-- problem 5<br />rev :: [a] -> [a]<br />rev [] = []<br />rev (x:xs) = rev xs ++ [x]<br /><br />-- problem 6<br />palindrome :: (Eq a) => [a] -> Bool<br />palindrome xs = rev xs == xs<br /><br />-- problem 7<br /><br />-- problem 8<br />compress :: (Eq a) => [a] -> [a]<br />compress [] = []<br />compress (x:xs) = [x] ++ compress(dropWhile(== x) xs)<br /><br />-- problem 9<br />pack :: (Eq a) => [a] -> [[a]]<br />pack [] = []<br />pack (x:xs) = [[x] ++ takeWhile (== x) xs] ++ pack(dropWhile(== x) xs)<br /><br />-- problem 10<br />encode :: (Eq a) => [a] -> [(Int,a)]<br />encode xs = [(count ls, head ls) | ls <- pack xs]<br /><br />-- problem 11<br /><br />-- problem 12<br />decode :: (Eq a) => [(Int, a)] -> [a]<br />decode xs = concat (map (\(times, i) -> repl [i] times) xs)<br /><br />-- problem 13<br /><br />-- problem 14<br />dupe :: [a] -> [a]<br />dupe [] = []<br />dupe (x:xs) = x : x : dupe xs<br /><br />-- problem 15<br />repl :: [a] -> Int -> [a]<br />repl [] _ = []<br />repl (x:xs) n = map (\_ -> x) [1..n] ++ repl xs n<br /><br />-- problem 16<br />dropr :: [a] -> Int ->[a]<br />dropr xs i <br /> | count xs < i = xs<br /> | otherwise = let (headr, (_:rest)) = splitAt (i - 1) xs in headr ++ dropr rest i<br /><br />-- problem 17<br />split :: [a] -> Int -> ([a], [a])<br />split xs n<br /> | n == 0 = ([], xs)<br /> | count xs < n = (xs, [])<br /> | otherwise = (take n xs, drop n xs)<br /><br />-- problem 18<br />slice :: [a] -> Int -> Int -> [a]<br />slice xs from to<br /> | from == 1 && to == count xs = xs<br /> | from > to = []<br /> | otherwise = [x | (index, x) <- zip [1..] 
xs, index >= from && index <= to]<br /><br />-- problem 19<br />rotate :: [a] -> Int -> [a]<br />rotate xs n = let index = if n < 0 then count xs + n else n<br /> (first, second) = split xs index <br /> in second ++ first<br /><br />-- problem 20<br />remove :: [a] -> Int -> [a]<br />remove xs n = [x | (i, x) <- zip [1..] xs, i /= n]<br /></pre><h1>it All Wrong</h1><div><p>Development comes in spurts. Even though I'm in the middle of several personal projects, sometimes a simple change in pace really helps. I took a break from coding to dive into some <a href="">Haskell</a> (some break). I ran across an online book, published by Miran Lipovaca, that looked fun and interesting enough to learn the language: <a href="">Learn You a Haskell for Great Good</a>.</p> <p>I read through five chapters last night, only to discover that I <em>should</em> have learned Haskell before learning Scala. I now see how many concepts Scala has pulled from Haskell. My opinions on the experience thus far will be expounded on after the jump.</p> <a name='more'></a> <h2>For the Great Good, Indeed</h2> <p>The more I understand about Haskell, the more I love it. Due to my time learning Scala, I feel very at home with the language. The point of this post is to observe similarities between the two, as I go about learning.</p> <p>The last chapter I read was about recursion, something I understand very well at this point. 
As Miran describes, the poster child for Haskell is the quicksort implementation.</p> <pre class="brush: scala; toolbar: false;">quicksort :: (Ord a) => [a] -> [a] <br />quicksort [] = [] <br />quicksort (x:xs) = <br /> let smallerSorted = quicksort [a | a <- xs, a <= x] <br /> biggerSorted = quicksort [a | a <- xs, a > x] <br /> in smallerSorted ++ [x] ++ biggerSorted<br /></pre> <p>Now a side by side Scala implementation...</p> <pre class="brush: scala; toolbar: false;">// Need this for the Haskell-like Ord comparisons<br />import Ordering.Implicits._<br /><br />def quicksort[A: Ordering](ls: List[A]): List[A] = ls match {<br /> case Nil => Nil<br /> case x :: xs =><br /> val (smaller, bigger) = xs partition(_ <= x)<br /> quicksort(smaller) ++ List(x) ++ quicksort(bigger)<br />}<br /></pre> <h2>The Breakdown</h2> <p>Time for a line by line comparison. </p> <pre class="brush: scala; toolbar: false;">// Haskell<br />quicksort :: (Ord a) => [a] -> [a] <br />// Scala<br />def quicksort[A: Ordering](ls: List[A]): List[A]<br /></pre> <p>The first line is Haskell's way of defining a function. <code>quicksort</code> takes a single parameter, which is a list of type <code>a</code> defined to be <code>Ord</code>. It will return a list that is type <code>a</code>. More specifically, it will return a sorted one.</p> <p>The Scala implementation does just as well.</p> <pre class="brush: scala; toolbar: false;">// Haskell<br />quicksort [] = [] <br />// Scala<br />ls match {<br /> case Nil => Nil<br /></pre> <p>The seeming redefinition of <code>quicksort</code> is simply constructing a pattern match. That bit of Haskell specifically states: <em>Given an empty list, an empty list will be returned</em>.</p> <p>In Scala, you have to be a bit more explicit in your declaration of a pattern match, initiated by the keyword <code>match</code>. A <code>match</code> expression will return something (in this case, we've explicitly defined its type as a <code>List[A]</code>, so we better do that). 
Staying true to our <em>line</em> by line analysis, should <code>ls</code> be <code>Nil</code> (or empty), return empty.</p> <pre class="brush: scala; toolbar: false;">// Haskell<br />quicksort (x:xs) = <br />// Scala<br /> case x :: xs =><br /></pre> <p>Should the input parameter be a non-empty list, extract the head of the list from its remainder, and do some work. A singleton list will be extracted thusly:</p> <pre class="brush: scala; toolbar: false;">x:[]<br /></pre> <p>Scala supports extraction in its pattern matching very similarly. If a Scala object defines an <code>unapply</code> method then you can take advantage of this behavior. Once again, we see how Scala caters to this way of thinking.</p> <pre class="brush: scala; toolbar: false;">// Haskell<br /> let smallerSorted = quicksort [a | a <- xs, a <= x] <br /> biggerSorted = quicksort [a | a <- xs, a > x] <br />// Scala<br /> val (smaller, bigger) = xs partition(_ <= x)<br />// Optionally rewritten like so<br /> val smaller = quicksort(xs.filter (_ <= x))<br /> val bigger = quicksort(xs.filter (_ > x))<br />// Scala for comprehensions<br /> val smaller = quicksort(for(a <- xs; if a <= x) yield(a))<br /> val bigger = quicksort(for(a <- xs; if a > x) yield(a))<br /></pre> <p>Haskell is binding some variables to use in its <code>in</code> statement, which we'll go into in a minute. Haskell has incredible list comprehensions. You can filter, pattern match, use <code>let</code> expressions, etc.</p> <p>Scala's answer to Haskell's list comprehensions is the <code>for</code> comprehension. For this example, though, it's a bit overkill. 
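</p> <p>For the curious, the <code>for</code> version above is just sugar; the compiler rewrites it into plain method calls (a rough sketch; the exact desugaring varies slightly by Scala version):</p> <pre class="brush: scala; toolbar: false;">// for(a <- xs; if a <= x) yield(a)<br />// is rewritten by the compiler into roughly:<br />xs.withFilter(a => a <= x).map(a => a)<br /></pre> <p>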
<code>List</code>s in Scala inherit from a <code>Traversable</code> type which comes with powerful operations: <code>map</code>, <code>zip</code>, <code>filter</code>, <code>reduce</code>, to name a few.</p> <p>We're almost done.</p> <pre class="brush: scala; toolbar: false;">// Haskell<br /> in smallerSorted ++ [x] ++ biggerSorted<br />// Scala<br /> quicksort(smaller) ++ List(x) ++ quicksort(bigger)<br />// Or other implementations above<br /> smaller ++ List(x) ++ bigger<br /></pre> <p>Haskell will now use the bindings set earlier to do its work. Haskell will do its work lazily, which is awesome.</p> <p>Scala list concatenation with other <code>List</code>s is done exactly the same way as in Haskell.</p> <p>Pretty cool. I'm looking forward to learning more.</p> <p>-- Philip Cali </p></div><h1>Documentation Server</h1><p>What does this even mean? Well, it means I'm really cheap, and lazy. It probably means I'm silly, too, when I tell you what I'm doing.</p> <p>First, a back story is in order. I was reading some internet articles the other day when, completely irrelevantly, I thought of making my <a href="">Public Dropbox</a> folder a static web site. Dropbox has a really neat desktop tool that syncs a folder on your machine to their cloud storage, <em>automagically</em>. Being that the site would be static, I started to wonder what I could even do with it. Then I realized: this is perfect for a documentation server! (Maybe not!)</p> <a name='more'></a> <h2>Step 1: Getting the Docs</h2> <p>Sbt is a wonderful build tool. It can generate really nice API documentation with a single command. The only thing about these docs is that they're not pretty to look at. <a href="">Docco</a>, on the other hand, produces some really nice looking documentation. The <a href="">circumflex crew</a> already ported their re-imagination of docco style documentation for scala. 
</p> <p>I made quick work of building an sbt plugin that simply wraps the circumflex batch docco functionality with a single <code>docco</code> action within sbt.</p> <h2>Step 2: the Plugin</h2> <p>Like I mentioned before, all this plugin (initially) would need to do is piggyback off the circumflex docco project to do batch docco processing on the project's <code>src</code> directory. Enter <a href="">sbt-cx-docco</a>.</p> <p>In a developer's project definition, they can route the docco output to a specific folder. Naturally, for this experiment, I routed it to my Public Dropbox folder on my local hard drive.</p> <p>I am thinking more about this, and I have more plans for this plugin that involve severing my circumflex dependency. You may or may not hear about this in the future.</p> <h2>Step 3: ???</h2> <p>I have no idea what else I have planned for this thing. I'm thinking about setting up a monido that watches the folder and adds the project to the list for me. This depends on how lazy I get, I guess. </p> <h2>Step 4: Profit!</h2> <p>A free static web server. Pretty good, Dropbox... Pretty good.</p> <p>-- Philip Cali</p><h1>Monido, What?</h1><p>I've been working on a little project called <a href="">Monido</a>, which is simply a monitoring service for Scala. It can monitor anything from a file, a directory, a DB table, or a web page. I think you get the picture.</p> <p>The project, however, only ships with a single reference implementation to monitor the file system for changes. I'm using it for this very post at this moment. More on that later.</p> <a name='more'></a> <h2>The Program</h2> <p>I explained in an <a href="">earlier post</a> that I write my blog entries in markdown, and store them in a git repo. I have since modified that <code>BloggerBot</code> script to only inject the html source into my clipboard when it runs. 
The command looks like:</p> <pre class="brush: scala; toolbar: false;">java -jar blogbot.jar ~/notes/posts/24.md<br /></pre> <p>Monido can automate this procedure:</p> <pre class="brush: scala; toolbar: false;">monido ~/notes/posts -e java -jar blogbot.jar<br /></pre> <p>Every time I save this post, I get:</p> <pre class="brush: scala; toolbar: false;">Writing conversion to your clipboard ....<br />Done.<br /></pre> <p>This saves me a few keystrokes!</p> <p>The <code>-e</code> argument for monido is essentially like using the <code>pipe</code> for a Unix system. The full file path of the changed file is passed to the terminal argument and immediately run. Monido defines itself as a pure JVM, cross-platform solution to file monitoring.</p> <pre class="brush: scala; toolbar: false;">monido ~/some/other/thing -e ls -l<br /></pre> <p>Monido can spawn recursive monitors, although this practice is discouraged for large directory trees. Your operating system does this for you, so you should use that instead. <a href="">Jnotify</a> is a good Java wrapper around these OS events. </p> <p>Monido is <em>not</em> just a file monitoring service, though! This is where the good part begins.</p> <h2>The Library</h2> <p>Monido's infrastructure is very flexible, being that it's composed of two (optionally three) components. These components are:</p> <ol><li><code>PulsatingComponent</code></li><li><code>MonitorComponent</code></li><li><code>ListeningComponent</code> (optional)</li></ol> <p>The explanation of these components is given on the <a href="">Monido</a> github page:</p> <blockquote> <p>A <code>PulsatingComponent</code> is something that wakes the MonitorComponent to do something. A <code>PulsatingComponent</code> could wake it at set intervals, once a day, manually, etc. The <code>MonitorComponent</code> will simply monitor whatever it was told to monitor (The <code>FileMonido</code> reference implementation monitors the file system.) 
Optionally, the Monitor can notify a client of a change (or anything else really) by making use of the <code>ListeningComponent</code>.</p> <p>Lots of moving parts that have arbitrary dependencies make it a great candidate for some DI.</p></blockquote> <p>Now, I will show you how to do a <code>DBMonido</code>, just for fun.</p> <pre class="brush: scala; toolbar: false;">import com.github.philcali.monido._<br /><br />// First we need to define the DBMonitor by extending the<br />// MonitorComponent.<br />trait DBMonitorComponent extends MonitorComponent {<br /> this: ListeningComponent[Adapter] =><br /> // Lets us connect via a db string<br /> class DBMonitor(db: String) extends SimpleMonitorDevice{<br /> def connect: Adapter = // connect to db<br /> def pulsed {<br /> // Making a fictional query<br /> val results = connect.query("SELECT COUNT(id) as count from entries")<br /> if(results("count") >= 9000) listener.changed(connect)<br /> }<br /> }<br />}<br /><br />// Define Monido now<br />object DBMonido extends Monido with ListeningComponent[Adapter]<br /> with DBMonitorComponent <br /> with PulsatingComponentImpl {<br /><br /> // This object will NOT compile until dependencies are met<br /> val listener = new MonidoListener {<br /> def changed(db: Adapter) {<br /> // Clear records<br /> db.query("TRUNCATE entries")<br /> }<br /> }<br /><br /> // the MonitorComponent expects a monitor value to be defined<br /> val monitor = new DBMonitor("jdbc:sqlite:test.db")<br /> // Next the PulsatingComponent expects a pulsar value to be defined<br /> // Sneak peek at 0.2 Scalendar syntax<br /> import com.github.philcali.scalendar._<br /> val pulsar = new Pulsar(1.day.inMillis)<br />}<br /><br />DBMonido.start<br />// And we're off!<br /></pre> <p>There is also a <code>CronPulsing</code> implementation that uses a cron string for its configuration.</p> <pre class="brush: scala; toolbar: false;">// Re-define the DB monido<br />object DBMonido extends Monido with ListeningComponent[Adapter]<br
/> with DBMonitorComponent <br /> with CronPulsing {<br /> ...<br /> // same as before. The monitoring component worked great, so<br /> // leave it alone.<br /><br /> // Sneak peek at the cron project<br />import com.github.philcali.cron._<br /> val pulsar = new CronPulsar("every Sunday at midnight".cron)<br />}<br /><br />DBMonido.start<br />// And we're off!<br /></pre> <p>So there's your <code>Monido</code>. I don't know how useful it could be to everyone else, but I find myself in need of <code>Monido</code> pretty often.</p> <p>-- Philip </p><h1>Scalendar v 1.0</h1><h3>Update</h3> <p><em>It is now hosted on both <a href="">scalendar</a>'s github page, and on <a href="">scala-tools</a>. It was approved a couple of days ago, and hopefully, other people might find it useful.</em></p> <p>As I worked on my wife's iPad app, I kept refining my Scala Calendar wrapper (which I coined Scalendar thanks to <a href="">brad</a>). It began to evolve into a really nice api, in which you work with immutable objects that wrap the Java Calendar api. While it completely interoperates with Java time, it works well enough as a stand-alone library.</p> <p>The main functionality like date traversal and property getters hasn't changed from the <a href="">last post</a>. If you're interested, read more about the changes after the jump.</p> <a name='more'></a> <h2>Construction</h2> <p>My biggest issue with the version I was working with was the ability to <em>make</em> time. 
As I used it throughout the app, I quickly realized that it was indeed the biggest issue.</p> <pre class="brush: scala; toolbar: false;">// Now to convert from strings, you need to define<br />// a SimpleDateFormat, or just use the Pattern object<br /><br />import com.philipcali.utils.calendar._<br /><br />// If you don't plan on converting strings<br />// to time, then simply import<br />// import com.philipcali.utils.Scalendar<br /><br />// A global implicit will do<br />implicit val pattern = Pattern("M-d-yyyy")<br /><br />val monthsAlive = ("11-19-1985" to Scalendar.now).delta.months<br /><br />// Creation from now<br />val now = Scalendar.now<br /><br />// "Setters"<br />// It completely integrates with java.util.Calendar<br />import java.util.Calendar._<br /><br />val endOfWorld = now.year(2012).month(DECEMBER).day(21)<br /><br />val countdown = endOfWorld to now<br /><br />"t %d days" format(countdown.delta.days) // t -667 days<br /><br />"There's only %d months left!" 
format(countdown.reverse.delta.months)<br /><br />// Optionally, the "make time" syntax:<br />val made = Scalendar(year = 2012, month = DECEMBER, day = 21)<br />val madeMore = Scalendar(year = 2012, <br /> month = DECEMBER, day = 21, hour = 23)<br /><br />made < endOfWorld // true, made 0's out anything below the day<br />madeMore < endOfWorld // false, because hour was set<br /><br />// Of course time can be built with a unix timestamp<br />val fromLong = Scalendar(Scalendar.now.time)<br /><br />// Below is an example of interaction with a DB and<br />// formatting the results via a duration traversal<br />val duration = Scalendar.now to Scalendar.now + (3 months)<br /><br />// Fictional DBO querying<br />val events = Event.find (<br /> Event.userid === "philip.cali",<br /> Event.created > duration.start.time,<br /> Event.created < duration.end.time <br />)<br /><br />// Format events for each month<br />duration.traverse(1 month) { monthD => <br /> // It's entirely possible to get more granular<br /> // but that's overkill for this example<br /> println("Events in %s" format(monthD.month.name))<br /> events.filter(e => monthD contains(e.created)) foreach(println)<br />}<br /></pre> <p>I got rid of the hasNext method, as that typically requires state, and the Scalendar class is completely immutable. I've made use of Scala 2.8 <a href="">package objects</a>, and completely realized how valuable it was when I wanted to save imports by placing my implicit conversions in there. Now clients get a rather robust date api just by importing Scalendar.</p> <p>The code will be on my <a href="">github</a> very soon. I have to rip the api out of the project I'm currently using it in and create a separate project. One gave birth to the other.</p><h1>time</h1><p>Handling time using the standard Java libraries can be a real pain. 
I am writing a webapp that uses pretty complicated date arithmetic, and while Java's <a href="">Calendar</a> object <em>works</em>, clients of the library are forced to work with a mutable object, making even simple tasks rather verbose.</p> <a name='more'></a> <p>Enter <strong>ScalaCalendar</strong> to save the day! I began writing a Scala wrapper for the Java Calendar object, and I'm mostly happy with how it turned out. Here's an example of how it works:</p> <pre class="brush: scala; toolbar: false;">// Importing the base ScalaCalendar object like so<br />// allows you to do some pretty powerful things.<br />import com.philipcali.calendar.ScalaCalendar._<br /><br />// Let's work with the simple things<br />val rightNow = new java.util.Date // or createTime() or "now"<br />val tomorrow = rightNow + 1 day<br />val nextMonth = rightNow + 1 month<br />val yesterday = rightNow - 1 day <br /><br />// Generating a duration is rather simple too<br />val span = rightNow to (rightNow + 1 week)<br /><br />rightNow isIn span // returns true<br /><br />rightNow isWeekend<br />rightNow isWeekday<br /><br />span.delta.days // returns 7<br />span.delta.months // returns 0<br /><br />// Calendar fields<br />// Your standard fields have a name method<br />// which tries to define the field in English<br />rightNow.time == rightNow.millisecond<br />rightNow.second.value<br />rightNow.minute.value<br />rightNow.hour.value<br />rightNow.day.value<br />rightNow.day.inWeek<br />rightNow.day.inYear<br />rightNow.week.value<br />rightNow.week.inYear<br />rightNow.month.value<br />rightNow.year.value<br /></pre> <p>The code will be on my <a href="">github</a> account very soon.</p> <p>-- Philip Cali</p><h1>Three Programming Languages</h1><p>I ran across <a href="">this post</a> a few months ago, which I've recently given some more thought to while making supper. (Funny how the brain works, isn't it?) 
I've come up with my "three" programming languages as of this writing, and I'd like to share them with you.</p> <a name='more'></a> <p>The author classifies the three programming languages everyone must know:</p> <ul><li>The Happiness Language</li><li>The Hack-it-out / Get Things Done language</li><li>The Bread and Butter language</li></ul> <p>I disagree with the author that there needs to be a difference between all three languages. In my experience, there's really no difference between the <strong>Happiness</strong> and <strong>GTD</strong> language. To me, if you can quickly solve the problem with the language in your head, then you can simply type it out and execute it <em>quickly</em>, thus getting it done. Anyway...</p> <p>I think there are three languages, though:</p> <ul><li>The language you can quickly solve problems with (call it Happiness, GTD, whatever). For me, this is <a href="">Scala</a>. This language makes me happy, and I recently wrote a script for work that calculates an average for a dataset rather quickly.</li><li>The language you get paid to work in. While I can solve problems the fastest in Scala, I'm glad there's a distinction for me. I get paid to work in PHP, and it makes my stomach turn a little to define the body of an anonymous function with a string (pre 5.3 code base). I'm glad my precious Scala is free from the political stress and red tape at my 9 to 5. This keeps Scala safe and happy.</li><li>The language you desire to learn more about. Having used Scala for more than two years, I can honestly say I wish I knew Scala better than I do now. I learn something new about the language on every new personal project I work on, and I feel I'm only starting to scratch the surface of some greater potential.</li></ul> <p>So there you have it. I do believe broadening your programming language knowledge makes you a more versatile problem solver, <em>to a certain extent</em>. 
Ultimately, I think problem solving skills are sharpened like any other skill set: by <em>solving problems</em>. People just use learning a programming language as an excuse to tackle a new problem differently.</p> <p>On a slightly different note, I laughed aloud when someone commented on the aforementioned blog post: <em>Javascript is your happiness language? Are you a serial killer?</em> </p> <p>I would say the same about PHP, as I think looking at variables starting with the dollar sign is torture to the visual sense. When I program in PHP the goal is to make it as terse as possible with a billion helper functions to avoid those awful dollar signs. </p> <p>-- Philip Cali </p><h1>Who needs a Template Engine?</h1><p>My wife commissioned me for another web app this weekend, and we spec'd it out using an app on her iPad (we both had a lot of fun). I wanted to prototype this sucker as quickly as possible, and to be frank, the template engine I wrote for <a href="">Scalatra</a> wasn't cutting it. It dawned on me rather suddenly, something my <a href="">former co-worker</a> mentioned to me in passing. </p> <p>"Scala supports in-line XML literals... Why not just go with that?"</p> <p>And here I am almost a year later, saying to myself: <em>Huh... you have an excellent point....</em></p> <a name='more'></a> <p>Why not, indeed. Using raw Scala <em>as</em> my template engine, I get some great benefits right out of the box, which I will elaborate on a little later. First I want to address the big negative aspect.</p> <h3>Web Designers would hate me</h3> <p>If a web designer (non-Scala coder) had to build an interface using this method, they would probably shoot me, or want to anyway. The purpose of a web template is explained quite beautifully <a href="">here</a>. And while this newly proposed method achieves that goal, they'd have to compile Scala code to see changes. Ouch!</p> <p>Let's take a look at what I mean, though. 
Let's just say, a designer was instructed to format a blog post. All they would have to do is this:</p> <pre class="brush: scala; toolbar: false;">object BlogPost extends Template[Post] {<br /> def template(post: Post) = {<br /> <div class="post"><br /> <div class="post-header"><br /> <h1>{ post.title }</h1><br /> </div><br /> <div class="post-body"><br /> <p><br /> { post.body }<br /> </p><br /> </div><br /> </div><br /> }<br />}<br /></pre> <p>Now in a Scalatra servlet, rendering this template is rather simple:</p> <pre class="brush: scala; toolbar: false;">get("/single/:id") {<br /> val post = Post.get(params("id"))<br /> BlogPost(post)<br />}<br /></pre> <h3>Scala *is* the Template Engine</h3> <p>Why bother building functionality for conditionals and loops, when the language supports this already. All I had to do was build a simple inheritance system that makes it easy to wrap templates in templates (for headers and footers and such). I was amazed at some of the implicit benefits this granted me:</p> <ol><li>I know if my template is valid <strong>at compile-time</strong>! Scala won't even compile invalid XML literals. Very cool.</li><li>My templates can be <strong>Unit Tested</strong>! Not sure how useful this is yet, but it's possible :P</li><li>I don't have to learn another web template language! 
Hoo-ray!</li></ol> <p>Now, I'll spend less time talking about this switch, and more time prototyping the app.</p> <p>Just for kicks, let's see more of it in action:</p> <pre class="brush: scala; toolbar: false;">object HomePage extends WrappedTemplate {<br /> val master = MasterTemplate <br /><br /> def wrapped(context: Context) = context match {<br /> case Context(_, posts: List[Post]) =><br /> <div id="content"><br /> <div id="posts"><br /> { posts.map(BlogPost(_)) }<br /> </div><br /> </div><br /> }<br />}<br /><br />object MasterTemplate extends ParentTemplate {<br /> // I don't really like this, but it works for now<br /> def template(data: (WrappedTemplate, Context)) = data._2 match {<br /> case Context(title: String, _*) =><br /> <html><br /> <head><br /> <title>{ title }</title><br /> </head><br /> <body><br /> { wrapped(data) }<br /> </body><br /> </html><br /> }<br />}<br /><br />// In Scalatra servlet<br />...<br />get("/") {<br /> val posts = Post.find()<br /> HomePage(Context("title" -> "Home", "post" -> posts))<br /> // Optionally, you can use this shorthand<br /> // HomePage(NoKeyContext("Home", posts))<br />}<br /></pre> <p>(FYI, I'm not prototyping a blog. Code here is strictly for example purposes.)</p> <p>-- Philip Cali</p><h1>Remote Control v0.1 Alpha</h1><p>The remote control package has finally reached an alpha stage today. The library gives client programs a way to change a server's behavior on the fly, run arbitrary code on command, and receive values from those commands.</p> <a name='more'></a> <p>It all started <a href="">here</a>, which brought us <a href="">there</a>. The <a href="">last post</a> in the series provided a way to inject a program's context into potentially dynamic portions of the server application. 
I have since refactored the library to make the client side interaction much easier.</p> <p>I thought it would be easier for me to describe the library calls and use cases with code samples.</p> <h2>Dynamic</h2> <p>The <code>dynamic</code> call has been refactored into something that makes more sense than the previous way.</p> <pre class="brush: scala; toolbar: false;">// Back to the old way, sweet!<br />dynamic('code) {<br /> println("Potentially dynamic code, without context preservation")<br />}<br /><br />// With Context<br />valupdate('code) { ctx =><br /> println("There is no context, so I will speak in riddles")<br />}<br /><br />// The update pulling values from the context<br />// with the key<br />update('code) { ctx =><br /> val name: String = ctx("name")<br /> println("%s and I" format(name))<br />}<br /><br />// The update applying a partial function <br />// and uses Scala extractors to pull values<br />update('code) {<br /> case Context(name: String) => <br /> println("Your name is not %s" format(name.reverse))<br />}<br /></pre> <h2>Reset</h2> <p>I've given the client the ability to reset the dynamic portion to its default state. A simple call to <code>reset</code> will do it.</p> <pre class="brush: scala; toolbar: false;">// Back to "My name is Charlie Murphy"<br />reset('code)<br /></pre> <h2>Prepend and Append</h2> <p>There may be times a client doesn't want to change the dynamic portion, only add something directly before or after it. That's where <code>prepend</code> and <code>append</code> come in handy. 
For example:</p> <pre class="brush: scala; toolbar: false;">// Takes a Context like update<br />prepend('code) { ctx =><br /> println("Hey!")<br />}<br /><br />append('code) { ctx =><br /> println("What's yours?")<br />}<br /><br />// When dynamic is called it will now print<br />// Hey!<br />// My name is Charlie Murphy<br />// What's yours?<br /></pre> <h2>Remote Control</h2> <p>That's it as far as changing a server's behavior from a client program. The next part of this post is to show how to <em>control</em> a server from a client program. The main difference between changing behavior and controlling a server is what the client can expect from a command. If you are still confused, think of it this way:</p> <p>Suppose you worked in the fast food industry, and your manager told you that, until further notice, you should inform customers who order a specific menu item, "We don't serve your kind here." Star Wars references aside, you, the application, changed your original behavior from serving everyone to rejecting those who order a specific item on the menu. Let's change gears now.</p> <p>Your manager is thirsty, and yells, "Hey, Mikie, pass me a cold one!" He was looking at you when he spoke, so even though your name isn't <strong>Mikie</strong>, you do what you're told immediately.</p> <p>That is the key difference between behavioral modifications and commands.</p> <p>Any server application can handle remote commands by inheriting from the <code>RemoteControlled</code> trait.
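<p>To make the distinction concrete, here is a tiny, local-only Scala sketch. The names <code>ToyServer</code>, <code>update</code>, and <code>command</code> here are my own illustration, not the library's actual API: an update swaps the stored block that future calls will run, while a command executes once and hands its value straight back to the caller.</p>

```scala
import scala.collection.mutable

// Illustrative only: a toy "server" holding swappable behavior blocks
// (behavioral modifications) next to one-off commands that return values.
object ToyServer {
  // Behavior blocks keyed by symbol; an update swaps the stored block
  private val behaviors = mutable.Map[Symbol, () => String](
    'greet -> (() => "My name is Charlie Murphy"))

  // Behavioral modification: every future run of 'key uses the new block
  def update(key: Symbol)(block: => String): Unit =
    behaviors(key) = () => block

  def run(key: Symbol): String = behaviors(key)()

  // Command: executed exactly once, result handed back immediately
  def command[A](block: => A): A = block
}
```

<p>The real library does this over remote actors rather than a local map, but the contract is the same: updates change what the server will do from now on, while commands ask it to do something right now and return the result.</p>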
An easier way to see this is through the scala irc robot.</p> <pre class="brush: scala; toolbar: false;">import remote.RemoteControlled<br /><br />class Scalabot extends Pircbot with RemoteControlled {<br /> override def</a> <p>The signature changed from:</p> <pre class="brush: scala; toolbar: false;">dynamic('symbol) {<br /> // Block of code<br />}<br /></pre> <p>To:</p> <pre class="brush: scala; toolbar: false;">dynamic('symbol, Context("variable" -> variable)) { ctx =><br /> // Block of code<br />}<br /><br />// Or if context is unimportant<br />dynamic('symbol) { ctx =><br /> // Block of code<br />}<br /></pre> <p>The update signature hardly changed. A context is always passed into a dynamic block now. It could be an empty context, but a context nonetheless.</p> <pre class="brush: scala; toolbar: false;">update('symbol) { ctx =><br /> if(!ctx.params.isEmpty) {<br /> // Grab stuff from context, and operate<br /> val x = ctx("variable")<br /> // do something with x<br /> }<br />}<br /></pre> <p>I'll show you how I used it to turn my ScalaBot against its master. It's always more fun to witness the AI get a mind of its own, and destroy everything. Taking a look at the dynamic block from yesterday, let's preserve the context.</p> <pre class="brush: scala; toolbar: false;">dynamic('message, Context("bot" -> this, "sender" -> sender)) { ctx =><br /> println("A message posted from %s" format(sender))<br />}<br /></pre> <p>Once again, I chatted a couple times, before I turned my bot against me.</p> <pre class="brush: scala; toolbar: false;">import ircbot.ScalaBot<br />import hotcode.HotSwap._<br /><br />object Main extends Application {<br /> update('message) { ctx =><br /> val bot: ScalaBot = ctx("bot")<br /> val sender: String = ctx("sender")<br /><br /> sender match {<br /> case "schmee" => bot.sendMessage("#pircbot", "schmee is a loser. 
Don't listen to him")<br /> case _ => println(sender)<br /> }<br /> }<br />}<br /></pre> <p><a href=""><img src="" alt="Screenshot" /></a></p> <p>This time I truly changed the bot's behavior, because I have a reference to the bot itself. I knew before going into this that the presentation of this library would take a hit, but I thought the end result was worth it. I've decided to <em>sell</em> this library a little differently. It's more of a remote control library. Hot swapping is not possible with it, but I like what it does.</p> <p>I have an idea for some additional code to allow more remote controlling. Will get to that later.</p> <p>-- Philip Cali</p> <h3>Solutions via ScalaBot</h3> <p>I was talking about a pseudo hot swapping solution in my <a href="">last post</a>. I briefly spoke of its potential, but at the time of its writing, I wasn't even sure it would solve problems. I decided a good way for me to find out if it was capable of solving those problems would be to build something with it.</p> <a name='more'></a> <p>The first decent scenario I could think of was to program a little ircbot whose behavior I could change on the fly. The more I thought about it, the more I was convinced that it would be a perfect fit. So here's the spiel:</p> <p>Let's say I want to read the #scala irc logs every day. Yes, I'm that much of a dork. This much of the requirement I knew going into building the program, and I hurried to launch the app. Well, after three days of reading logs, I'd like for my little robot to do something a little different upon each message it receives. In a normal situation, this would require me to log out of the channel, and put the updated version up. I <em>could</em> potentially miss a nugget from <a href="">Martin Odersky</a>, which could have horrible consequences.
So, the best case scenario would be if I could have my changes without missing a beat.</p> <p>Now that I got my scenario lined up, I went to look for a Java irc client out there. After evaluating <a href="">several</a> irc <a href="">libraries</a> out <a href="">there</a>, I ran across <a href="">PircBot</a>. It was hosted on maven, which was a win for me and sbt.</p> <p>I threw together this simple class (and look, it uses my <a href="">db adapter</a>!), which simply logs into an irc channel, and stores every message into a sqlite database:</p> <pre class="brush: scala; toolbar: false;">import org.jibble.pircbot.PircBot<br />import hotcode.HotSwap._<br /><br />class ScalaBot(name: String) extends PircBot {<br /> this.setName(name)<br /> this.setLogin("philip")<br /><br /> val db = Adapter("org.sqlite.JDBC", "jdbc:sqlite:chats.db")<br /><br /> db.exec("""CREATE TABLE IF NOT EXISTS chats (<br /> timestamp INTEGER,<br /> sender STRING,<br /> msg TEXT)""")(_.execute()) <br /><br /> override def onMessage(c: String, sender: String, login: String, h: String, msg: String) {<br /> db.exec("INSERT INTO chats VALUES (?, ?, ?)") { stm =><br /> stm.setLong(1, System.currentTimeMillis)<br /> stm.setString(2, sender)<br /> stm.setString(3, msg)<br /> stm.execute()<br /> }<br /><br /> dynamic('message) {<br /> println("Post hook for dynamic message processing")<br /> }<br /> } <br />}<br /></pre> <p>Yes! Raw SQL... <a href="'s_advertising">I'm lovin' it</a>! (<em>Clears throat</em>)</p> <p>I want to draw your attention to the dynamic call. Notice, right now, it's wrapping a trivial println statement. I'm the kind of programmer who programs back-doors in every application. Can you imagine the security risks? I digress...</p> <p>I launch the application as-is, because it solves my first problems. I now want to change its behavior after it stores a message. Here's where we get to the good stuff.</p> <p>I used freenode's <a href="">web app</a> to log in as a human.
There's an empty channel (thankfully) at #pircbot where you can test your bots. A playground full of robots. I chatted some random things as the user "schmee", and sbt printed out what you'd expect:</p> <pre>[info] Post hook for dynamic message processing<br />[info] Post hook for dynamic message processing<br /></pre> <p>Great. So at least I know it's running what's in the dynamic block. The following segment is going to change that (fingers crossed).</p> <pre class="brush: scala; toolbar: false;">import hotcode.HotSwap._<br /><br />object Main extends Application {<br /> update('message) {<br /> val db = Adapter("org.sqlite.JDBC", "jdbc:sqlite:chats.db") <br /> val message = db.single("SELECT * FROM chats ORDER BY timestamp DESC LIMIT 1")(_.executeQuery())<br /><br /> println(message)<br /> }<br />}<br /></pre> <p>I run that, and chat a couple times more. Lo and behold, my sbt output:</p> <pre>[info] Map(timestamp -> 1294867472648, sender -> schmee, msg -> Something special)<br />[info] Map(timestamp -> 1294867517142, sender -> schmee, msg -> trying it again)<br /></pre> <p>It works, but there are some notable problems. The context is all lost. It works great if I'm reading from a datastore, or changing values in the datastore. I can't really change the bot itself though. I have some ideas about how to address this, but I think the presentation is going to take a hit. At any rate, I was pleased to see it work this well.</p> <h3>Hot Swapping (Pseudo Hot Swap)</h3> <p>I wrote some library code recently, triggered by a <a href="">post I read</a>. The gist of the author's post was to compare Scala and Erlang. I actually agree with him. I want to bring your attention to where I come into this: <a href="">hot code swapping</a>.</p> <a name='more'></a> <p>I created a library capable of swapping out dynamic code, particularly code run on a server.
If I want to change the behavior of some code without having to recompile the code and reboot the server, I should be able to do just that.</p> <p>My first attempt at this was more of a proof of concept, but it <em>does</em> work. Right now, it's more of a pseudo hot swap solution, involving the remote actors in the standard Scala library. </p> <p>Here's an example of the client code in question.</p> <pre class="brush: scala; toolbar: false;">import com.philipcali.hotcode.HotSwap._<br /><br />def log(s: String) { <br /> // logs text to file appender <br />}<br /><br />for(i <- 1 to 10) {<br /> println("Iteration %d" format(i))<br /> dynamic('main) {<br /> log("Potentially dynamic code")<br /> }<br /><br /> // Sleep for a second so I can inject some code<br /> Thread.sleep(1000)<br />}<br /></pre> <p>The control construct <code>dynamic</code> is what we zero in on. The code inside that block is stored on a <code>RemoteActor</code>. Now, a few seconds later, we want to change it.</p> <pre class="brush: scala; toolbar: false;">import com.philipcali.hotcode.HotSwap._<br /><br />def log(s: String) { <br /> // logs text to file appender <br />}<br /><br />update('main) {<br /> log("Just been injected!")<br />}<br /></pre> <p>The call to <code>update</code> here replaces what's currently running with whatever is in the update code block. Under the covers, nothing is lost. The <code>RemoteActor</code> is actually storing revisions of the dynamic code, which can be replaced at any given time.</p> <p>If we look at the resulting output in our logged file:</p> <pre class="brush: scala; toolbar: false;">Potentially dynamic code<br />Potentially dynamic code<br />Potentially dynamic code<br />Just been injected!<br />Just been injected!<br />Just been injected!<br />Just been injected!<br />Just been injected!<br />Just been injected!<br />Just been injected!<br /></pre> <p>So there you have it: pseudo hot swapping.
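<p>Stripped of the remoting, the revision-keeping idea can be sketched in a few lines of plain Scala. This is my own local-only approximation, not the library's actual implementation: each symbol maps to a stack of code blocks, and <code>dynamic</code> always runs the newest one.</p>

```scala
import scala.collection.mutable

// Local-only sketch of dynamic/update: named blocks whose bodies can be
// replaced at runtime. Old revisions are kept, so nothing is ever lost.
object LocalSwap {
  private val revisions = mutable.Map[Symbol, List[() => Unit]]()

  // Runs the newest revision; the block given here is only the default
  def dynamic(key: Symbol)(default: => Unit): Unit =
    revisions.getOrElseUpdate(key, List(() => default)).head()

  // Pushes a new revision onto the stack for this key
  def update(key: Symbol)(block: => Unit): Unit =
    revisions(key) = (() => block) :: revisions.getOrElse(key, Nil)
}
```

<p>A reset would just drop back to the bottom of the stack. The real version keeps the revisions on a <code>RemoteActor</code> instead of a local map, which is what allows a second JVM to perform the <code>update</code>.</p>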
The <em>dynamic</em> code here is hardly dynamic, because everything is compiled, but changing the behavior of an application should be fairly painless. Now, to do what the author wanted in that comparison post (which was to simply add a log line), you don't have to shut down the server!</p> <h3>Time is Here!</h3> <p>And you know what that means: an attempt to write another book! I've written up some thoughts on my characters, locations, and important whatnots on Google Docs in anticipation. </p> <a name='more'></a> <p>Last time I participated, I wrote my entire novel in vim, segregating my chapters into different files. There's really not much else you need beyond that... Or so I thought.</p> <p>I'm taking a rather different approach this year. I'm using the <a href="">nerd tree plugin</a> to keep better track of my chapters. The big change, however, is that I'm writing all my chapters in markdown. Through markdown, I'm provided with all the snazzy text formatting I need, but it's still plain text. I plan on writing a simple script to strip the markdown for the final paper submission. There's a good reason I'm writing the book in markdown...</p> <p>You see, there's an awesome e-book management tool called <a href="">calibre</a>. I got this program to manage e-books on my wife's iPad, and it also offers a way to add books that anyone has written into the Books app through various formats. I found that the plain text format needed much modification to get it to look right on the device. Calibre's html support, on the other hand, is incredible, but who wants to write all those nasty tags? This lazy programmer intends to convert his markdown to html. //End third person speech.</p> <p>I have an update on the cron DSL I've been working on.
Stay tuned on my adventure about building grammar rules (try to keep your eyes open)!</p> <p>-- Philip Cali</p><img src="" height="1" width="1" alt=""/>Philip Magic<p><strong>Updated</strong>: <em>Work is being done for a cron dsl work on my project <a href="">cronish</a>. Check it out... fork me... I'm open to suggestions.</em></p> <p>I have never taken a strong interest in creating DSL's before tinkering with Scala. I had an eye opening experience this weekend, that originated as a joke.</p> <p>I've recently taken an interest in the <a href="">Harry Potter</a> series. Other Potter fans may also know that the next one in the series is coming out next month. On my birthday, in fact.</p> <a name='more'></a> <p>I wanted to write a program that simply let me ask my computer when the movie was coming to theaters. Scala is perfect for this. The DSL was thrown together in a hurry, so I am not particularly fond of the underlying code, but the finished product was this:</p> <pre>scala> import com.philipcali.potter.dsl._<br />import com.philipcali.potter.dsl._<br /><br />scala> When is "Harry Potter and the Deathly Hallows: Part 1" coming out?<br />Harry Potter and the Deathly Hallows: Part 1 is coming out in 24 days!<br /><br />scala> When is "Harry Potter and the Deathly Hallows: Part 1" in theaters?<br />Harry Potter and the Deathly Hallows: Part 1 is coming out in 24 days!<br /></pre> <p><em>What exactly is the point</em>? I can ask my computer when a movie is coming out <em>casually</em>, and it will tell me... through Scala. The DSL is rather pointless for a Scala programmer. Using this simple DSL, a non programmer could <em>code</em> something, and that is the point I am getting to. The usefulness behind the DSL.</p> <p><strong>That's when it hit me!</strong></p> <p>A cron DSL. Imagine being able to do something like this:</p> <pre>scala> vallanguage parsing</a> to achieve this goal. 
<em>And a new project was conceived</em>.</p> <p>-- Philip Cali</p><img src="" height="1" width="1" alt=""/>Philip and ScalaTest<p>I've read a good bit about <a href="">ScalaTest</a> and <a href="">BDD</a> before, but I actually used it for the first time on a particular project I'm working on at the moment.</p> <p>The code in question was a bit of utility code that simply extracts a zip archive or archive's a directory (a basic packaging utility). Writing a spec for this was very easy.</p> <a name='more'></a> <ul><li>Given an archive, extracting it should produce the known directory structure.</li><li>The extracted contents should be the same as the known contents</li><li>Archiving should produce an archive of directory</li><li>Archive should be valid (ie: no errors on extracting)</li></ul> <p>That's four quick ones. Below, I have the spec written out as a ScalaTest. I couldn't help but to have a big, goofy smile when I ran it through sbt. </p> <pre class="brush: scala; toolbar: false;">package test<br /><br />import org.scalatest.{FlatSpec, BeforeAndAfterAll}<br />import org.scalatest.matchers.ShouldMatchers<br />import Zip._<br /><br />class ZipSpec extends FlatSpec with ShouldMatchers with BeforeAndAfterAll {<br /> import java.io.File<br /> val archivePath = getClass.getClassLoader.getResource("archive.zip")<br /><br /> override def afterAll(configMap: Map[String, Any]) {<br /> def recurse(file: File)(fun: File => Unit) {<br /> if(file.isDirectory) <br /> file.listFiles.filter(f => !f.getName.startsWith(".")).foreach {<br /> recurse(_)(fun)<br /> }<br /> fun(file)<br /> }<br /><br /> // Delete temp files<br /> recurse(new File("archive")) { _.delete }<br /> recurse(new File("temp")) { _.delete }<br /> new File("../archive.zip").delete<br /> new File("archive.zip").delete<br /> }<br /><br /> "Test archive" should "exists" in {<br /> val archive = new File(archivePath.getFile)<br /> archive should not be (null)<br /> } <br /><br /> "Extract" should """create 
directory tree: <br /> archive/ <br /> archive/child/ <br /> archive/child/more.txt <br /> archive/test.xml""" in {<br /> extract(archivePath.getFile)<br /><br /> // Checking Dir tree<br /> new File("archive") should be ('exists)<br /> new File("archive/test.xml") should be ('exists)<br /> new File("archive/child") should be ('exists)<br /> new File("archive/child/more.txt") should be ('exists)<br /> }<br /><br /> it should """create a new directory tree:<br /> temp/archive<br /> temp/archive/child/<br /> temp/archive/child/more.txt<br /> temp/archive/test.xml""" in {<br /> extract(archivePath.getFile, "temp")<br /><br /> new File("temp/archive") should be ('exists)<br /> new File("temp/archive/test.xml") should be ('exists)<br /> new File("temp/archive/child") should be ('exists)<br /> new File("temp/archive/child/more.txt") should be ('exists)<br /> }<br /><br /> "Extracted flat files" should "contain correct data" in {<br /> import scala.xml._<br /> import scala.io.Source.{fromFile => open}<br /><br /> val xmltext = <stuff><br /> <more-stuff>Test</more-stuff><br /></stuff><br /> valunit test</a> again.</p> <p>-- Philip</p> <h3>LWJGL / SBT plugin example app</h3> <p>I decided to try out my new plugin for my new work environment, and my nephew has been frequently asking me to make him a <em>video game</em>, but I particularly enjoy my free afternoons. I thought a good way to test the plugin and appease the little guy was to simply write a small game!</p> <a name='more'></a> <p>I'll be the first to admit that I'm not a game programmer. Far from it. It requires a different way of thinking, and to be frank with everyone: I'm not very good at it! I would, however, be lying if I said I wasn't itching to make another game. It has been over a year since I used LWJGL and Slick to make <a href="">Scetris</a>, so I thought a quick <a href="">Tic-Tac-Toe</a> game was in order.
(Don't shoot me for my baby steps!)</p> <p>Suffice it to say, all three were accomplished relatively quickly.</p> <ol><li>The <a href="">sbt-lwjgl-plugin</a> works great! The speed with which I was able to have this project up and running was hardly comparable to that of my previous eclipse work environment.</li><li>This was a great exercise getting back in the <em>swing of things</em>, as they say. Slick makes making a simple game like this rather breezy.</li><li>I think my nephew will enjoy the little game.</li></ol> <p>Here's a screenshot. If the game looks like I spent a couple of hours on it, then <strong>mission accomplished</strong>.</p> <p><a href=""><img src="" alt="Screenshot" /></a></p> <p>-- Philip</p> <h3>LWJGL / SBT plugin</h3> <p>So I created a <a href="">LWJGL</a> / sbt plugin today, thanks to this rather simple-but-awesome <a href="">blog post</a> I read. I also added a little nugget for developers using <a href="">Slick</a>, which is a simple trait for slick dependencies. It is probably more silly than useful, but it was easy enough to add.</p> <p>This <em>project</em> spawned because of my aggravation in having to develop games in sbt. Fortunately for me, someone had encountered a similar issue.</p> <p>The plugin, and more information about the plugin, can be found on github: <a href="">sbt-lwjgl-plugin</a>.</p> <p>-- Philip Cali</p> <h3>Appengine Development</h3> <p>So I have a few apps on appengine. I have my beautiful wife to thank for that. I never planned on deploying anything other than a silly <a href="">blog</a> on it before she came into my life, coming up with awesome ideas. She would mention in passing, "I wish there was an app for x on my ipad". Chances are, there would be, but it would cost a dollar or two. I know that does not sound like much, but I always look for an opportunity to expand the brain a little.
So I write her apps.</p><a name='more'></a><p>All my apps are prefixed with <strong>1337</strong>, which I coined the <em>leet suite</em>:</p> <ul><li><a href="">1337 Shop list</a>: Your standard shopping list app. You can share with people, and anyone with a Google account can use it.</li><li><a href="">1337 Flashcard</a>: Your standard flashcard app that uses html 5 transitions, and stuff.</li><li><a href="">1337 Todo list</a>: Your standard todo list with Google Calendar integration (which actually is not done yet).</li></ul> <p>They all support a mobile (specifically iPhone) version, and a desktop version. Throughout all this, I have made several libraries to help roll these guys out, and I was using eclipse to build and deploy the builds. I will digress now.</p> <p>Eclipse is a monster. On my Macbook pro, Eclipse alone consumes 1 gig of the 1.7 gigs I have available. This is the cool thing about Eclipse: it dies when there is no memory left to consume.</p> <p>So I was in this frustrating cycle of coding, watching that blasted beach ball, coding, crashing, launching eclipse, waiting, waiting, waiting, coding, watching the beach ball, crashing.</p> <p>There has got to be a better way, right!? There <em>is</em>! Over the past year, I have converted all the home grown scala projects to use sbt. </p> <p>The sbt/git ecosystem is very inviting. My biggest complaint about coding Scala in vim was the lack of auto (method/field) complete ability, and seeing compile errors as I typed.</p> <p>But I figured it was time to grow up. I combined terminal multiplexing (screen) with vim and sbt. I now code, compile (build), and deploy all from the command-line.
This type of work environment has alleviated a lot of the memory overhead of running eclipse, combined with the fact that I learned a few things about vim and screen.</p> <p>Now all I have to do to roll out a new appengine app with all my libraries is:</p> <pre>git clone ssh://user@domain/var/git/appengine_scaffolding.git new_app<br />cd new_app<br />rm -rf .git<br />sbt<br />[info] Building project appengine_scaffolding 1.0 against Scala 2.8.0<br />[info] using AppengineScaffoldingProject with sbt 0.7.4 and Scala 2.7.7<br />> update<br />> dev-appserver-start -p 8888<br /></pre> <p>Direct my browser to, and bam! That is all.</p> <p>Some immediate advantages:</p> <ul><li>Learning more about sbt. I love that build tool. You configure your build and dependencies all through Scala code.</li><li>Learning more about screen. I have used screen for years. I love screen. On a day to day basis, I could get by with few open terminals, create, remove, rename, etc. Screen is very complicated. It allows you to split your view horizontally or vertically. This feature is <strong>very</strong> nice with sbt. I can reload or compile in sbt, and have the source file in my view right above it.</li><li>Learning more about vim. Vim is something else I have used for years. I learn something new about it every week, it seems.</li><li>The <a href="">appengine sbt plugin</a>. A must-get if you plan on doing similar sbt/scala appengine development.</li></ul> <p>I realize this post was more about advertising than anything. Sorry if it bored you to death.</p> <p>-- Philip</p> <p>So I ported all the content from my <a href="">old blog</a>, and started one here hosted by Google. I have been so impressed by Google, and all their online services.
They are good to their customers, and really good to developers, what with all the libraries.</p><a name='more'></a> <ul><li><a href="">Gdata</a></li><li>Appengine</li><li>Online services (gmail, calendar, contacts, map, blogger, etc)</li></ul> <p>I converted all my previous posts from html to markdown. <a href="">Markdown</a> is a nice readable markup language in its default state, which is easily converted into other formats. While I trust Google with a lot of things, I always like to have physical backups of all my posts.</p> <p>So I store all my posts in my personal git repo, in multiple markdown files. Cool enough. There is a really nice webapp with javascript Markdown to html conversion, called <a href="">Showdown</a>. I could paste my markdown, convert it, then copy and paste the converted html into Blogger.</p> <p>But I am a developer who is also very lazy. I wrote a small script that transforms a markdown file into html, and loads the converted text into my clipboard for pasting into Blogger.</p> <p>For those interested, the code for this script is below:</p> <pre class="brush: scala; toolbar: false;">package com.calico.blogger.BlogBot<br /><br />...<br />import com.petebevin.markdown.MarkdownProcessor<br />import scala.io.Source.{fromFile => open}<br />import grizzled.util.{withCloseable => withc}<br /><br />object BlogBot {<br /> def writeToClipboard(text: String) {<br /> val clipboard = Toolkit.getDefaultToolkit.getSystemClipboard<br /> val str = new StringSelection(text)<br /> clipboard.setContents(str, null)<br /> }<br /><br /> def main(args: Array[String]) {<br /> if(args.size < 1) {<br /> println("Please provide me a text file to convert from markdown to html")<br /> } <br /><br /> // Remove the <code></code> portions<br /> val""")<br /><br /> // Write to file<br /> println("Writing conversion to file: out.html ....")<br /> withc(new FileWriter("out.html")) { w =><br /> w.write(ret)<br /> } <br /><br /> // Write to clipboard<br /> println("Writing
conversion to your clipboard ....")<br /> try {<br /> writeToClipboard(ret)<br /> println("Done.")<br /> } catch {<br /> case e: Exception => println("Failed ... " + e.getMessage)<br /> } <br /> }<br />}<br /></pre> <p>So the process is made quite simple with scala, sbt, vim, and screen. I like working in pure command-line these days.</p> <p>This is a post to inform people that I am still alive, and doing development on the side.</p> <p>-- Philip</p> <h3>Quotes with Actors Part II</h3> <p>This is where we'll be getting to the good part. For those of you who are just now joining, part one of this series can be found <a href="2009/06/quotes-with-actors-part-i.html">here</a>. Today I'm going to be talking about how to make those communicating actors cross JVM, cross machine, cross platform. By changing our code base a bit and making use of RemoteActors, we can achieve our goal.</p><a name='more'></a> <p>So let me give you a scenario. Let's say Steve has an extensive collection of video game quotes that he thought were generally entertaining, and his buddy Bill has an extensive collection of B-movie Sci-Fi quotes he thought were entertaining. Steve read a few of Bill's random quotes; lo and behold, even he was entertained! But here's the problem: Steve doesn't want to pollute his quote collection with Bill's quotes, but he wants people to know about Bill's quotes when they connect to his quote server.</p> <p>In the last post we had a quote generator that would start an actor and loop away. Since we've established a client/server terminology, let's take that code and make it accessible over the wire.</p> <pre class="brush: scala; toolbar: false;">import scala.actors.Actor._<br />import scala.actors.RemoteActor._<br />import java.util.Random<br /><br />class QuoteServer(val port: Int, val name: Symbol) {<br /> lazy val quotes = ...
// Load our quotes from the "store"<br /><br /> val generator = actor {<br /> alive(port)<br /> register(name, self)<br /> val rnd = new Random()<br /> loop {<br /> react {<br /> case Request() => { <br /> val (who, said, from) = quotes(rnd.nextInt(quotes.size))<br /> sender ! Quote(from, who, said)<br /> } <br /> } <br /> } <br /> }<br />}<br /></pre> <p><strong>alive</strong> and <strong>register</strong> are methods on the object RemoteActor. One tells the actor to stay alive on a particular port, and the other registers the actor (in this case, self). That's basically it! Now we just need some code that kicks off that process to try it out.</p> <pre class="brush: scala; toolbar: false;">scala> import calico.quotes.server.QuoteServer<br />import calico.quotes.server.QuoteServer<br /><br />scala> import calico.quotes.client.Request<br />import calico.quotes.client.Request<br /><br />scala> val server = new QuoteServer(9090, 'local)<br />server: calico.quotes.server.QuoteServer = calico.quotes.server.QuoteServer@1742c56<br /><br />scala> server.generator !? Request()<br />res0: Any = Quote(Suikoden II,Luca Blight,"I've chopped hundreds,<br /> thousands of necks. I can do it with my eyes closed!")<br /></pre> <p>Now I haven't demonstrated the remoteness yet, but it's all set up to work properly. The attentive would have noticed that the Request case class was moved to a client package. Some client must make a request. Remember, we are trying to make requests over the wire. Sounds like it could be difficult, right?</p> <pre class="brush: scala; toolbar: false;">import scala.actors.remote.{Node, RemoteActor}<br /><br />case class Request()<br />case class Quote(from: String, who: String, what: String)<br /><br />class QuoteC(ip: String, port: Int, val name: Symbol) {<br /> private val remote = RemoteActor.select(Node(ip, port), name)<br /><br /> def quote = { <br /> remote !?
(1000, Request()) match {<br /> case Some(Quote(from, who, what)) => what + "\n" + " - " + who + " from " + from<br /> case _ => "No quotes! Sorry... Is the server up?"<br /> } <br /> }<br />}<br /></pre> <p>We changed a couple of things from our last implementation, most notably the use of the RemoteActor. The !? operator is overloaded with the ability to set a timeout. Since we're dealing with remote activity, we don't want a request attempt to block the process. Instead, we just give the user a nice generic error message, for simplicity's sake.</p> <p>Without showing the rest of the implementation code of this simple library, we can do something like this:</p> <pre class="brush: scala; toolbar: false;">> java -jar quote_server.jar<br />Please enter a port and server name<br /><br />> java -jar quote_server.jar 9073 video_games<br />Starting server 'video_games on port 9073...<br /></pre> <p>Let's try to connect.</p> <pre class="brush: scala; toolbar: false;">> java -jar quote_client.jar<br />Welcome to the quote REPL<br />> /connect 127.0.0.1 9073 video_games<br />Connecting to 'video_games server<br />> /quote<br />"I've chopped hundreds, thousands of necks. I can do <br /> it with my eyes closed!"<br />- Luca Blight from Suikoden II<br />> /exit<br />Goodbye!<br /></pre> <p>I went ahead and threw the client implementation in a loop for kicks, but a quote server can act as a client to other quote servers with a small change.</p> <pre class="brush: scala; toolbar: false;">...<br />case Request() => { <br /> val (who, said, from) = quotes(rnd.nextInt(quotes.size))<br /> sender ! Quote(from, who, said)<br />} <br />case ServerList(ls) => sender ! ServerList(QuoteC.servers)<br />...<br /></pre> <p>Now we have a way to return a server list from this server, so it's possible for a client to "hop" around from server to server, or for a server to create categories with server lists, etc. The point is: a distributed quote server system can appear localized from a single client connection.
This is exactly what we wanted from our example implementation.</p> <p>Some things to note:</p> <ul><li>We could easily make an online interface by using the Lift web framework.</li><li>Concurrency is achieved by one quote server being able to handle several concurrent requests.</li><li>Going remote is easy!</li><li>There are several implementations one can make with our little library.</li></ul> <p>This was an example of how one can make use of RemoteActors in their code. It may not be the best example, but it's a good place to start in understanding how it works.</p> <p>-- Philip</p>Philip with Actors Part I<p>Several months ago, I got serious with the Scala Actor library, and quickly fell in love. Very little code is required to thread together your application with actors, and make you feel like a rawk star ;) This is part one of a two part series covering some aspects of the Scala actor library. First, we'll be looking at how to incorporate actors in code, then how to make applications communicate with RemoteActors. </p> <a name='more'></a> <p>The application we're going to make is a quote generator. Clients would send a request for a random quote, and the application will send a quote in the form of a string, formatted like so: "<strong>What was said</strong>" - <strong>Who</strong> said it from <strong>Where</strong>. With those requirements, we can come up with a couple of case classes for this simple application.</p> <pre class="brush: scala; toolbar: false;">case class Request()<br />case class Quote(from: String, who: String, what: String)<br /></pre> <p>What are case classes?
In Scala, a case class definition is shorthand for this:</p> <pre class="brush: scala; toolbar: false;">object Quote {<br /> def apply(from: String, who: String, what: String) = new Quote(from, who, what)<br /> def unapply(q: Quote): Option[(String, String, String)] = Some((q.from, q.who, q.what))<br />}<br />class Quote(val from: String, val who: String, val what: String)<br /></pre> <p>Case classes are used for pattern matching, which the actor library makes heavy use of. The unapply is required for extraction, an incredible convenience for pattern matching. You can see that prepending case to your class saves you a lot of typing.</p> <p>If we go back to our requirement, it said: Clients would send a <strong>request</strong> for a random <strong>quote</strong>. A verb and a noun. Similar to: a browser makes a <strong>GET</strong> and displays the <strong>response</strong>. So the next step is to make a quote "server" with a running actor.</p> <pre class="brush: scala; toolbar: false;">import java.util.Random<br />import scala.io.Source.{fromFile => open}<br />import scala.actors.Actor._<br /></pre> <p>Alright let's break this up. The first thing I'm doing is getting a list of tuples from a text file that contains our quotes. We'll call that our "store" of quotes in val quotes. I imported everything from object Actor, but we're only concerned about <strong>actor</strong>, <strong>loop</strong>, <strong>react</strong>, and <strong>sender</strong>.</p> <ul><li>The <strong>actor</strong> method takes in a body of code, instantiates an Actor class, and starts our actor. It's not enough that our server starts. The server is required to run as long as we need it.</li><li>That is where loop comes in. <strong>loop</strong> takes in a function, calls that function and repeats itself. It is a never ending loop, which is exactly what we want.</li><li><strong>react</strong> takes in a function and applies it to any message the actor receives in its mailbox.
This is where pattern matching is key. Our server is only concerned about quote requests. In the event that our server has received a request, it grabs a random quote from the store.</li><li>Think of <strong>sender</strong> as the "from" address on the mail. Our server says "send this quote to the sender of this request." That is done by the ! operator and the Quote case class to the right of it.</li></ul> <p>Now all we need is a client to make that request. The client code would look something like so:</p> <pre class="brush: scala; toolbar: false;">generator !? Request() match {<br /> case Quote(from, who, what) => what + "\n" + " - " + who + " from " + from<br /> case _ => "No quotes! Sorry :("<br />}<br /></pre> <p>The !? operator is like a send/receive method. It sends the generator a request and receives a Quote. In the event that it's not a quote (which it should always be), we return an "error" string. All we do now is wrap the code above a bit more formally, and we have a random quote generator.</p> <pre class="brush: scala; toolbar: false;">package calico.quotes<br /><br />import scala.io.Source.{fromFile => open}<br />import scala.actors.Actor._<br />import java.util.Random<br /><br />case class Request()<br />case class Quote(from: String, who: String, what: String)<br /><br />object QMachine {<br /> def apply() = new QMachine(generator)<br />}<br /><br />class QMachine(generator: scala.actors.Actor) {<br /> def quote = { <br /> generator !? Request() match {<br /> case Quote(from, who, what) => what + "\n" + " - " + who + " from " + from<br /> case _ => "No quotes!
Sorry :(" <br /> } <br /> }<br />}<br /></pre> <p>Now if we run this through sbt (<a href="">Simple Build Tool</a>; I highly recommend that you build all Scala projects with it), we can fire up the console and test it out.</p> <pre class="brush: scala; toolbar: false;">scala> import calico.quotes.QMachine<br />import calico.quotes.QMachine<br /><br />scala> val qm = QMachine()<br />qm: calico.quotes.QMachine = calico.quotes.QMachine@952527<br /><br />scala> qm.quote<br />res0: java.lang.String = <br />"Let me tell you something. There are weak men and strong men <br />in this world. The strong men take everything and the weak men die. <br />That's how the world was designed. Now I will show you how it works, weaklings!!!"<br /> - Luca Blight from Suikoden II<br /><br />scala> qm.quote<br />res1: java.lang.String = <br />"I've chopped hundreds, thousands of necks. I can do it with my eyes closed!"<br /> - Luca Blight from Suikoden II<br /><br />scala><br /></pre> <p>Ah, nothing like uplifting quotes from RPG's most ruthless villain, Luca Blight. Some things to notice about our quote generator implementation through actors are: </p> <ul><li>The "client" knows about our "server", but our server couldn't care less about the client. This separation is great, and largely popularised by the MVC paradigm.</li><li>A few lines of code, and we can make our quote server send quotes over the wire with RemoteActors.</li></ul> <p>And this is about it. Stay tuned for part two, when we make this "remote."</p> <p>-- Philip </p>Philip goofiness<p>It's Monday afternoon, and it's time for a silly post. Have you ever been browsing the Internet, looking at something completely unrelated to code, yet something struck you: "I bet I could do this in X language"? I saw a t-shirt that said "factorial!", when something like that hit me.</p><a name='more'></a><p>Using some of the power of Scala, I'm going to make a factorial look like it does in math books.
Meaning, when you see the phrase 4!, it should evaluate to 24. Well, let's just get a factorial function to work first.</p> <pre class="brush: scala; toolbar: false;">scala> def fac(n: Int): Int = if(n == 0) 1 else n * fac(n - 1)<br />fac: (Int)Int<br /><br />scala> fac(4)<br />res0: Int = 24<br /></pre> <p>There's our logic! I want to use a bang operator though. I want it to truly resemble a factorial. If I want a bang operator, I'll need to have my own class.</p> <pre class="brush: scala; toolbar: false;">scala> class Fac(val i: Int) {<br /> | def fac(n: Int): Int = if(n == 0) 1 else n * fac(n - 1)<br /> | def ! = fac(i)<br /> | }<br />defined class Fac<br /><br />scala> val test = new Fac(4)<br />test: Fac = Fac@1d9d55b<br /><br />scala> test!<br />res1: Int = 24<br /></pre> <p>Well, now. That's starting to <strong>look</strong> right, but there's still this thing: new Fac. I want to get rid of that as well. I mean, I just want to type 4!, and it will result in 24. This can be done through implicit conversions.</p> <pre class="brush: scala; toolbar: false;">scala> implicit def int2Fac(n: Int) = new Fac(n)<br />int2Fac: (Int)Fac<br /><br />scala> 4!<br />res2: Int = 24<br /></pre> <p>And that's it! A pretty stupid example, but it shows us several things:</p> <ol><li>An example showing off operator overloading.</li><li>An example showing optional periods and parentheses in method calls.</li><li>The reason for implicit conversions, and a perfect use case.</li></ol> <p>A fun example for a Monday.</p> <p>-- Philip</p>Philip the JDBC<p>The title of this post may raise some brows, but I mean what I say. You see, I do a lot of little batch scripting with sqlite databases, and since Scala is my scripting language of choice (as of this writing), I interact with the JDBC API quite a bit.
In my situation, it would be silly to import all the 3rd party libraries that make it simple (the big ones being Spring JDBC and Hibernate). Allow me to show you what I mean...</p><a name='more'></a><p>For those of you who haven't ever interacted with JDBC, or have forgotten completely, allow me to enlighten you. The following was taken from <a href="">sqlite JDBC homepage</a>:</p> <pre class="brush: scala; toolbar: false;">import java.sql.*;<br /><br />public class Test {<br /> public static void main(String[] args) throws Exception {<br /> Class.forName("org.sqlite.JDBC");<br /> Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db");<br /> Statement stat = conn.createStatement();<br /> stat.executeUpdate("drop table if exists people;");<br /> stat.executeUpdate("create table people (name, occupation);");<br /> PreparedStatement prep = conn.prepareStatement(<br /> "insert into people values (?, ?);");<br /><br /> prep.setString(1, "Gandhi");<br /> prep.setString(2, "politics");<br /> prep.addBatch();<br /> prep.setString(1, "Turing");<br /> prep.setString(2, "computers");<br /> prep.addBatch();<br /> prep.setString(1, "Wittgenstein");<br /> prep.setString(2, "smartypants");<br /> prep.addBatch();<br /><br /> conn.setAutoCommit(false);<br /> prep.executeBatch();<br /> conn.setAutoCommit(true);<br /><br /> ResultSet rs = stat.executeQuery("select * from people;");<br /> while (rs.next()) {<br /> System.out.println("name = " + rs.getString("name"));<br /> System.out.println("job = " + rs.getString("occupation"));<br /> }<br /> rs.close();<br /> conn.close();<br /> }<br />}<br /></pre> <p>They do a little bit of everything here: creating tables, dropping tables, inserting, and querying. Now let me show you the same program in question rewritten in Scala using my DB API.
The actual code behind the API will come last.</p> <pre class="brush: scala; toolbar: false;">import calicodb.Adapter<br /><br />object Main extends Application {<br /> val sqlitedb = Adapter("org.sqlite.JDBC", "jdbc:sqlite:test.db")<br /> sqlitedb.exec("drop table if exists people;")(_.execute())<br /> sqlitedb.exec("create table people (name, occupation);")(_.execute())<br /><br /> // Get our people ready<br /> val people = List(("Gandhi", "politics"), ("Turing", "computers"), ("Wittgenstein", "smartypants"))<br /><br /> sqlitedb.exec("insert into people values (?, ?);") { prep =><br /> for(p <- people; val (name, occupation) = p) {<br /> prep.setString(1, name)<br /> prep.setString(2, occupation)<br /> prep.addBatch()<br /> }<br /> prep.executeBatch()<br /> }<br /><br /> val rtn = sqlitedb.query("select * from people;")(_.executeQuery())<br /> println(rtn)<br /> // List[Map[String, Any]](Map[String, Any](name -> Gandhi, occupation -> politics) , ...<br />}<br /></pre> <p>In my version, you should notice a few things different. I'm providing some space between us, because I want you to try to find as many of them as you can before I tell you. There's one in particular that should catch your eye ;)</p> <p>Okay. Ready? Let's start from the top. The big one: there's no close() statements anywhere! This is surely a recipe for disaster. Don't worry, it's not as bad as you think. You'll see in a second.</p> <p>Other than that, I'm using an Adapter to interact with the sqlite database, but it seems like I'm messing around with a PreparedStatement directly. Because I am!
I guess now is as good a time as any to look at the underlying code.</p> <pre class="brush: scala; toolbar: false;">/** <br />* For an update, insert, create, delete.<br />*/ <br />def exec(execute: String)(fun: PreparedStatement => Any) = { <br /> val conn = getConnection<br /> try {<br /> val stm = conn.prepareStatement(execute)<br /> conn.setAutoCommit(false)<br /> fun(stm)<br /> conn.commit()<br /> stm.close()<br /> } catch {<br /> case _ => conn.rollback()<br /> } finally {<br /> conn.close()<br /> }<br />}<br /></pre> <p>From my last post, you should be able to tell that I'm a big fan of general functions being passed around, and allowing the user to say what's up. Now it's clearer that my second parameter to this method is a function that takes a PreparedStatement, and does some work. Closure support allowed me to do batch commits based on a List of data. Fair enough. Good enough for me. Now there's this thing of queries returning a List of Maps. Must be some kind of magic. Let's take a look at that one now.</p> <pre class="brush: scala; toolbar: false;">def query(q: String) (fun: PreparedStatement => ResultSet): List[Map[String, Any]] = {<br /> val conn = getConnection<br /><br /> try {<br /> val stm = conn.prepareStatement(q)<br /> val results: List[Map[String, Any]] = fun(stm)<br /> stm.close()<br /> results<br /> } finally {<br /> conn.close()<br /> }<br />}<br /></pre> <p>Ah, same as before... except you're assigning a ResultSet to a List[Map[String, Any]] !? That compiles!? Yes. I really left that in there to show off another feature of Scala: implicit conversions.
There's another function that does the bulk of the conversion work.</p> <pre class="brush: scala; toolbar: false;">/** Convert an ugly jdbc result set to a collection that can be used in code*/<br />implicit def rs2friendly(rs: ResultSet): List[Map[String, Any]] = {<br /> val meta = rs.getMetaData<br /> val names = for(i <- 1 to meta.getColumnCount) yield (meta.getColumnLabel(i))<br /><br /> def buildr(names: List[String], rs: ResultSet): List[Map[String, Any]] = {<br /> rs.next match {<br /> case true => {<br /> val tuples = for(name <- names) yield (name, rs.getObject(name))<br /> val map = Map() ++ tuples<br /> map :: buildr(names, rs)<br /> }<br /> case _ => Nil<br /> }<br /> }<br /><br /> val rtn = buildr(names.toList, rs)<br /> // Safe: close the result set since we're through<br /> rs.close()<br /> rtn<br />}<br /></pre> <p>And there you have it. The Adapter does some nice things. It does all of its nice things without relying on 3rd party warez. And it's low level and general enough to use for a lot of JDBC drivers out there. Heck, making this script compatible with MySQL is easy enough.</p> <pre class="brush: scala; toolbar: false;">// MySQL<br />val mysqldb = Adapter("com.mysql.jdbc.Driver", "jdbc:mysql://host:port/db")<br /></pre> <p>That's all I have for today. I hope it was interesting enough :P</p> <p>-- Philip</p>Philip utilities and the beauty of closures<p>Arg! I made the same mistake again. You know: the one where you made a bad svn commit, and there are those pesky .svn folders everywhere. Well, it happened to me twice in the same night, and I'm too lazy to delete them by hand. I decided a simple Scala script would do the trick.</p><a name='more'></a><br /><p>A simple little recursive function was all I needed.
I fired up the scala interpreter, and whipped out the following function in a minute or two:</p><pre class="brush: scala; toolbar: false; gutter: false;">scala> def recurse(folder: java.io.File) {<br /> | if(folder.isDirectory) {<br /> | folder.listFiles.foreach{recurse(_)}<br /> | } <br /> | if (folder.getName.contains(".svn")) folder.delete<br /> |} <br />recurse: (java.io.File)Unit<br /><br />scala> recurse(new java.io.File("."))<br /></pre><p>Bam! That did the trick! I started to think about the function more (even though it was already stupid simple). There's a way to make this recursive function more general. Say I wanted a function that allows me to traverse a directory and <strong>do</strong> whatever I want, instead of traverse a directory and only delete .svn directories. A small change to the recursive function would allow that:</p><pre class="brush: scala; toolbar: false;">def recurse(folder: File) (action: File => Unit) {<br /> if(folder.isDirectory) {<br /> folder.listFiles.foreach(f => recurse(f)(action))<br /> } <br /> action(folder)<br />}<br /></pre><p>The keen Scalafied eye would quickly see what I did there. For the rest of us, I'm going to begin explaining now:</p><p>The function recurse now takes 2 parameters: a java.io.File, and a function that takes in a java.io.File and returns Unit (which in the Scala world is equivalent to Java's void). The function parameter is named action. For each file, and directory, action will be called on it.
So, now I can call it like so:</p><pre class="brush: scala; toolbar: false;">recurse(new File(".")) { f=> <br /> if (f.getName.contains(".svn")) f.delete<br />}<br /></pre><p>That's just doing the same as before, but now, let's say I wanted to find every .scala file modified within the last week:</p><pre class="brush: scala; toolbar: false;">// Get the date from a week ago<br />val cal = Calendar.getInstance<br />cal.add(Calendar.DATE, -7)<br />recurse(new File(".")) { f=><br /> if (f.getName.endsWith(".scala") && f.lastModified >= cal.getTimeInMillis) println(f.getName)<br />}<br /></pre><p>Even better still: with Scala's closure support and partially applied functions, you can do some nifty actions with our very basic utility function. I want to show you the latter first:</p><pre class="brush: scala; toolbar: false;">// This is partially applied function syntax<br />// Regardless of the parent directory, I want to delete svn's<br />val deleteSvnIn = recurse(_: File) { f=><br /> if (f.getName.contains(".svn")) f.delete<br />}<br /><br />// Calling the partial looks very natural<br />deleteSvnIn(new File("."))<br />deleteSvnIn(new File("/some/very/bad/dir"))<br /><br />// Or vice versa<br />val inCurrentDir = recurse(new File(".")) _<br /><br />inCurrentDir { f =><br /> if(f.getName.endsWith(".scala")) println(f.getAbsolutePath)<br />}<br /></pre><p>What I've shown you thus far is nothing spectacular. What I mean by that, is simply the same could be accomplished with any language that allows anonymous functions. Now, I'll show you closures. Allow me to give you a scenario first, so what I say makes sense: You were given this library to work with, except you actually need the recurse function to <strong>return</strong> a value not just arbitrarily <strong>do</strong> something. You need to persist some data whether it be a count, a list, whatever.
Given that you can't change<br />the horribly coded utility function, your options are:</p><ul><li>Write another function that does what you need.</li><li>Rely on Scala's closure support for your help, rewrite the function if time permits, and submit a patch :)</li></ul><p>Let's rely on closures for the time being.</p><pre class="brush: scala; toolbar: false;">// I want to count all the svn directories<br />var count = 0<br />recurse(new File(".")) { f=><br /> if (f.getName.contains(".svn")) count += 1<br />}<br /><br />println(count)<br /><br />// I want to store all scala files<br />var ls = List[File]()<br />recurse(new File(".")) { f=><br /> if (f.getName.endsWith(".scala")) ls ::= f<br />}<br />println(ls)<br /></pre><p>And there you have it! Even though we're solving problems outside the scope of this function's original goal, you're able to bend the rules with the power of closures. The functional programmer would have noticed the use of vars and scowled. I merely wanted to prove a point, not start a flame war.</p><p>-- Philip</p>
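<p>The closure-capture trick at the heart of this post is not Scala-specific. As a hedged cross-language sketch, here is the same recurse-with-callback pattern in Python, where a lambda closes over a list in the enclosing scope to accumulate results; the directory layout below is invented purely for illustration:</p>

```python
import os
import tempfile

def recurse(folder, action):
    # Depth-first walk: visit children first, then apply `action` to the entry itself
    if os.path.isdir(folder):
        for entry in os.listdir(folder):
            recurse(os.path.join(folder, entry), action)
    action(folder)

# Build a small throwaway tree for the demo (hypothetical layout)
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, ".svn"))
open(os.path.join(root, "Main.scala"), "w").close()
open(os.path.join(root, "notes.txt"), "w").close()

# The lambda closes over `scala_files`, so the results outlive the walk
# even though recurse() itself returns nothing.
scala_files = []
recurse(root, lambda f: scala_files.append(f) if f.endswith(".scala") else None)
print(scala_files)
```

<p>Just as in the Scala version, the traversal function stays general; all the policy (what to delete, count, or collect) lives in the closure that the caller passes in.</p>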
http://feeds.feedburner.com/PhilcaliCodeV01
The HR team at Initrode were a happy bunch, casting their nets into the perpetual stream of eager undergrads from nearby WTF U. It was a summer tradition at Initrode to invite a school of juniors to get a taste of their future by spending the long, sun-drenched afternoons of their dwindling youth hunched in cubicles. Chris was on the Dev Tools Team at Initrode, building widgets and gizmos to help his fellow developers be more productive. Since few of his colleagues were willing to unleash students on production code, the duds among the summer-student pool tended to end up on Chris's team. And that's why the intern at the center of this SOD bears the pseudonym Dudley. The Dev Tools summer project was to replace their build scripts, a kludge of bash and Perl, with an orderly Python script. Rewriting these tools in Python would let Chris put a nice web front-end on top so his fellow developers could run the tools from the Initrode intranet. He was therefore eager to see the results of Dudley's work on the bit of the code-analysis suite responsible for measuring cyclomatic complexity. 
Unfortunately, when asked to implement this in Python:

find $1 -name '*.[c]' | xargs pmccabe | awk '{ print $1, $4, $5, $6, $7}'

Dudley did exactly that:

import os
import sys
import subprocess
from subprocess import Popen, check_call, CalledProcessError

DEVNULL = open(os.devnull, 'w')

def check_for_error(stderr):
    if stderr:
        sys.stderr.write(stderr)
        sys.exit(1)

p = Popen(['find', directory, '-name', '*.[c]'],
          stdout=subprocess.PIPE, stderr=DEVNULL)
stdout, stderr = p.communicate()
check_for_error(stderr)

p = Popen(['xargs', 'pmccabe'],
          stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=DEVNULL)
stdout, stderr = p.communicate(stdout)
check_for_error(stderr)

p = Popen(['awk', ('$1 > %d' % max_limit)],
          stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=DEVNULL)
stdout, stderr = p.communicate(stdout)
check_for_error(stderr)

p = Popen(['awk', '{ print $1, $4, $5, $6, $7}'],
          stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=DEVNULL)
results, stderr = p.communicate(stdout)
check_for_error(stderr)

Chris had hoped Dudley would use Python's extensive standard library to provide the inputs to pmccabe and parse its output in a more compact, expressive way than they'd done on the command line. But he had to admit Dudley had earned an E for effort.
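For contrast, the in-process filtering Chris was presumably hoping for can be sketched in a few lines. This is a hedged illustration, not the story's actual code: the helper name, the column layout, and the sample rows are invented (in real use the input would come from a single subprocess call to pmccabe rather than a chain of shell tools):

```python
def summarize(pmccabe_output, max_limit=10):
    # Pure-Python stand-in for: pmccabe | awk '$1 > limit { print $1, $4, $5, $6, $7 }'
    rows = []
    for line in pmccabe_output.splitlines():
        fields = line.split()
        # fields[0] is the complexity column; keep columns 1, 4, 5, 6, 7 (awk-style)
        if len(fields) >= 7 and int(fields[0]) > max_limit:
            rows.append(" ".join([fields[0]] + fields[3:7]))
    return rows

# Invented sample rows mimicking pmccabe's whitespace-separated columns
sample = """\
3   3   8   10  12  src/util.c(10): helper
17  16  40  30  55  src/parse.c(30): parse_expr
"""
print(summarize(sample))  # only parse_expr exceeds the default limit
```

One process, one parse, and the filtering threshold becomes an ordinary function argument instead of a string spliced into an awk program.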
http://thedailywtf.com/articles/Literal-Scripting
Our Agenda

- Introduction to the topic.
- Downloading and installing Xdebug on your local machine (Mac OS X 10.6.6+, MAMP 2.1.1).
- Integrating with PhpStorm.
- Practice debugging.

What You Will Need

- A Mac running Mac OS X 10.6.6+.
- Apple Xcode 4.6 (free on the Mac App Store).
- Command Line Tools.
- Homebrew.
- A terminal app of your choice.
- PhpStorm 5+ (many other IDEs will work as well).

What Is Xdebug?

Well, technically, Xdebug is an extension for PHP to make your life easier while debugging your code. Right now, you may be used to debugging your code with various other simple solutions. These include using echo statements at different states within your program to find out if your application passes a condition or to get the value of a certain variable. Furthermore, you might often use functions like var_dump, print_r or others to inspect objects and arrays. What I often come across are little helper functions, like this one for instance:

function dump($value) {
    echo '<pre>';
    var_dump($value);
    echo '</pre>';
}

The truth is, I used to do this too, for a very long time actually. So what's wrong with it? Technically, there is nothing wrong with it. It works and does what it should do. But just imagine for a moment, as your applications evolve, you might get into the habit of sprinkling your code all over with little echos, var_dumps and custom debuggers. Now granted, this isn't obstructive during your testing workflow, but what if you forget to clean out some of that debug code before it goes to production? This can cause some pretty scary issues, as those tiny debuggers may even find their way into version control and stay there for a long time. The next question is: how do you debug in production? Again, imagine you're surfing one of your favorite web-services and suddenly you get a big array dump of debug information presented to you on screen.
Now of course it may disappear after the next browser refresh, but it's not a very good experience for the user of the website.

Configuring MAMP

I don't want to go too deep into the downloading and installation process of MAMP on a Mac. Instead, I'll just share with you that I'm using PHP 5.4.4 and the standard Apache Port (80) throughout this read.

Your First Decision

A quick note before we start with building our own Xdebug via Homebrew: If you want to take the easiest route, MAMP already comes with Xdebug 2.2.0. To enable it, open /Applications/MAMP/bin/php/php5.4.4/conf/php.ini with a text editor of your choice, go to the very bottom and uncomment the very last line by removing the ;. The last two lines of the file should read like this:

[xdebug]
zend_extension="/Applications/MAMP/bin/php/php5.4.4/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so"

Now if you're asking yourself: "Why would I want to choose a harder way than this one?", my answer is that it is never a mistake to look beyond your own horizon and learn something new. Especially as a developer these days, keeping an eye on server-related stuff will always come in handy at some point. Promised.

Install Xcode and Command Line Tools

You can get Apple Xcode for free off of the Mac App Store. Once you've downloaded it, please go to the application preferences, hit the "Downloads" tab and install the "Command Line Tools" from the list.

Install Homebrew

Homebrew is a neat little package manager for Mac OS X which gets you all the stuff Apple left out. To install Homebrew, just paste the following command into your terminal.

ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"

On a Mac, Homebrew will be the most convenient way to install Xdebug. On Linux, however, compiling it yourself is the best way to go, which is not that easy on a Mac.

Tip: Windows users just need to download the *.dll file from Xdebug.org, put it into the XAMPP folder and add the path to their php.ini file.
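The enable-it-by-uncommenting step above is easy to get wrong, and a forgotten semicolon means Xdebug silently never loads. As a hedged sketch (the function name and sample lines are hypothetical, not part of MAMP or Xdebug), a few lines of Python can verify that a php.ini actually has an active zend_extension line for xdebug.so:

```python
def xdebug_enabled(php_ini_text):
    # True if a zend_extension line loading xdebug.so is active (not commented out)
    for line in php_ini_text.splitlines():
        stripped = line.strip()
        if stripped.startswith(";"):
            continue  # a leading semicolon comments the line out in php.ini
        if stripped.startswith("zend_extension") and "xdebug.so" in stripped:
            return True
    return False

# Hypothetical php.ini fragments (paths shortened for the example)
commented = ';zend_extension="/Applications/MAMP/.../xdebug.so"'
active = 'zend_extension="/Applications/MAMP/.../xdebug.so"'
print(xdebug_enabled(commented), xdebug_enabled(active))
```

The same check could of course be done by eye in phpinfo()'s output, as described later; the script is just a quick sanity check before restarting MAMP.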
As a PHP developer, you should from now on be aware of Jose Gonzalez's "homebrew-php" Github repo, which holds a lot of useful "brews" for you. If you've ever asked yourself how to install PHP 5.4 manually, that's the place to look. Now if you get into any trouble while installing Homebrew, check out Jose's Readme. To complete our Homebrew excursion, we want to "tap" into Jose's brewing formulae by executing the following commands within your terminal application:

brew tap homebrew/dupes

This will get us some dependencies we need for Jose's formulae.

brew tap josegonzalez/homebrew-php

Done! Now we should be ready to install Xdebug the comfy way, on a Mac.

Install Xdebug

Back in your terminal application, please execute:

brew install php54-xdebug

If you are on PHP 5.3, just replace the "4" with a "3" ;) The installation will take some time. After it's done, you'll see a little beer icon and some further instructions which you can ignore. So what just happened? Homebrew downloaded all the files including their dependencies and built them for you. As I've already told you, compiling yourself on a Mac can be a hassle. At the end, we got a freshly compiled xdebug.so located at /usr/local/Cellar/php54-xdebug/2.2.1/.

Attention: Please note that Homebrew will install PHP 5.4 to your system during the process. This should not influence anything as it is not enabled on your system.

To finally install Xdebug, we just need to follow a few more steps. Change directory (cd) to MAMP's extensions folder:

cd /Applications/MAMP/bin/php/php5.4.4/lib/php/extensions/no-debug-non-zts-20100525

You can re-check the path by looking at the last line of /Applications/MAMP/bin/php/php5.4.4/conf/php.ini, as this is where we are going.
Back up the existing xdebug.so just in case:

mv xdebug.so xdebug.so.bak

Then copy your Homebrew Xdebug build:

cp /usr/local/Cellar/php54-xdebug/2.2.1/xdebug.so /Applications/MAMP/bin/php/php5.4.4/lib/php/extensions/no-debug-non-zts-20100525/

If you want to force a copy (cp) command to overwrite existing files, just do cp -f source target. Last, but not least, we need to modify the php.ini file to load the Xdebug extension file. Open /Applications/MAMP/bin/php/php5.4.4/conf/php.ini with a text editor of your choice, go to the very bottom and uncomment the last line by removing the semicolon at the front. Don't close the file just yet. Now relaunch MAMP, go to. If everything went well, you should find this within the output: If it did not work, please make sure that you really copied over the xdebug.so and have the right path in your php.ini file.

Start Debugging

Before we can actually start debugging, we need to enable Xdebug. Therefore, I hope you didn't close out your php.ini, as we need to add this line to the very end, after the zend_extension option:

xdebug.remote_enable = On

Save and close your php.ini file and restart MAMP. Go to again and search for xdebug.remote on the site. Your values should look exactly like mine: If they do not, follow the same procedure you used to add remote_enable = On for the other statements at the end of your php.ini file.
It is important to set the "Port" value to 9000 as our standard Xdebug configuration uses this port to connect to the IDE.

Tip: If you need to adjust this, add xdebug.remote_port = portnumber to your php.ini file.

Attention: Other components may change this value inside of PhpStorm, so watch out for it if something fails.

Next, click that little red phone button with a tiny bug next to it on the top toolbar. It should turn green. This makes PhpStorm listen for any incoming Xdebug connections. Now we need to create something to debug. Create a new PHP file, call it whatever you'd like and paste in the following code:

<?php

// Declare data file name
$dataFile = 'data.json';

// Load our data
$data = loadData($dataFile);

// Could we load the data?
if (!$data) {
    die('Could not load data');
}

if (!isset($data['hitCount'])) {
    $data['hitCount'] = 1;
} else {
    $data['hitCount'] += 1;
}

$result = saveData($data, $dataFile);

echo ($result) ? 'Success' : 'Error';

function loadData($file) {
    // Does the file exist?
    if (!file_exists($file)) {
        // Well, just create it now
        // Save an empty array encoded to JSON in it
        file_put_contents($file, json_encode(array()));
    }

    // Get JSON data
    $jsonData = file_get_contents($file);
    $phpData = json_decode($jsonData);

    return ($phpData) ? $phpData : false;
}

function saveData($array, $file) {
    $jsonData = json_encode($array);
    $bytes = file_put_contents($file, $jsonData);

    return ($bytes != 0) ? true : false;
}

Now this code is faulty by default, but we will fix it in a moment, in the next section. Make sure everything is saved and open up your browser to the script we just created. I will use Google Chrome for this demo, but any browser will do. Now let's take a moment to understand how the debugging process is initialized. Our current status is: Xdebug enabled as Zend extension, listening on port 9000 for a cookie to appear during a request. This cookie will carry an IDE key which should be the same as the one we set up inside of our IDE.
As soon as Xdebug sees the cookie carrying the request, it will try to connect to a proxy, our IDE. So how do we get that cookie in place? PHP's setcookie? No. Although there are multiple ways, including some that work without a cookie at all, we will use a little browser extension as a helper.

Install the "Xdebug helper" extension in Google Chrome, or search for an equivalent extension for the browser you are using. Once you've installed the extension, right click the little bug appearing in your address bar and go to its options. Configure the value for the IDE key to match the key you chose in your IDE, like so:

After configuring it, click the bug and select "Debug" from the list. The bug should turn green:

Now, go back to PhpStorm, or your IDE of choice, and set a "breakpoint". Breakpoints are markers on a line which tell the debugger to halt the execution of the script at that point. In PhpStorm, you can add breakpoints by simply clicking the space next to the line numbers on the left hand side:

Just click where the red dot appears on the screenshot. You will then have a breakpoint set at which your script should pause. Note: you can have multiple breakpoints in as many files as you'd like.

Now we are all set. Go back to your browser, make sure the bug is green, and reload the page so the cookie is submitted with the next request. (Tip: a cookie set during one request is available to the next one.) If everything goes according to plan, this window should pop up inside of PhpStorm to inform you of an incoming debug connection:

Did the window not pop up for you? Let's do some troubleshooting and repeat what needs to be in place for this to succeed:

- You should find Xdebug info inside of phpinfo()'s output. If not, get the xdebug.so file in the right place and set up your php.ini file.
- Set the PhpStorm DBGp settings to your IDE key, e.g. "PHPSTORM", and port 9000.
- Make PhpStorm listen for incoming debug connections using the red phone icon, which will then turn green.
- Set a breakpoint in your code, or select "Run \ Break at first line in PHP scripts" to be independent of any breakpoints. Note that the latter is not suited for everyday use.
- Get a browser extension to set the Xdebug cookie.
- Make sure the browser extension carries the same IDE key you chose inside of your IDE.
- Reload the page, and PhpStorm should get the connection.

If you get the dialog seen in the previous image, please accept it. This will take you into debug mode, like so:

You can see that the debugger stopped the script's execution at your breakpoint, highlighting the line in blue. PHP is now paused and controlled by Xdebug, which is being steered by your very own hands from now on.

Our main workspace will be the lower section of the IDE, which already shows some information about the running script (the superglobals). And would you look at that? There's the cookie we just set to start the debugging session. You can click through the superglobals and inspect their values at this very moment. PHP is waiting; there is no time limit, at least not the default 30 seconds.

On the left side, you'll see a few buttons. For now, only "Play" and "Stop" are of interest to us. The green play button resumes the script; if there is another breakpoint in the code, the script will continue until it reaches it and halt again. The red stop button aborts the script, just like PHP's exit or die would.

The really interesting buttons sit in the upper section of the debug window. Let's quickly check them out:

- Step over: step one line ahead.
- Step into: if the blue line highlights, for example, a function call, this button lets you step through the internals of that function.
- Step out: if you stepped into a function and want to get out before its end is reached, just step out.
- Run to cursor: let's say your file is 100 lines long and your breakpoint was set at line two in order to inspect something. Now you want to run straight to the point where you just placed your cursor; this button is for you. (You could click "Step over" n times too.)

Don't worry: as you use Xdebug, you will rapidly adapt to the keyboard shortcuts.

Actually Debugging Some Example Code

I already told you that the code you copy/pasted is faulty, so you'll need to debug it. Start stepping over the code, statement by statement. Note that the blue line only halts on lines which actually contain a command; whitespace and comments are skipped.

Once you reach the function call to loadData, please do not step into it; just step over and halt on the if statement. You can see two new variables in the "Variables" panel at the bottom of the screen. Now, why did the $data variable come back false? It seems like the script should have done its job. Let's take a look.

Go back to line seven to step into the function call, and bam! We get a message informing us that we cannot "step back". In order to get your debugger to line seven again, you need to stop this session and reload the page in the browser. Do so, and step into the function call this time. Stop on the return statement inside of the loadData function and see what happened:

The $phpData array is empty. The return statement uses a ternary operator to decide what to return, and it returns false for an empty array. Fix the line to read:

    return $phpData;

since json_decode will either return the data or null on failure.

Now stop the debug session, reload your browser, and step over the function call this time. It seems like we still have a problem, as we step into the condition. Fix the condition to use is_null() to detect what's actually going on:

    if (is_null($data)) {
        die('Could not load data');
    }

Now it's up to you to try and step around a bit.
I would suggest reverting the script to the original faulty version, debugging it with echos, and then comparing how that feels to using Xdebug.

Conclusion

Throughout this article you should have gained a lot of new knowledge. Don't hesitate to read it again, and help a friend set up Xdebug; there is nothing better for consolidating it. You may want to try replacing your usual debug routine with Xdebug, especially on larger, object-oriented projects, as they become much easier to debug, and you can follow the program flow even when you don't understand something right away.

Note that this is just the tip of the iceberg: Xdebug offers much more power, which is worth exploring as well. Please feel free to ask any questions in the comments and let me know what you think.
Control Systems Library for Python
Revision as of 17:36, 23 May 2009

This page collects some notes on a control systems library for Python. The plan is to create an alternative to the MATLAB Control System Toolbox™ that can be used in courses and for research. This page collects information about the toolbox, in preparation for actually writing some code. If you stumble across this page and know of a similar package or would like to contribute, let me know.

Installation instructions

I'm using the IPython environment, started as ipython -pylab, with the matplotlib extensions (which enable MATLAB-like plotting). I am doing all of my playing on OS X, using fink. Here's what I had to do to get the basic setup that I am using:

- Install SciPy - I did this using fink. Have to use the main/unstable tree.
- Install matplotlib - needed for plotting
- Install ipython - interactive python interface

Small snippet of code for testing if everything is installed:

    from scipy import *
    from pylab import *

    a = zeros(1000)
    a[:100] = 1
    b = fft(a)
    plot(abs(b))
    show()

Related documentation

Python documentation

- SciPy.org - main web site for SciPy
- IPython - enhanced shell for python
- matplotlib - 2D plotting for python
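As a taste of the kind of functionality such a toolbox would wrap, here is a dependency-free sketch (plain Python, no SciPy) of a first-order step response computed with forward Euler integration. The function name step_response is hypothetical, not part of any existing package:

```python
def step_response(a=1.0, dt=0.01, t_final=5.0):
    """Step response of the first-order system dx/dt = -a*x + a*u
    for a unit step input u(t) = 1, using forward Euler integration.
    Returns (times, states)."""
    steps = int(t_final / dt)
    x = 0.0
    times, states = [], []
    for k in range(steps + 1):
        times.append(k * dt)
        states.append(x)
        x += dt * (-a * x + a * 1.0)  # Euler update with u = 1
    return times, states

times, states = step_response()
# The state rises monotonically from 0 toward the steady-state value 1.
```

A real toolbox would of course lean on scipy.signal for the simulation and matplotlib (plot(times, states)) for display rather than hand-rolling the integrator.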
import {ZoneRegion} from 'js-joda/src/ZoneRegion.js'

ZoneRegion

A geographical region where the same time-zone rules apply.

Time-zone information is categorized as a set of rules defining when and how the offset from UTC/Greenwich changes. These rules are accessed using identifiers based on geographical regions, such as countries or states. The most common region classification is the Time Zone Database (TZDB), which defines regions such as 'Europe/Paris' and 'Asia/Tokyo'.

The region identifier, modeled by this class, is distinct from the underlying rules, modeled by ZoneRules. The rules are defined by governments and change frequently. By contrast, the region identifier is well-defined and long-lived. This separation also allows rules to be shared between regions if appropriate.

Specification for implementors

This class is immutable and thread-safe.
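In js-joda you would normally resolve a region identifier to its rules via ZoneId.of(...) with the js-joda-timezone plugin loaded. As a plugin-free illustration of what a TZDB region identifier denotes, the snippet below uses Node's built-in Intl API (not js-joda itself) to render the same instant under two region IDs:

```javascript
// One fixed instant: 2021-01-01T12:00Z.
const instant = new Date(Date.UTC(2021, 0, 1, 12, 0, 0));

// Format just the hour of that instant in a given TZDB region.
function hourIn(zoneId) {
  return new Intl.DateTimeFormat('en-GB', {
    timeZone: zoneId,  // a TZDB region ID, the same kind ZoneRegion models
    hour: 'numeric',
    hourCycle: 'h23',  // 0-23 clock, independent of locale defaults
  }).format(instant);
}

console.log(hourIn('Europe/Paris')); // 13 (CET, UTC+1 in January)
console.log(hourIn('Asia/Tokyo'));   // 21 (JST, UTC+9)
```

The identifier names the region; the platform's copy of the TZDB supplies the rules, which is exactly the separation between ZoneRegion and ZoneRules described above.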