Let's make a PHP file that will contain our function to generate keys.
<?php //functions.php
/**
 * generate_key
 *
 * Generates a random key.
 *
 * @return (string) random string of 64 characters
 */
function generate_key()
{
    $length = 32;
    $characters = array_merge(range(0, 9), range('a', 'z'), range('A', 'Z'));
    shuffle($characters);
    return hash_hmac('sha256', substr(implode('', $characters), 0, $length), time());
}
?>
The function above uses array_merge to combine the character sets a - z, A - Z and 0 - 9 into one array. shuffle is then used to rearrange the $characters array into a random order. The last line is the only part that may seem a bit confusing. implode retrieves the array elements and makes a string out of them, with the first parameter being the glue, that is, the character placed between each element in the resulting string. So now we have a string 62 characters in length. We then take the first 32 characters of that string and compute an HMAC-SHA256 of it, using the current timestamp as the key.
Now we'll make the page for our login form.
<html>
<head>
    <title>Login</title>
</head>
<body>
    <form action="login.php" method="post" enctype="application/x-www-form-urlencoded">
        Username <input type="text" name="username" /><br />
        Password <input type="password" name="password" /><br />
        <input type="submit" value="Login" />
    </form>
</body>
</html>
This is a simple, straightforward login form with a username and password field.
This is going to be our authentication page.
<?php //login.php
session_start();
require_once("functions.php");

if(isset($_POST['username'], $_POST['password'])) //verify we've got what we need to work with
{
    /*********database_credentials.php**************/
    $mysqli = new MySQLi('localhost', 'root', '', 'test'); //change to suit your database
    if($mysqli->connect_errno) //connection wasn't made
    {
        //handle the error here; we'll just redirect for now
        header("location: index.html");
        exit();
    }
    /************************************************/

    //We don't have to worry about SQL injections here
    $stmt = $mysqli->prepare("SELECT COUNT(username) AS total FROM credentials WHERE username = ? AND password = ?");
    //if the passwords in your table are hashed then you should apply the hashing and/or salts before passing it
    //they should in fact be hashed, preferably with a hashing algorithm of the SHA family
    $stmt->bind_param('ss', $_POST['username'], $_POST['password']);
    $stmt->execute();
    $stmt->bind_result($total); //place the result (total) into this variable
    $stmt->fetch(); //fill the result variable(s) bound

    //close all connections
    $stmt->close();
    $mysqli->close();

    //if total is equal to 1 then that means we have a match
    if((int)$total == 1)
    {
        session_regenerate_id(true); //delete old session variables

        /*********Explained below************/
        $_SESSION['user'] = $_POST['username'];
        $key = generate_key();
        $hash = hash_hmac('sha512', $_POST['password'], $key);
        $_SESSION['key'] = $key;
        $_SESSION['auth'] = hash_hmac('sha512', $key,
            hash_hmac('sha512', $_SESSION['user'] . (isset($_SERVER['HTTP_X_FORWARDED_FOR']) ?
                $_SERVER['HTTP_X_FORWARDED_FOR'] : $_SERVER['REMOTE_ADDR']), $hash));
        /************************************/

        if(!setcookie('hash', $hash, 0, '/')) //login isn't possible if the user's browser doesn't accept cookies
        {
            session_regenerate_id(true);
            header("location: your_login_page.php"); //your_login_page.php would be your login page
            exit();
        }
        header("location: protected_page.php");
    }
    else
    {
        //anything else we don't care about
        header("location: your_login_page.php");
    }
}
?>
Our $hash variable will be used to store the hash of the user's password with the randomly generated key in a cookie. It is important that it is stored in a cookie because we want to calculate the hash of two strings/keys to get one final hash, and the server and client must hold one key each. If either of those keys is wrong by even a single character, the user will not have access to protected pages.
Next we store our key within a session variable for later use. Now for the most important part of authentication, $_SESSION['auth']. This hash is composed of 4 things: $key, the username, the client's IP and $hash. The client's IP address is concatenated onto the username, deriving something like this
codeprada69.68.11.127

This will be hashed with SHA512 using $hash as the key. The hash produced from this will then be used as the key to apply another SHA512 hash to $key.
If a client is using a proxy then, more often than not, HTTP_X_FORWARDED_FOR will contain the client's IP instead of REMOTE_ADDR. This doesn't apply to anonymous proxies that don't set the value of HTTP_X_FORWARDED_FOR. The IP address is used so that if the user changes computers but tries to re-use the same cookies (a replay attack), or does anything else that changes their IP, the login session will no longer be valid.
Let's now make our checklogin.php
<?php //checklogin.php
session_start();

//don't want to read a cookie or session variable that doesn't exist
if(isset($_COOKIE['hash'], $_SESSION['key'], $_SESSION['user'], $_SESSION['auth']))
{
    if(strcmp($_SESSION['auth'], //remember this?
        hash_hmac('sha512', $_SESSION['key'], //see below for explanation
            hash_hmac('sha512', $_SESSION['user'] . (isset($_SERVER['HTTP_X_FORWARDED_FOR']) ?
                $_SERVER['HTTP_X_FORWARDED_FOR'] : $_SERVER['REMOTE_ADDR']), $_COOKIE['hash']))) !== 0)
    {
        //hashes don't match
        header("location: index.html");
        exit();
    }
}
else
{
    header("location: index.html");
    exit();
}
?>
We must now rebuild the keys collected from the cookie and the session, and verify that the final hashed string matches $_SESSION['auth']. The key is read from $_SESSION['key'], and a SHA512 hash is applied to it using, as the key, the string produced from a SHA512 hash of the username concatenated with the client's IP address, itself keyed with the hash retrieved from the cookie. This should produce the exact same string as $_SESSION['auth'] if no data was tampered with or changed.
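For illustration only, here is a rough sketch of the same two-key scheme in Java (the class and method names are mine, not part of the tutorial): the stored token is an HMAC-SHA512 of the server-side key, keyed with an HMAC-SHA512 of username + IP that is itself keyed with the client-side hash.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Hypothetical Java sketch of the tutorial's two-key scheme:
// token = HMAC-SHA512(data = serverKey, key = HMAC-SHA512(data = user + ip, key = clientHash))
public class AuthToken {
    static String hmacSha512(String data, String key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA512");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA512"));
        byte[] raw = mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : raw) hex.append(String.format("%02x", b)); // hex-encode like PHP's hash_hmac
        return hex.toString();
    }

    // Rebuild the token from its parts; a change in any part (server key, user,
    // IP, or the cookie hash) produces a completely different token.
    static String buildToken(String user, String ip, String clientHash, String serverKey) throws Exception {
        String inner = hmacSha512(user + ip, clientHash);
        return hmacSha512(serverKey, inner);
    }
}
```

Verification then amounts to rebuilding the token from the stored session values and the cookie hash, and comparing it with the stored token.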
To restrict access to a page this must be placed at the top of that page.
require_once("checklogin.php");
Here's our protected page
<?php //protected_page.php
require_once("checklogin.php");
?>
<html>
<head>
    <title>Sample Page</title>
</head>
<body>
    <h1>Welcome to the protected sample page.</h1>
    <h2><a href="logout.php">Logout Here</a></h2>
</body>
</html>
The final piece will be our logout page of course.
<?php //logout.php
session_start();
$_SESSION = array(); //clear the session variables
session_destroy();
setcookie("hash", "", time() - 3600);
header("location: index.html");
exit();
?>
I have to create a program which generates 10000 random numbers between 0 and 1 using Math.random(). We then have to display the largest and smallest numbers, and the average of all the numbers. We are supposed to use a loop. Here is the code I have so far. All I've figured out how to do is generate the random numbers. If somebody could help me from here it would be much appreciated!
public class RandAnalysis {
    public static void main(String[] args) {
        for(int i = 0; i < 10000; i++)
            System.out.println("Random number ["+ (i+1) +"]:" + Math.random());
    }
}
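One possible way to finish the exercise (a sketch, not the only solution): keep a running sum and track the max and min inside the loop, then report all three after the loop ends.

```java
// A possible completion of the exercise: accumulate sum, max, and min
// while generating the 10000 random numbers, then print the results.
public class RandAnalysis {
    public static double[] analyze(int count) {
        double sum = 0.0;
        double max = Double.NEGATIVE_INFINITY;
        double min = Double.POSITIVE_INFINITY;
        for (int i = 0; i < count; i++) {
            double r = Math.random();   // uniform in [0, 1)
            sum += r;
            if (r > max) max = r;
            if (r < min) min = r;
        }
        return new double[] { max, min, sum / count };
    }

    public static void main(String[] args) {
        double[] stats = analyze(10000);
        System.out.println("Largest:  " + stats[0]);
        System.out.println("Smallest: " + stats[1]);
        System.out.println("Average:  " + stats[2]);
    }
}
```

With 10000 draws the average should land close to 0.5, which is a quick sanity check on the loop.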
Misc #10178
refinements unactivated within refine block scope?
Description
I doubt I am seeing a bug, but I was hoping someone could clarify for me the reason why I am seeing what I see. I tried poring over the spec and wasn't quite able to pin it down.
My use case of refinements is not the normal one, so this is not high priority by any means.
But I am curious why, if I have defined a refinement in, say, module A, and then module B is using A, and B itself has a refine block, A's refinements will not be active within it.
So:
module A
  refine Time
    def weekday
      self.strftime("%A")
    end
  end

module B
  using A
  puts Time.now.weekday # 1

  refine ActiveSupport::Time
    def method_missing(method, *args)
      puts Time.now.weekday # 2
      self.to_time.send(method.to_sym, args.first)
    end

  puts Time.now.weekday # 3
end
1 and 3 will be defined, but 2 will not. Is it because according to:
"The scope of a refinement is lexical in the sense that, when control is transferred outside the scope (e.g., by an invocation of a method defined outside the scope, by load/require, etc...), the refinement is deactivated."
refine transfers control outside the scope of the module, so no matter where I put using, it will not have the refinements of A active?
I apologize for my ignorance and greatly appreciate your answers on this matter.
History
#1
[ruby-core:64598]
Updated by Nobuyoshi Nakada over 1 year ago
I can't get your point.
Module#refine requires a block, so your code doesn't work, simply.
#2
[ruby-core:64599]
Updated by Alexander Moore-Niemi over 1 year ago
Nobuyoshi Nakada wrote:
I can't get your point.
Module#refine requires a block, so your code doesn't work, simply.
Yes, I mistakenly left out the "do" after refine ActiveSupport::Time (which should be ActiveSupport::TimeWithZone) and refine Time; with it, the code does indeed work, and my question still stands.
#3
[ruby-core:64601]
Updated by Alexander Moore-Niemi over 1 year ago
Here is an executable version of what I was roughing out above, I apologize for not vetting it beforehand to prevent confusion:
require 'active_support/core_ext'

module A
  refine Time do
    def weekday
      self.strftime("%A")
    end
  end
end

module B
  using A
  puts Time.now.weekday # 1

  refine ActiveSupport::TimeWithZone do
    def method_missing(method, *args) # undefined
      puts Time.now.weekday # 2
      self.to_time.send(method.to_sym, args.first)
    end
  end

  puts Time.now.weekday # 3
end
With #2 in, I will error out for undefined method.
#4
[ruby-core:64604]
Updated by Nobuyoshi Nakada over 1 year ago
- Status changed from Feedback to Closed
In general, the scope inside a method definition is different from outside.
Consider method arguments and class/module level local variables.
#5
[ruby-core:64611]
Updated by Alexander Moore-Niemi over 1 year ago
Nobuyoshi Nakada wrote:
In general, the scope inside a method definition is different from outside.
Consider method arguments and class/module level local variables.
So I was correct, in that refine invokes a different scope where the refinements aren't activated? Ok, cool.
That's kind of too bad though, because as you can see in my example, it means it is harder to reuse a refinement across different object types. (In my production code I actually have to just duplicate code, which is unfortunate.) I imagine there are no plans to change that in the future, right? That, plus the indirect method access (when is that going to happen?), could let me do this:
def method_missing(method, *args)
  if Time.respond_to?(method.to_sym)
    self.to_time.send(method.to_sym, args.first)
  end
end
Thanks again for your responses.
#6
[ruby-core:64612]
Updated by Alexander Moore-Niemi over 1 year ago
I had posted some more code but remembered "send" doesn't apply yet! Sorry for my confusion. Any plans on indirect method access?
Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 2.1.1
Description
The JoinColumn does not pick up the referenced column's definition, which includes the length definition, so its length is assigned the default value of 255.
@Entity
public class Student {
    @Id @Column(name="id", length=128, nullable=false)
    private String id;

    @Column(name="sName", length=255)
    private String sName;

    @ManyToMany
    @JoinTable(
        name="student_course_map",
        joinColumns={...},
        inverseJoinColumns={...}
    )
    public Collection getCourses()
    ...
}
@Entity
public class Courses
We can see that the student id length has been defined as 128, and there is no length definition on the JoinColumn student_id, so the JoinColumn gets set to the default value of 255.
A warning message like this will occur:
WARN [Schema] Existing column "student_id" on table "test.student_course_map" is incompatible with the same column in the given schema definition. Existing column:
Full Name: student_course_map.student_id
Type: varchar
Size: 128
Default: null
Not Null: true
Given column:
Full Name: student_course_map.student_id
Type: varchar
Size: 255
Default: null
Not Null: true
Activity
Thanks, Albert. I got your patch and applied it to my server (geronimo-tomcat7-javaee6-3.0.0). The warning message mentioned in this JIRA did disappear, but another message occurred: same message, same problem. The difference is that the column is defined in a ManyToOne annotation.
The source is below.
@ManyToOne(optional=true, cascade={CascadeType.PERSIST, CascadeType.MERGE})
@JoinColumn(name="classField")
private Location schoolField;
The message is below.
2012-09-05 09:19:58,293 WARN [Schema] Existing column "classField" on table "test.classField" is incompatible with the same column in the
given schema definition. Existing column:
Full Name: classes.classField
Type: varchar
Size: 255
Default: null
Not Null: false
Given column:
Full Name: classes.classField
Type: varchar
Size: 128
Default: null
Not Null: false
XieZhi,
Can you provide a more concrete test case and the conditions reproducing the failure?
I don't think the new message is in error.
What the message is saying is that the classField column in the database has length 255, whereas the id field of the @OneToMany side has column length 128. This means OpenJPA recognized the length=128 set on the id field. Before the patch, the join column did not pick up the id column length and defaulted to 255, so this message did not happen. Either the id column has to match the join column length, or the join column length needs to match the id length.
This is just the reverse of the original scenario.
Albert Lee.
This problem only happens when
This problem also affects the MappingTool's create-table operation, which always assumes the default VARCHAR/CHAR length defined in the database dictionary.
Attached is a patch for trunk. Please try it and see if it resolves your issue.
For a fix/commit to the 2.2.x, 2.1.x and 2.0.x releases, you will need to work with the IBM service channel to get this fix into those releases.
Albert Lee. | https://issues.apache.org/jira/browse/OPENJPA-2255 | CC-MAIN-2016-07 | en | refinedweb |
General Naming Conventions
The general naming conventions discuss choosing the best names for the elements in your libraries. These guidelines apply to all identifiers. Later sections discuss naming specific elements such as namespaces or properties.
Word Choice

Do not use underscores, hyphens, or any other nonalphanumeric characters.
Do not use Hungarian notation.
Hungarian notation is the practice of including a prefix in identifiers to encode some metadata about the parameter, such as the data type of the identifier.
Avoid using identifiers that conflict with keywords of widely used programming languages.
While CLS-compliant languages must provide a way to use keywords as regular words, best practices dictate that you do not force developers to know how to do this. For most programming languages, the language reference documentation contains a list of the keywords used by the languages. The following table provides links to the reference documentation for some widely used programming languages.
Abbreviations and Acronyms
In general, you should not use abbreviations or acronyms. These make your names less readable. Similarly, it is difficult to know when it is safe to assume that an acronym is widely recognized.
For capitalization rules for abbreviations, see Capitalization Rules for Acronyms.
Do not use abbreviations or contractions as parts of identifier names.
For example, use OnButtonClick rather than OnBtnClick.
Do not use any acronyms that are not widely accepted, and even then, use them only when necessary.
Language-Specific Names
Do use semantically interesting names rather than language-specific keywords for type names. For example, GetLength is a better name than GetInt.
Do use a generic common language runtime (CLR) type name, rather than a language-specific name, in the rare cases when an identifier has no semantic meaning beyond its type.
For example, a method that converts data to Int16 should be named ToInt16, not ToShort because Short is the language-specific type name for Int16.
The following table shows the language-specific type names for common programming languages and the CLR counterpart.
Do use a common name, such as value or item, rather than repeating the type name, in the rare cases when an identifier has no semantic meaning and the type of the parameter is not important.
For more information on design guidelines, see the "Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries" book by Krzysztof Cwalina and Brad Abrams, published by Addison-Wesley, 2005. | https://msdn.microsoft.com/en-us/library/ms229045(v=vs.80).aspx | CC-MAIN-2016-07 | en | refinedweb |
howdy,
this code is a random letter game, it selects a letter at random then gives the user 10 chances to guess. i have not implemented the do/while loop to allow for all 10 guesses.
my question is:
this code compiles with no errors and executes all of the way through about half the time. the rest of the time it executes all of the way through and issues a segmentation fault error.
Code:
#include <iostream>
#include <stdlib.h>
#include <time.h>
#include <string.h>

void guess (char* letter);

using namespace std;

int main ()
{
    system("clear");
    char alphabet[][26] = {
        {'a','b','c','d','e','f','g','h','i','j','k','l','m',
         'n','o','p','q','r','s','t','u','v','w','x','y','z'},
        {'A','B','C','D','E','F','G','H','I','J','K','L','M',
         'N','O','P','Q','R','S','T','U','V','W','X','Y','Z'}
    };
    srand(time(0));
    {
        int u_l = (rand()%2);
        int alph_answer = (rand()%26);
        char alpha_rand = alphabet[u_l][alph_answer];
        cout<<"Random letter: "<<alpha_rand<<endl;
        guess(&alpha_rand);
    }
    return 0;
}

void guess (char* letter)
{
    char* alpha_guess;
    int guess_num = 1;
    unsigned int result = 0;
    cout<<"I am thinking of a letter\n";
    cout<<"it could be upper or lower case\n";
    cout<<"I'll give you 10 trys to guess what it is\n";
    cout<<"Take a try.\n";
    int g;
    int i;
    cin>>alpha_guess;
    i = *letter;
    g = *alpha_guess;
    cout<<"Guess: "<<g<<endl;
    cout<<"Letter: "<<i<<endl;
    // result = (strcmp (alpha_guess, letter));
    cout<<"Result: "<<result<<endl;
    if(i != g)
    {
        cout<<"Wrong! Try again.\n";
        cout<<"you've guessed: "<<guess_num<<" times"<<endl;
        cout<<"you have: "<<(10 - guess_num)<<" trys remaining"<<endl;
        guess_num++;
    }
    else
        cout<<"Correct! you got it in: "<<guess_num<<" guesses"<<endl;
}
what could the problem be?
thanks
M.R. | http://cboard.cprogramming.com/cplusplus-programming/12721-runtime-error.html | CC-MAIN-2016-07 | en | refinedweb |
In this tutorial, we’ll see how you can use dotMemory to locate and fix memory leaks in your apps. But before moving on, let’s agree on what a memory leak is.
What Is a Memory Leak?
According to Wikipedia, a memory leak occurs when a program incorrectly manages memory allocations in such a way that memory which is no longer needed is not released, making it impossible to reclaim.
Contents
Sample App
Step 1. Run Profiler
Step 2. Get Snapshots
Step 3. Compare Snapshots
Step 4. Analyze Snapshot
Step 5. Check for Other Leaks
Sample App
Once again, the app we’ll use for our tutorial is Conway’s Game of Life. Please download and unpack the archive before proceeding any further.
Let’s assume we want to return some money spent on the Game of Life development and decide to add a couple of windows* that show various ads to users. Following worst practices, we show our ad windows windows use a timer (based on the
DispatcherTimer class).
You can see the implementation of the
AdWindow class in the AdWindow.cs file.
So, the feature is added and now is the best time to test it. Let’s run dotMemory and ensure that the ad window doesn't affect the app’s memory usage (in other words, it is correctly allocated and collected).
Step 1. Run Profiler
- Open the Game of Life solution in Visual Studio.
- Run dotMemory using the menu ReSharper | Profile | Profile Startup Project (Memory).
This will open the Profiler Configuration window.
- In the Profiler Configuration window, turn on Start collecting allocation data immediately and start profiling.

Step 2. Get Snapshots

After the app starts, the ad windows will appear.
- Click the Get Snapshot button in dotMemory.
This will capture the data and add the snapshot to the snapshot area. Getting a snapshot doesn’t interrupt the profiling process, thus allowing us to get another snapshot.
- Close the ad windows in our app.
- Get a snapshot one more time by clicking the Get Snapshot button in dotMemory.
- End the profiling session by closing the Game of Life app.
The main page now contains two snapshots.
Step 3. Compare Snapshots
Now, we’ll compare and contrast the two collected snapshots. What do we want to see? If everything works fine, the ad windows Group by Namespace in the list of views.
- Open the
GameOfLife namespace.
What’s that? Two
GameOfLife.AdWindow objects are in the Survived objects column, which means that the ad windows are still alive. After we closed the windows, the objects should have been removed from the heap. Nevertheless, something prevented them from being collected.
It’s time to start our investigation and find out why our windows were not removed!
Step 4. Analyze Snapshot
Step 4. Analyze Snapshot

As mentioned in Tutorial 1 - Getting Started with dotMemory, the first step of the analysis is to open the object set consisting of the two objects. To do this, click the number 2 in the Survived objects column next to the GameOfLife.AdWindow class.
As the object exists in both snapshots, dotMemory will prompt you to specify in which snapshot the object should be shown. Of course, we’re interested in the last snapshot where the windows should have been collected.
- Select Open “Survived Objects” in the newer snapshot and click OK.
This will show the object set “All objects of the AdWindow class that exist both in snapshot #1 and #2” in the Type List view. According to the view, the object set contains 2 instances with the shallow size of 952 B. These instances exclusively retain other objects with the total size of 10,676 B.
We’re interested not in the
AdWindow objects themselves, but in those that retain our ad windows in memory. To figure this out, we should look at the selected object set using the Group by Dominators view. This will show us dominators, the objects that exclusively retain our ad windows in memory.
- To view the list of dominators for the object set, click Group by Dominators in the list of views.
As you can see, ad windows are retained in memory by event handlers, which, in turn, are referenced by instances of the
DispatcherTimer class. Let's continue our investigation and try to find more details about those timer objects.
- Right click the DispatcherTimer object set in the list and select Open this object set.
This will open the
DispatcherTimer object set in the Type List view. Now, our goal is to understand how these timers relate to the AdWindow objects. In other words, how do the timers reference our ad windows? To get this info, we should dive deeper and take a look at a specific instance of the DispatcherTimer class.
- Open the Instances view and double click any instance of the
DispatcherTimer class. It doesn't really matter which one you choose, as they obviously have the same relationship with our ad windows.
By default, the instance is shown using the Outgoing References view. This view is used to get the details on the instance’s fields and references.
As you remember, the ad windows are retained by event handlers, which, in turn, are referenced by the
DispatcherTimer instances. The Outgoing References view shows how exactly this is happening: the ad window is referenced through the Tick event handler. It appears that the AdWindow instances are subscribed to the Tick event of the timers. Let's look at this in the code.
- To quickly find the required call in the code, let’s use dotMemory. Simply switch to the Creation Stack Traces view.
Here it is! The latest call in the stack that actually creates the timer is the AdWindow constructor. Let’s find it in the code.
- Switch to Visual Studio with the GameOfLife solution and locate the AdWindow constructor.
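The original C# listing is not reproduced in this copy, but the pattern and its fix can be sketched in Java (all names below are mine): the window subscribes a handler to the timer's tick event in its constructor, so the long-lived timer keeps the "closed" window reachable until the handler is unsubscribed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Java analogue of the leak: a long-lived timer holds its
// listeners, so a closed window stays reachable until it unsubscribes.
class TickTimer {
    final List<Runnable> tickListeners = new ArrayList<>();
    void onTick(Runnable l) { tickListeners.add(l); }
    void removeTick(Runnable l) { tickListeners.remove(l); }
}

class AdWindow {
    private final TickTimer timer;
    private final Runnable handler = this::changeAds;

    AdWindow(TickTimer timer) {
        this.timer = timer;
        timer.onTick(handler);     // subscription: timer now references the window
    }

    void changeAds() { /* rotate the ad content */ }

    void close() {
        timer.removeTick(handler); // the fix: unsubscribe so the timer lets go
    }
}
```

In the tutorial's terms, the fix is to unsubscribe the Tick handler when the ad window closes, which breaks the timer-to-window reference and lets the GC collect the window.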
- Now, to make sure the leak is fixed, let’s build our solution and run the profiling again. Click the Profile item in dotMemory’s menu and repeat Step 2. Get Snapshots and Step 3. Compare Snapshots.
That's it! The AdWindow instances are now in the Dead objects column, which means they were successfully collected. By the way, if you look at the inspections area on the Snapshot Overview page, you'll notice that dotMemory has an Event handlers leak check that already contains our AdWindow objects.
Step 5. Check for Other Leaks
We've fixed the event handler leak, and the ad windows are now successfully collected by GC. But what about the timers that caused our problems? If everything works fine, the timers should be collected as well and should be absent in the second snapshot. Let’s take a look.
- Open the second snapshot in dotMemory. To do this, click the GameOfLife.exe step (the beginning of your investigation) and open the snapshot. As you can see, there are 8 DispatcherTimer objects in the heap.
- Open the
DispatcherTimer object set by double clicking it.
This will open the set in the Type List view. Now, we need to ensure that this set doesn’t contain the timers created by the ad windows. As the timers were created in the AdWindow constructor, the easiest way to do this is to look at the set using the Group by Creation Stack Trace view.
- Click Group by Creation Stack Trace in the list of views. As you can see, the timers created by the AdWindow constructor call were not collected. They exist in the snapshot even though the ad windows were closed and removed from memory. This looks like one more memory leak that we should analyze.
- Double click the AdWindow.ctor(Window owner) call.
dotMemory will show us the object set (consisting of two timers) in the Type List view.
To figure out what retains the timers in memory, let’s look at the Group by Dominators view.
- Click Group by Dominators in the list of views.
The list of dominators contains just one row, Not exclusively retained objects, which means that each timer is retained in memory by more than one object.
In such cases, the best solution is to look at main retention paths of such 'not exclusively retained' object. For this purpose, dotMemory has a view called Group by Similar Retention.
- Click Group by Similar Retention in the list of views.
The Group by Similar Retention view groups objects in a set by similarity of their retention paths. In addition, this view shows the two most dissimilar retention paths for each group. Typically, this is enough to understand what prevents your object from being collected.
- Click any timer in the list.
As you can see, our timers have slightly different retention paths. In fact, they differ only in one additional
PriorityItem object; therefore, in our example there's no big difference which of the timer instances to analyze.
The first retention path of our timers leads us to the
DispatcherTimer list, which is global and stores all timers in the app.
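The tutorial's fix listing is missing from this copy; as a hedged Java analogue (names mine), the second leak comes from a global registry that retains every started timer, so the timer has to be stopped explicitly when its window closes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a global registry keeps every started timer reachable,
// so closing the window alone is not enough to release it.
class AdTimer {
    static final List<AdTimer> activeTimers = new ArrayList<>();
    void start() { activeTimers.add(this); }
    void stop()  { activeTimers.remove(this); }
}

class AdPopup {
    private final AdTimer timer = new AdTimer();
    AdPopup() { timer.start(); }
    void close() { timer.stop(); } // without stop(), the global list retains the timer
}
```

Stopping the timer removes it from the global list, so nothing retains it (or anything it references) once the window goes away.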
- Rebuild the solution.
- Click the Profile item in the dotMemory’s menu and repeat Step 2. Get Snapshots and Step 3. Compare Snapshots.
- Open the second snapshot in the Type List view.
As you can see, there are only 6
DispatcherTimer objects instead of 8 in the snapshot where we determined the leak. To ensure that GC collected the timers used by the ad windows, let's look at these timers from the Group by Creation Stack Trace view.
- Double click the DispatcherTimer objects and then click the Back Traces link in the list of views.
Great! There is no AdWindow constructor in the list, which means that the leak has been successfully fixed.
Of course, this type of leak doesn’t seem critical, especially for our app. If we didn’t use dotMemory, we may have never even noticed the issue. Nevertheless, in other apps (for example, server-side ones working 24/7) this leak could manifest itself after some time by causing an OutOfMemory exception. | http://confluence.jetbrains.com/display/NETCOM/Tutorial+2+-+How+to+Find+a+Memory+Leak+with+dotMemory | CC-MAIN-2016-07 | en | refinedweb |
{-| Low-level messaging between this client and the MongoDB server, see Mongo Wire Protocol (<>). This module is not intended for direct use. Use the high-level interface at "Database.MongoDB.Query" and "Database.MongoDB.Connection" instead. -} {-# LANGUAGE RecordWildCards, StandaloneDeriving, OverloadedStrings, FlexibleContexts, TupleSections, TypeSynonymInstances, MultiParamTypeClasses, FlexibleInstances, UndecidableInstances #-} module Database.MongoDB.Internal.Protocol ( FullCollection, -- * Pipe Pipe, newPipe, send, call, -- ** Notice Notice(..), InsertOption(..), UpdateOption(..), DeleteOption(..), CursorId, -- ** Request Request(..), QueryOption(..), -- ** Reply Reply(..), ResponseFlag(..), -- * Authentication Username, Password, Nonce, pwHash, pwKey ) where import Prelude as X import Control.Applicative ((<$>)) import Control.Arrow ((***)) import Data.ByteString.Lazy as B (length, hPut) import System.IO.Pipeline (IOE, Pipeline, newPipeline, IOStream(..)) import qualified System.IO.Pipeline as P (send, call) import System.IO (Handle, hClose) import Data.Bson (Document, UString) import Data.Bson.Binary import Data.Binary.Put import Data.Binary.Get import Data.Int import Data.Bits import Data.IORef import System.IO.Unsafe (unsafePerformIO) import qualified Crypto.Hash.MD5 as MD5 (hash) import Data.UString as U (pack, append, toByteString) import System.IO.Error as E (try) import Control.Monad.Error import System.IO (hFlush) import Database.MongoDB.Internal.Util (whenJust, hGetN, bitOr, byteStringHex) -- * Pipe type Pipe = Pipeline Response Message -- ^ Thread-safe TCP connection with pipelined requests newPipe :: Handle -> IO Pipe -- ^ Create pipe over handle newPipe handle = newPipeline $ IOStream (writeMessage handle) (readMessage handle) (hClose handle) send :: Pipe -> [Notice] -> IOE () -- ^ Send notices as a contiguous batch to server with no reply. Throw IOError if connection fails. 
send pipe notices = P.send pipe (notices, Nothing)

call :: Pipe -> [Notice] -> Request -> IOE (IOE Reply)
-- ^ Send notices and request as a contiguous batch to server and return reply promise, which will block when invoked until reply arrives. This call and resulting promise will throw IOError if connection fails.
call pipe notices request = do
    requestId <- genRequestId
    promise <- P.call pipe (notices, Just (request, requestId))
    return $ check requestId <$> promise
 where
    check requestId (responseTo, reply) = if requestId == responseTo then reply else
        error $ "expected response id (" ++ show responseTo ++ ") to match request id (" ++ show requestId ++ ")"

-- * Message

type Message = ([Notice], Maybe (Request, RequestId))
-- ^ A write notice(s) with getLastError request, or just query request.
-- Note, that requestId will be out of order because request ids will be generated for notices after the request id supplied was generated. This is ok because the mongo server does not care about order just uniqueness.

writeMessage :: Handle -> Message -> IOE ()
-- ^ Write message to socket
writeMessage handle (notices, mRequest) = ErrorT . E.try $ do
    forM_ notices $ \n -> writeReq . (Left n,) =<< genRequestId
    whenJust mRequest $ writeReq . (Right *** id)
    hFlush handle
 where
    writeReq (e, requestId) = do
        hPut handle lenBytes
        hPut handle bytes
     where
        bytes = runPut $ (either putNotice putRequest e) requestId
        lenBytes = encodeSize . toEnum . fromEnum $ B.length bytes
        encodeSize = runPut . putInt32 . (+ 4)

type Response = (ResponseTo, Reply)
-- ^ Message received from a Mongo server in response to a Request

readMessage :: Handle -> IOE Response
-- ^ read response from socket
readMessage handle = ErrorT $ E.try readResp  where
    readResp = do
        len <- fromEnum . decodeSize <$> hGetN handle 4
        runGet getReply <$> hGetN handle len
    decodeSize = subtract 4 . runGet getInt32

type FullCollection = UString
-- ^ Database name and collection name with period (.) in between. Eg. \"myDb.myCollection\"

-- ** Header

type Opcode = Int32

type RequestId = Int32
-- ^ A fresh request id is generated for every message

type ResponseTo = RequestId

genRequestId :: (MonadIO m) => m RequestId
-- ^ Generate fresh request id
genRequestId = liftIO $ atomicModifyIORef counter $ \n -> (n + 1, n)  where
    counter :: IORef RequestId
    counter = unsafePerformIO (newIORef 0)
    {-# NOINLINE counter #-}

-- *** Binary format

putHeader :: Opcode -> RequestId -> Put
-- ^ Note, does not write message length (first int32), assumes caller will write it
putHeader opcode requestId = do
    putInt32 requestId
    putInt32 0
    putInt32 opcode

getHeader :: Get (Opcode, ResponseTo)
-- ^ Note, does not read message length (first int32), assumes it was already read
getHeader = do
    _requestId <- getInt32
    responseTo <- getInt32
    opcode <- getInt32
    return (opcode, responseTo)

-- ** Notice

-- | A notice is a message that is sent with no reply
data Notice =
      Insert {
        iFullCollection :: FullCollection,
        iOptions :: [InsertOption],
        iDocuments :: [Document]}
    | Update {
        uFullCollection :: FullCollection,
        uOptions :: [UpdateOption],
        uSelector :: Document,
        uUpdater :: Document}
    | Delete {
        dFullCollection :: FullCollection,
        dOptions :: [DeleteOption],
        dSelector :: Document}
    | KillCursors {
        kCursorIds :: [CursorId]}
    deriving (Show, Eq)

data InsertOption = KeepGoing
    -- ^ If set, the database will not stop processing a bulk insert if one fails (eg due to duplicate IDs). This makes bulk insert behave similarly to a series of single inserts, except lastError will be set if any insert fails, not just the last one. (new in 1.9.1)
    deriving (Show, Eq)

data UpdateOption =
      Upsert  -- ^ If set, the database will insert the supplied object into the collection if no matching document is found
    | MultiUpdate  -- ^ If set, the database will update all matching objects in the collection. Otherwise only updates first matching doc
    deriving (Show, Eq)

data DeleteOption = SingleRemove
    -- ^ If set, the database will remove only the first matching document in the collection. Otherwise all matching documents will be removed
    deriving (Show, Eq)

type CursorId = Int64

-- *** Binary format

nOpcode :: Notice -> Opcode
nOpcode Update{} = 2001
nOpcode Insert{} = 2002
nOpcode Delete{} = 2006
nOpcode KillCursors{} = 2007

putNotice :: Notice -> RequestId -> Put
putNotice notice requestId = do
    putHeader (nOpcode notice) requestId
    case notice of
        Insert{..} -> do
            putInt32 (iBits iOptions)
            putCString iFullCollection
            mapM_ putDocument iDocuments
        Update{..} -> do
            putInt32 0
            putCString uFullCollection
            putInt32 (uBits uOptions)
            putDocument uSelector
            putDocument uUpdater
        Delete{..} -> do
            putInt32 0
            putCString dFullCollection
            putInt32 (dBits dOptions)
            putDocument dSelector
        KillCursors{..} -> do
            putInt32 0
            putInt32 $ toEnum (X.length kCursorIds)
            mapM_ putInt64 kCursorIds

iBit :: InsertOption -> Int32
iBit KeepGoing = bit 0

iBits :: [InsertOption] -> Int32
iBits = bitOr . map iBit

uBit :: UpdateOption -> Int32
uBit Upsert = bit 0
uBit MultiUpdate = bit 1

uBits :: [UpdateOption] -> Int32
uBits = bitOr . map uBit

dBit :: DeleteOption -> Int32
dBit SingleRemove = bit 0

dBits :: [DeleteOption] -> Int32
dBits = bitOr . map dBit

-- ** Request

-- | A request is a message that is sent with a 'Reply' expected in return
data Request =
      Query {
        qOptions :: [QueryOption],
        qFullCollection :: FullCollection,
        qSkip :: Int32,  -- ^ Number of initial matching documents to skip
        qBatchSize :: Int32,  -- ^ The number of document to return in each batch response from the server. 0 means use Mongo default. Negative means close cursor after first batch and use absolute value as batch size.
        qSelector :: Document,  -- ^ \[\] = return all documents in collection
        qProjector :: Document  -- ^ \[\] = return whole document
    } | GetMore {
        gFullCollection :: FullCollection,
        gBatchSize :: Int32,
        gCursorId :: CursorId}
    deriving (Show, Eq)

data QueryOption =
      TailableCursor
    | SlaveOK  -- ^ Allow query of replica slave. Normally these return an error except for namespace "local".
    | NoCursorTimeout  -- ^ The server normally times out idle cursors after 10 minutes to prevent a memory leak in case a client forgets to close a cursor. Set this option to allow a cursor to live forever until it is closed.
    | AwaitData  -- ^ Use with TailableCursor. If we are at the end of the data, block for a while rather than returning no data. After a timeout period, we do return as normal.
--  | Exhaust  -- ^ Stream the data down full blast in multiple "more" packages, on the assumption that the client will fully read all data queried. Faster when you are pulling a lot of data and know you want to pull it all down. Note: the client is not allowed to not read all the data unless it closes the connection.
    -- Exhaust commented out because not compatible with current `Pipeline` implementation
    | Partial  -- ^ Get partial results from a _mongos_ if some shards are down, instead of throwing an error.
    deriving (Show, Eq)

-- *** Binary format

qOpcode :: Request -> Opcode
qOpcode Query{} = 2004
qOpcode GetMore{} = 2005

putRequest :: Request -> RequestId -> Put
putRequest request requestId = do
    putHeader (qOpcode request) requestId
    case request of
        Query{..} -> do
            putInt32 (qBits qOptions)
            putCString qFullCollection
            putInt32 qSkip
            putInt32 qBatchSize
            putDocument qSelector
            unless (null qProjector) (putDocument qProjector)
        GetMore{..} -> do
            putInt32 0
            putCString gFullCollection
            putInt32 gBatchSize
            putInt64 gCursorId

qBit :: QueryOption -> Int32
qBit TailableCursor = bit 1
qBit SlaveOK = bit 2
qBit NoCursorTimeout = bit 4
qBit AwaitData = bit 5
--qBit Exhaust = bit 6
qBit Partial = bit 7

qBits :: [QueryOption] -> Int32
qBits = bitOr . map qBit

-- ** Reply

-- | A reply is a message received in response to a 'Request'
data Reply = Reply {
    rResponseFlags :: [ResponseFlag],
    rCursorId :: CursorId,  -- ^ 0 = cursor finished
    rStartingFrom :: Int32,
    rDocuments :: [Document]
    } deriving (Show, Eq)

data ResponseFlag =
      CursorNotFound  -- ^ Set when getMore is called but the cursor id is not valid at the server. Returned with zero results.
    | QueryError  -- ^ Query error. Returned with one document containing an "$err" field holding the error message.
    | AwaitCapable  -- ^ For backward compatibility: Set when the server supports the AwaitData query option. If it doesn't, a replica slave client should sleep a little between getMore's
    deriving (Show, Eq, Enum)

-- * Binary format

replyOpcode :: Opcode
replyOpcode = 1

getReply :: Get (ResponseTo, Reply)
getReply = do
    (opcode, responseTo) <- getHeader
    unless (opcode == replyOpcode) $ fail $ "expected reply opcode (1) but got " ++ show opcode
    rResponseFlags <- rFlags <$> getInt32
    rCursorId <- getInt64
    rStartingFrom <- getInt32
    numDocs <- fromIntegral <$> getInt32
    rDocuments <- replicateM numDocs getDocument
    return (responseTo, Reply{..})

rFlags :: Int32 -> [ResponseFlag]
rFlags bits = filter (testBit bits . rBit) [CursorNotFound ..]

rBit :: ResponseFlag -> Int
rBit CursorNotFound = 0
rBit QueryError = 1
rBit AwaitCapable = 3

-- * Authentication

type Username = UString
type Password = UString
type Nonce = UString

pwHash :: Username -> Password -> UString
pwHash u p = pack . byteStringHex . MD5.hash . toByteString $ u `U.append` ":mongo:" `U.append` p

pwKey :: Nonce -> Username -> Password -> UString
pwKey n u p = pack . byteStringHex . MD5.hash . toByteString . U.append n . U.append u $ pwHash u p
| http://hackage.haskell.org/package/mongoDB-1.0.1/docs/src/Database-MongoDB-Internal-Protocol.html | CC-MAIN-2016-07 | en | refinedweb |
google_fonts 3.0.1
dependencies:
  google_fonts: ^3.0.1
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:google_fonts/google_fonts.dart'; | https://pub.dev/packages/google_fonts/install | CC-MAIN-2022-40 | en | refinedweb |
Abstraction In Apex
Hiding the complexity and exposing only the functionality is known as Abstraction.
Real-time example of Abstraction: human beings are able to speak because GOD gave them the functionality of speaking, but how they speak, and which internal body organs they use for speaking, is completely hidden from them. So finally we can say they have the functionality of speaking, but the complexity is completely hidden from them.
Note: Always remember, abstraction is for end users, not for developers.
In Apex, abstraction can be achieved in 2 ways:
- Abstract Class
- Interface
Abstract Class: There are some points which you should always remember while working with abstract classes, as below:
- If you make a class abstract, there is no rule that you need to have an abstract method inside the abstract class. But the vice-versa is not true, i.e. if you have an abstract method inside a class, then you need to declare that class as abstract.
- An abstract class can have abstract methods (a method without a body is known as an abstract method) as well as concrete methods (a method with a body is known as a concrete method).
- We can have a constructor in an abstract class, but we cannot instantiate the abstract class, which means we cannot create an object of an abstract class.
- An abstract class always works in conjunction with a child class, which means it is the responsibility of the child class to override the abstract methods of the abstract parent class.

Note: If you do not want the immediate child class to override the abstract method of the parent class, then you need to declare that child class as abstract too.
Syntax of an Abstract Class
public abstract class ApexAbstruct {
    public ApexAbstruct() { }  // default constructor

    public abstract void show();  // abstract method, having no body

    public void display() {  // concrete method, having body
        System.debug('This is a normal method');
    }
}
Implemented/Child Class
public class ApexClass extends ApexAbstruct {
    // Override the abstract method of the parent class
    public override void show() {
        System.debug('This is an Abstraction example');
    }
}
Create an object and call the method.
ApexClass ap = new ApexClass();
ap.show();
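As the earlier note says, if the immediate child class should not override the abstract method, that child class must itself be declared abstract, and a later subclass provides the body. A sketch with hypothetical class names:

```apex
// MiddleClass (hypothetical) skips the override, so it must be abstract itself.
public abstract class MiddleClass extends ApexAbstruct {
}

// A concrete grandchild finally provides the body of show().
public class ConcreteClass extends MiddleClass {
    public override void show() {
        System.debug('Overridden two levels down');
    }
}
```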
Interfaces: There are some points which you should always remember while working with interfaces, as below:
- All the methods of an interface are by default abstract, which means you do not need to explicitly write the abstract keyword in front of a method.
- No need to define an access specifier in front of a method, because all the methods of an interface are global by default.
- An interface always extends another interface.
- A class always implements an interface.
- It is the responsibility of the child class to override all the methods of the implemented interface, as all the methods of an interface are abstract by default.
Note: Interfaces are used to achieve 100% abstraction.
Syntax of an Interface
public interface ApexInterface {
    void show();
}
Implemented class
public class Apex implements ApexInterface {
    public void show() {
        System.debug('This is an Apex interface example');
    }
}
Create an object and call the method
Apex ap = new Apex();
ap.show();
Output:
| https://salesforcedrillers.com/learn-salesforce/abstraction-in-apex/ | CC-MAIN-2022-40 | en | refinedweb |
This post will demonstrate how to create a Web API that can be accessed across different domains, also known as CORS (Cross-Origin Resource Sharing).
Create Visual Studio Solution.
Select Visual Studio Web Application Project.
I am going to keep the name as Nilesh.WebAPISQLCORS, as this Web API will call a SQL Server Stored Procedure to insert data into a SQL table.
Select Project Type Empty and Core reference as Web API.
Click OK.
It will create Project Solution as shown in below image.
Now let’s add Controller Action.
Select Web API 2 Controller Empty
Click on Add. Now add a name for the Controller. I am going to keep the name as Employee.
It will add controller class in folder as shown below
Now let's add methods for the Get and Post actions.
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

namespace Nilesh.WebAPISQLCORS.Controllers
{
    public class DataInfo
    {
        public string Name { get; set; }
        public string Email { get; set; }
        public string ContactNumber { get; set; }
    }

    public class EmployeeController : ApiController
    {
        string connectionString = System.Configuration.ConfigurationManager.AppSettings["SQLServerConnectionString"];

        // GET: api/Employee
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // POST: api/Employee/AddEmployee
        [ActionName("AddEmployee")]
        [HttpPost]
        public string Post([FromBody]DataInfo value)
        {
            using (SqlConnection con = new SqlConnection(connectionString))
            {
                SqlCommand cmd = new SqlCommand("spInsertEmployee", con);
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@Name", value.Name);
                cmd.Parameters.AddWithValue("@Email", value.Email);
                cmd.Parameters.AddWithValue("@ContactNumber", value.ContactNumber);
                con.Open();
                cmd.ExecuteNonQuery();
                con.Close();
            }
            return "Item successfully added.";
        }
    }
}
Note that I have already added SQLServerConnectionString Configuration in Web.Config File.
Now we need to add Web API CORS support so that the API can be accessed cross-origin, in other words from other applications.
To add CORS in Web API, navigate to Manage NuGet Packages.
Navigate to Browse and search for CORS. Select the Microsoft.AspNet.WebApi.Cors package and click on Install.
It will show you the changes which will update the solution.
Click on the OK button.
Next, accept the License.
Once you accept the License, it will add CORS and update the DLL references in the project; you can check the progress in the Output Window.
Now let's add the configuration for CORS in the WebApiConfig.cs file.
Add the below code to enable CORS:
EnableCorsAttribute cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
Note that to use EnableCorsAttribute you need to add a using directive for System.Web.Http.Cors.
Also, in the EnableCorsAttribute constructor you can pass parameters for the allowed origins, headers, and methods. For demo purposes I have passed "*", which means allow everything; in a real scenario you should specify the parameter values as per your requirement.
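For example, a locked-down configuration might look like this sketch (the origin URL is illustrative, not part of the original project):

```csharp
// Hypothetical: allow only one origin and two HTTP methods instead of "*".
EnableCorsAttribute cors = new EnableCorsAttribute(
    origins: "https://client.example.com",
    headers: "*",
    methods: "GET,POST");
config.EnableCors(cors);
```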
By applying this code, our Web API is now ready to be accessed from other origins. Hit F5 to test it.
You can check the Get Action response using the URL:
(You need to change the application port number in the URL.)
Now, to check the POST request, let's use Postman.
Make sure you pass the input parameter values and the Action name in Postman as shown in the below image.
Sample JSON Parameter Format
{ Name: "Nilesh Rathod", Email: "njrathod@gmail.com" , ContactNumber: "+919825899769" }
Once you enter all the information in Postman, click on the Send button; it will post the required information to the Web API and create a record using the AddEmployee action in SQL Server. You can also see the Action response in the Body section, as shown in the image.
Now let's host our Web API somewhere other than localhost so that we can share its URL with other applications.
Note that this code is not intended for production use, as it does not implement other necessary features.
For demonstration, I have hosted this Web API project in an Azure environment.
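Postman is one way to exercise the API; since CORS is enabled, a page served from a different origin could also issue the same POST from browser JavaScript. A minimal sketch (the base URL is illustrative, and the route assumes the default api/{controller}/{action} pattern):

```javascript
// Hypothetical client-side call from a page on a different origin.
const payload = {
  Name: "Nilesh Rathod",
  Email: "njrathod@gmail.com",
  ContactNumber: "+919825899769"
};

// baseUrl would be e.g. the Azure host the API is published to.
async function addEmployee(baseUrl) {
  const response = await fetch(baseUrl + "/api/Employee/AddEmployee", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload)
  });
  return response.text();
}
```

Without the EnableCors configuration above, the browser would block this call; with it, the preflight request succeeds and the response body is readable.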
I have taken an Azure trial for demonstration. Let's check our Azure Web App.
Next, download the Publishing Profile for the Web App (which we need to use in the next step to publish the application).
Next, move to the Visual Studio Web API project and click on Publish.
Pick Azure as the Publish Target and import the profile which we downloaded in the previous step.
Once you select the profile, it will immediately start publishing. You can see its progress in the Output Window.
The next step is to check the Get request.
We are able to receive a response. Now let's try a POST request using Postman.
It confirms that the Web API is working as expected in the Production Environment. | https://njrathod.wordpress.com/2019/01/ | CC-MAIN-2022-40 | en | refinedweb |
For this walkthrough, I will be demonstrating how to incorporate Python and a variety of AWS services into a simple, serverless workflow. The idea is to create an SQS queue using Python, then use an HTTP API to trigger a Lambda function that sends messages to the queue.
Below are the steps I will be taking to create this workflow:
Create a Standard SQS Queue using Python.
Create a Lambda function in the console with a Python 3.7 or higher runtime
Modify the Lambda to send a message to the SQS queue. The message should contain the current time. Use the built-in test function for testing.
Create an API Gateway HTTP API type trigger.
Test the trigger to verify the message was sent.
Prerequisites:
AWS IAM user with administrative access
an IDE (integrated development environment) of choice configured with the AWS CLI, and with Python and boto3 installed — For this exercise, I am using AWS Cloud9, which already utilizes the AWS CLI and comes with Python pre-installed. I had already installed boto3 prior to this exercise.
- Create a Standard SQS Queue using Python
Using your IDE, create your SQS queue — refer to the boto3 documentation for additional clarification.
After entering the script for creating the SQS queue, click "Run" to generate the URL of your queue. Verify the queue is created by pasting the URL in your browser.
import boto3

# Get the service resource
sqs = boto3.resource('sqs')

# Create the queue. This returns an SQS.Queue instance
queue = sqs.create_queue(QueueName='queue-name')

# You can now access identifiers and attributes
print(queue.url)
To verify the queue was created, either copy the URL created in your IDE or navigate to SQS from the AWS Management Console and select "Queues" from the dashboard.
SQS → Queues → select the newly-created queue and copy the ARN for later — it will be needed when editing the permissions policy of the Lambda execution role.
2. Create a Lambda function in the console with a Python 3.7 or higher runtime
From the AWS Management Console, navigate to Lambda and click “Create function”. Provide your function a name and select the appropriate runtime — I selected Python3.7.
Select “Author from scratch” and provide a name for the function, select the runtime (Python3.7 or higher), and select the option to make a new execution role. Select “Create function”.
Once the function has been created, the new execution role must be modified to allow access to SQS so that when triggered. From the AWS Management Console, navigate to IAM, select “Roles” from the dashboard, and search for the newly created execution role by typing in the function name — it should populate automatically.
Search for the execution role that was created with the function and select it.
Click on the policy name to edit the policy directly.
The execution policy that was created with the function does not include permissions that would allow Lambda access to SQS, so we need to add that section, with the SQS queue ARN, to the existing policy.
Click on the “Edit policy” and click on the JSON tab to view and edit the policy. Add the SQS permissions — to send messages to SQS — and include the ARN of the SQS queue in the Resource element.
Click "Review policy" and "Save changes" before navigating back to Lambda.
3. Modify the Lambda to send a message to the SQS queue. The message should contain the current time. Use the built-in test function for testing.
While in Lambda, open the function you recently created to view the Code source. Using the gist below, modify the existing Lambda function to send a message to SQS that contains the current time*.
Note: The code to generate the current time produces a result in UTC, or Coordinated Universal Time. Working from the state of Virginia, I am composing this walkthrough during the month of May, which falls during Eastern Daylight Time. For me, UTC is currently 4 hours ahead, or noted as UTC -4. When Daylight Savings Time ends in my timezone, I will be on Eastern Standard Time, or UTC -5.
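The offset arithmetic described in the note can be sketched in Python. The fixed UTC-4 offset and the helper name below are illustrative; zoneinfo.ZoneInfo("America/New_York") would handle the Daylight Savings switch automatically:

```python
from datetime import datetime, timedelta, timezone

# Sketch: datetime.now() on Lambda effectively yields UTC, so an explicit
# conversion is needed to display Eastern Daylight Time (UTC-4).
def utc_to_edt(utc_dt):
    return utc_dt.astimezone(timezone(timedelta(hours=-4)))

now_utc = datetime.now(timezone.utc)
print(utc_to_edt(now_utc).strftime("%H:%M:%S %p"))
```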
import json
import boto3
from datetime import datetime

def lambda_handler(event, context):
    now = datetime.now()
    current_time = now.strftime("%H:%M:%S %p")
    sqs = boto3.client('sqs')
    sqs.send_message(
        QueueUrl="",
        MessageBody=current_time
    )
    return {
        'statusCode': 200,
        'body': json.dumps(current_time)
    }
Be sure to include the URL of the SQS queue created in the first step of this walkthrough.
Be sure to click “Deploy” to save any changes made in order to apply them to the test. Test the function code by configuring a “test event”.
After clicking “Test”, there will be a prompt to “Configure Test Event”.
I named and configured a test event using the template called API Gateway — AWS Proxy as I will be using an HTTP API trigger for this function.
**(The template for SQS only tests the Lambda receiving a message from a queue rather than sending a message. I can easily check my queue from the SQS dashboard to ensure a message was sent from the Lambda function after testing it.)
After naming and configuring the test event, click “Save” and proceed to test the function.
The “Status” and “Response” portions of the execution results should show that the test was successful and that the current time was returned from the function.
4. Create an API Gateway HTTP API type trigger.
While still on the Lambda function page, click "Add trigger" under the Function overview heading to create and configure the API Gateway trigger.
Add the trigger to the function.
5. Test the trigger to verify the message was sent.
On the Lambda function page, select the “Configuration” tab and then “Triggers” from the side dashboard. Copy and paste the API endpoint URL into the browser to invoke the function.
Once the API endpoint URL has been accessed, two things should happen: the user should be able to view the current time in the browser, and the message should appear in the SQS queue created at the beginning of the walkthrough.
The endpoint URL should return the UTC time.
The queue gains a message with each function invocation. (Yes, I tested this 26 times…)
In closing, this exercise was to help demonstrate how a simple serverless workflow can be configured with AWS services and Python. | https://plainenglish.io/blog/triggering-lambda-to-send-messages-to-sqs | CC-MAIN-2022-40 | en | refinedweb |
In this article, I have explained how you can create your first Java program using the Java "Hello World" program example, simply by writing your program code in Notepad and then using the command prompt to compile it and show the output.
Before we begin, you need to follow these steps, if you haven't done it yet.
- Install the JDK
- Set path of the JDK/bin directory
To set the path of JDK, you need to follow the following steps:
- Go to My Computer properties -> Advanced tab -> Environment Variables -> New (under User variables) -> write Path in the variable name -> write the path of the bin folder in the variable value -> OK -> OK -> OK
After completing the above procedure.
Creating First Hello World program in Java
In this example, we'll use Notepad. It is a simple editor included with the Windows Operating System. You can use a different text editor like Notepad++.
Your first application, HelloWorld, will simply display the greeting "Hello World". To create this program, you will need to follow these steps:
- Open Notepad from the Start menu by selecting Programs -> Accessories -> Notepad.
- Create a source file: A source file contains code, written in the Java programming language, that you and other programmers can understand.
The easiest way to write a simple program is with a text editor.
So, using the text editor of your choice, create a text file with the following text, and be sure to name the text file HelloWorld.java.
Java programs are case-sensitive, so if you type the code in yourself, pay particular attention to the capitalization.
public class HelloWorld {
    public static void main(String args[]) {
        System.out.println("Hello World");
    }
}
- Save the file as HelloWorld.java; make sure to select the file type as All Files while saving the file in our working folder C:\workspace
- Open the command prompt. Go to Directory C:\workspace. Compile the code using the command,
javac HelloWorld.java
Compiling a Java program means taking the programmer-readable text in your program file (also called source code) and converting it to bytecodes, which are platform-independent instructions for the JVM.
The Java programming language compiler (javac) takes your source file and translates its text into instructions that the Java virtual machine can understand. The instructions contained within this file are known as bytecodes.
- Now type 'java HelloWorld' on the command prompt to run your program
You will be able to see "Hello World" printed on your command prompt.
Once your program successfully compiles into Java bytecodes, you can interpret and run applications on any Java VM, or interpret and run applets in any Web browser with a built-in JVM. Interpreting and running a Java program means invoking the Java VM byte code interpreter, which converts the Java byte codes to platform-dependent machine codes so your computer can understand and run the program.
The Java application launcher tool (java) uses the Java virtual machine to run your application.
And then when you try to run the byte code(.class file), the following steps are performed at runtime:
- Class loader loads the Java class. It is a subsystem of the JVM (Java Virtual Machine).
- Byte Code verifier checks the code fragments for illegal code that can violate access rights to objects.
- The interpreter reads the byte code stream and then executes the instructions, step by step.
Understanding the HelloWorld.java code (With few important points)
- Any java source file can have multiple classes but they can have only one public class.
- The java source file name should be same as public class name. That’s why the file name we saved our program was HelloWorld.java
- class keyword is used to declare a class in java
- public keyword is an access modifier which represents visibility, it means this function is visible to all.
- static is a keyword, used to make a static method. The advantage of static method is that there is no need to create object to invoke the static method. The main() method here is called by JVM, without creating any object for class.
- void is the return type of the method, it means this method will not return anything.
- Main: main() method is the most important method in a Java program. It represents startup of the program.
- String[] args is used for command line arguments.
- System.out.println() is used to print statements on the console.
- When we compile the code, it generates bytecode and saves it as Class_Name.class extension. If you look at the directory where we compiled the java file, you will notice a new file created HelloWorld.class
- When we execute the class file, we don’t need to provide a full file name. We need to use only the public class name.
- When we run the program using the java command, it loads the class into the JVM, looks for the main function in the class, and runs it. The main function signature should be the same as specified in the program, else it won't run and will throw an exception such as Exception in thread "main" java.lang.NoSuchMethodError: main.
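The String[] args parameter mentioned above can be exercised with a small variant of the program (the class name HelloArgs is hypothetical), e.g. `java HelloArgs Alice` prints "Hello Alice":

```java
public class HelloArgs {

    // Builds the greeting from the command-line arguments, if any.
    static String greeting(String[] args) {
        if (args.length > 0) {
            return "Hello " + args[0];
        }
        return "Hello World";
    }

    public static void main(String[] args) {
        System.out.println(greeting(args));
    }
}
```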
Creating Hello World Java program in Eclipse
In the above example, you were using a command prompt to compile Java program, but you can also use Java IDE like Eclipse, which helps in making development easier.
Here are the steps to follow for creating your first Java program in Eclipse
- Download Eclipse IDE
- Install Eclipse in your Machine
- Now open Eclipse, and then create a new Java project by navigating to "File"-> "New" -> "Project" -> Select "Java Project"
- Now in the next screen, give the project name "HelloWorld" and click "Finish". (If you see Perspective window click "Open")
- Now, you can see "src" in the left-pane, right-click on it, select "New" -> Select "Package" and name your package "helloworld" and click "Finish".
- Create a class inside package, by right-clicking on "helloworld" package, we just created in above step, then right-click on it, select "New" -> "Class", name it "FirstHelloWorldProgram" and click "Finish".
- Now, you can use the below code in the class
package helloworld;

public class FirstHelloWorldProgram {
    public static void main(String args[]) {
        System.out.println("Hello World");
    }
}
- Once you have copy-pasted, above code, save your file and then click on "Green" button, to run your program and show output in the console.
That's it, I have already explained about the code sample above.
You may also like:
How to open console in Eclipse?
Various Java programming examples with output
Pyramid Triangle pattern programs in Java with explanation
Java program to reverse a string (Different ways explained)
Leap year program in Java (multiple ways)
Fibonacci series program in Java (With and without recursion) | https://qawithexperts.com/article/java/hello-world-program-in-java-your-first-java-program/196 | CC-MAIN-2022-40 | en | refinedweb |
Admittedly, I present in this post a few small improvements to templates and to C++20 in general. Although these improvements may not seem so impressive to you, they make C++20 more consistent and, therefore, less error-prone when you write generic programs.
Today's post is about conditionally explicit constructors and new non-type template parameters.
Sometimes, you want to have a class which should have constructors accepting various different types. For example, you have a class VariantWrapper which holds a std::variant accepting various different types.
class VariantWrapper {
std::variant<bool, char, int, double, float, std::string> myVariant;
};
To initialize the myVariant with bool, char, int, double, float, or std::string, the class VariantWrapper needs constructors for each listed type. Laziness is a virtue - at least for programmers - therefore, you decide to make the constructor generic.
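A sketch of that generic constructor (my own completion of the class above, with a getter added only so the stored alternative can be inspected; the rest of the post explains why such a catch-all constructor is problematic):

```cpp
#include <string>
#include <utility>
#include <variant>

// One generic constructor instead of six separate overloads.
class VariantWrapper {
    std::variant<bool, char, int, double, float, std::string> myVariant;
public:
    template <typename T>
    VariantWrapper(T&& t) : myVariant(std::forward<T>(t)) {}

    const auto& get() const { return myVariant; }
};
```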
The class Implicit exemplifies a generic constructor.
// explicitBool.cpp
#include <iostream>
#include <string>
#include <type_traits>
struct Implicit {
template <typename T> // (1)
Implicit(T t) {
std::cout << t << std::endl;
}
};
struct Explicit {
template <typename T>
explicit Explicit(T t) { // (2)
std::cout << t << std::endl;
}
};
int main() {
std::cout << std::endl;
Implicit imp1 = "implicit";
Implicit imp2("explicit");
Implicit imp3 = 1998;
Implicit imp4(1998);
std::cout << std::endl;
// Explicit exp1 = "implicit"; // (3)
Explicit exp2{"explicit"}; // (4)
// Explicit exp3 = 2011; // (3)
Explicit exp4{2011}; // (4)
std::cout << std::endl;
}
Now, you have an issue. A generic constructor (1) is a catch-all constructor because you can invoke it with any type. The constructor is way too greedy. By putting an explicit in front of the constructor (2), the constructor becomes explicit. This means that implicit conversions (3) are not valid anymore. Only the explicit calls (4) are valid.
Thanks to Clang 10, here is the output of the program:
This is not the end of the story. Maybe you have a type MyBool that should only support the implicit conversion from bool, but no other implicit conversion. In this case, explicit can be used conditionally.
// myBool.cpp
#include <iostream>
#include <type_traits>
#include <typeinfo>
struct MyBool {
template <typename T>
explicit(!std::is_same<T, bool>::value) MyBool(T t) { // (1)
std::cout << typeid(t).name() << std::endl;
}
};
void needBool(MyBool b){ } // (2)
int main() {
MyBool myBool1(true);
MyBool myBool2 = false; // (3)
needBool(myBool1);
needBool(true); // (4)
// needBool(5);
// needBool("true");
}
The explicit(!std::is_same<T, bool>::value) expression guarantees that MyBool can only be implicitly created from a bool value. The function std::is_same is a compile-time predicate from the type_traits library. Compile-time predicate means that std::is_same is evaluated at compile time and returns a boolean. Consequently, the implicit conversion from bool in (3) and (4) is possible, but not the commented-out conversions from int and a C-string.
You are right when you argue that a conditionally explicit constructor would also be possible with SFINAE. But honestly, I don't like the corresponding SFINAE-based constructor, because it would take me a few lines to explain it. Additionally, I only got it right after the third try.
template <typename T, std::enable_if_t<std::is_same_v<std::decay_t<T>, bool>, bool> = true>
MyBool(T&& t) {
std::cout << typeid(t).name() << std::endl;
}
I think I should add a few explaining words. std::enable_if is a convenient way to use SFINAE. SFINAE stands for Substitution Failure Is Not An Error and applies during overload resolution of a function template. It means that when substituting the template parameter fails, the specialisation is discarded from the overload set but causes no compiler error. This is exactly what happens in this concrete case. The specialisation is discarded if std::is_same_v<std::decay_t<T>, bool> evaluates to false. std::decay<T> applies conversions to T such as removing const, volatile, or a reference from T. std::decay_t<T> is a convenient syntax for std::decay<T>::type. The same holds for std::is_same_v<T, bool>, which is short for std::is_same<T, bool>::value.
As my German reader pre alpha pointed out: the constructor using SFINAE is way too greedy. It disables all non-bool constructors.
Besides my longish explanation, there is an additional argument that speaks against SFINAE and for a conditionally explicit constructor: performance. Simon Brand pointed out in his post "C++20's Conditionally Explicit Constructors" that explicit(bool) made the template instantiation with Visual Studio 2019 about 15% faster compared to SFINAE.
With C++20, additional non-type template parameters are supported.
With C++20, floating-points and classes with constexpr constructors are supported as non-types.
C++ supports non-types as template parameters. Essentially, non-types could be integral values and enumerators, pointers or references, or std::nullptr_t.
When I ask the students in my class if they have ever used a non-type as a template parameter, they say: No!
std::array<int, 5> myVec;
5 is a non-type used as a template argument. We are just used to it. Since the first C++ standard, C++98, there has been a discussion in the C++ community about supporting floating points as template parameters. Now, with C++20, we have it:
Such a class with a constexpr constructor can, therefore, be used as a template argument (2). The same holds for the function template getDouble (3), which accepts only doubles. I want to emphasize explicitly that each call of the function template getDouble (4) with a new argument triggers the instantiation of a new function getDouble. This means that two instantiations, for the doubles 5.5 and 6.5, are created.
If Clang already supported this feature, I could show you with C++ Insights that each instantiation for 5.5 and 6.5 creates a fully specialized function template. At least, thanks to GCC, I can show you the relevant assembler instructions with the Compiler Explorer.
The screenshot shows that the compiler created a function for each template argument.
Like templates, lambdas are also improved in various ways in C++20. My next post is about these improvements.
SSL_new(3) OpenSSL SSL_new(3)
NAME
SSL_dup, SSL_new, SSL_up_ref - create an SSL structure for a connection
LIBRARY
libcrypto, -lcrypto
SYNOPSIS
#include <openssl/ssl.h>

SSL *SSL_dup(SSL *s);
SSL *SSL_new(SSL_CTX *ctx);
int SSL_up_ref(SSL *s);
DESCRIPTION
SSL_new() creates a new SSL structure which is needed to hold the data for a TLS/SSL connection. The new structure inherits the settings of the underlying context ctx: connection method, options, verification settings, and timeout settings.

An SSL structure is reference counted. Creating an SSL structure for the first time increments the reference count. Freeing it (using SSL_free) decrements it. When the reference count drops to zero, any memory or resources allocated to the SSL structure are freed.

SSL_up_ref() increments the reference count for an existing SSL structure.

The function SSL_dup() creates and returns a new SSL structure from the same SSL_CTX that was used to create s. It additionally duplicates a subset of the settings in s into the new SSL object. For SSL_dup() to work, the connection MUST be in its initial state and MUST NOT have yet started the SSL handshake. For connections that are not in their initial state, SSL_dup() just increments an internal reference count and returns the same handle. It may be possible to use SSL_clear(3) to recycle an SSL handle that is not in its initial state for re-use, but this is best avoided. Instead, save and restore the session, if desired, and construct a fresh handle for each connection.
The subset of settings in s that are duplicated are:

- any session data if configured (including the session_id_context)
- any tmp_dh settings set via SSL_set_tmp_dh(3), SSL_set_tmp_dh_callback(3), or SSL_set_dh_auto(3)
- any configured certificates, private keys or certificate chains
- any configured signature algorithms, or client signature algorithms
- any DANE settings
- any Options set via SSL_set_options(3)
- any Mode set via SSL_set_mode(3)
- any minimum or maximum protocol settings set via SSL_set_min_proto_version(3) or SSL_set_max_proto_version(3) (Note: Only from OpenSSL 1.1.1h and above)
- any Verify mode, callback or depth set via SSL_set_verify(3) or SSL_set_verify_depth(3) or any configured X509 verification parameters
- any msg callback or info callback set via SSL_set_msg_callback(3) or SSL_set_info_callback(3)
- any default password callback set via SSL_set_default_passwd_cb(3)
- any session id generation callback set via SSL_set_generate_session_id(3)
- any configured Cipher List
- initial accept (server) or connect (client) state
- the max cert list value set via SSL_set_max_cert_list(3)
- the read_ahead value set via SSL_set_read_ahead(3)
- application specific data set via SSL_set_ex_data(3)
- any CA list or client CA list set via SSL_set0_CA_list(3), SSL_set0_client_CA_list() or similar functions
- any security level settings or callbacks
- any configured serverinfo data
- any configured PSK identity hint
- any configured custom extensions
- any client certificate types configured via SSL_set1_client_certificate_types
RETURN VALUES
The following return values can occur:

NULL
    The creation of a new SSL structure failed. Check the error stack to find out the reason.

Pointer to an SSL structure
    The return value points to an allocated SSL structure.

SSL_up_ref() returns 1 for success and 0 for failure.
SEE ALSO
SSL_free(3), SSL_clear(3), SSL_CTX_set_options(3), SSL_get_SSL_CTX(3), ssl_new(3)
Apache Spark is a cluster computing framework, currently one of the most actively developed in the open-source Big Data arena. It aims at being a general engine for large-scale data processing, supporting a number of platforms for cluster management (e.g. YARN or Mesos as well as Spark native) and a variety of distributed storage systems (e.g. HDFS or Amazon S3).
More interestingly, at least from a developer’s perspective, it supports a number of programming languages. Since the latest version 1.4 (June 2015), Spark supports R and Python 3 (to complement the previously available support for Java, Scala and Python 2).
This article is a brief introduction on how to use Spark on Python 3.
Quick Start
After downloading a binary version of Spark 1.4, we can extract it in a custom folder, e.g. ~/apps/spark, which we’ll call $SPARK_HOME:
export SPARK_HOME=~/apps/spark
This folder contains several Spark commands (in $SPARK_HOME/bin) as well as examples of code (in $SPARK_HOME/examples/src/main/YOUR-LANGUAGE).
We can run Spark with Python in two ways: using the interactive shell, or submitting a standalone application.
Let’s start with the interactive shell, by running this command:
$SPARK_HOME/bin/pyspark
You will get several messages on the screen while the shell is loading, and at the end you should see the Spark banner:
Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /__ / .__/\_,_/_/ /_/\_\ version 1.4.0 /_/ Using Python version 2.7.5 (default, Mar 9 2014 22:15:05) SparkContext available as sc, HiveContext available as sqlContext. >>>
The >>> prompt is the usual Python prompt, as effectively we are using a Python interactive shell.
SparkContext and HiveContext are Spark concepts that we’ll briefly explain below. The interactive shell is telling us that these two contexts have been initialised and are available as sc and sqlContext in this session. The shell is also telling us that we’re using Python 2.7!?
But I want to use Python 3!
Long story short, Python 2 is still the default option in Spark, which you can see if you open the pyspark script with an editor (it’s a shell script). You can simply override this behaviour by setting an environment variable:
export PYSPARK_PYTHON=python3
Once you re-run the interactive shell, the Spark banner should be updated to reflect your version of Python 3.
Some Basic Spark Concepts
Two Spark concepts have already been mentioned above:
- SparkContext: it’s an object that represents a connection to a computing cluster – an application will access Spark through this context
- HiveContext: it’s an instance of the Spark SQL engine, that integrates data stored in Hive (not used in this article)
Another core concept in Spark is the Resilient Distributed Dataset (RDD), an immutable distributed collection of objects. Each RDD is split into partitions, which might be processed on different nodes of a cluster.
RDDs can be loaded from external sources, e.g. from text files, or can be transformed into new RDDs.
There are two types of operation that can be performed over a RDD:
- a transformation will leave the original RDD intact and create a new one (RDD are immutable); an example of transformation is the use of a filter
- an action will compute a result based on the RDD, e.g. counting the number of lines in a RDD
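The transformation/action split is essentially lazy versus eager evaluation. A plain-Python analogy (no Spark required — this is not RDD code, just an illustration of laziness) makes the distinction visible:

```python
# Like an RDD, a generator expression only *describes* work:
lines = ["spark is fast", "python is fun", "spark supports python"]

# "Transformation": builds a recipe, nothing is evaluated yet.
matching = (line for line in lines if "spark" in line)

# "Action": consuming the recipe forces the computation.
count = sum(1 for _ in matching)
print(count)  # 2 lines contain "spark"
```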
Running an Example Application
Running the interactive shell can be useful for interactive analysis, but sometimes you need to launch a batch job, i.e. you need to submit a stand-alone application.
Consider the following code and save it as line_count.py:
from pyspark import SparkContext
import sys

if __name__ == '__main__':
    fname = sys.argv[1]
    search1 = sys.argv[2].lower()
    search2 = sys.argv[3].lower()

    sc = SparkContext("local", appName="Line Count")
    data = sc.textFile(fname)

    # Transformations
    filtered_data1 = data.filter(lambda s: search1 in s.lower())
    filtered_data2 = data.filter(lambda s: search2 in s.lower())

    # Actions
    num1 = filtered_data1.count()
    num2 = filtered_data2.count()

    print('Lines with "%s": %i, lines with "%s": %i' % (search1, num1, search2, num2))
The application will take three parameters from the command line (via sys.argv), namely a file name and two search terms.
The sc variable contains the SparkContext, initialised as local context (we’re not using a cluster).
The data variable is the RDD, loaded from an external resource (the aforementioned file).
What follows the data import is a series of transformations and actions. The basic idea is to simply count how many lines contain the given search terms.
For this example, I'm using a dataset of tweets downloaded for a previous article, stored in data.json, one tweet per line.
I can now launch (submit) the application with:
$SPARK_HOME/bin/spark-submit line_count.py data.json \#ita \#eng
Within a lot of debugging information, the application will print out the final count:
Lines with "#ita": 1339, lines with "#eng": 2278
Notice the use of the backslash from the command-line, because we need to escape the # symbol: effectively the search terms are #ita and #eng.
Also notice that we don’t have information about repeated occurrences of the search terms, nor about partial matches (e.g. “#eng” will also match “#england”, etc.): this example just showcases the use of transformations and actions.
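If exact matches mattered, one fix (illustrative only, not part of the original example) is to anchor the hashtag with a word boundary so "#eng" no longer counts "#england":

```python
import re

def count_exact(lines, tag):
    # \b after the tag rejects matches followed by more word characters
    pattern = re.compile(re.escape(tag) + r"\b", re.IGNORECASE)
    return sum(1 for line in lines if pattern.search(line))

tweets = ["great game #eng", "visiting #england today", "forza #ita"]
print(count_exact(tweets, "#eng"))  # 1, not 2
```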
Summary
Spark now supports Python 3 :) | https://marcobonzanini.com/2015/07/ | CC-MAIN-2022-40 | en | refinedweb |
How to Monitor Your Scrapy Spiders?
For anyone who has been in web scraping for a while, you know that if there is anything certain in web scraping that just because your scrapers work today doesn’t mean they will work tomorrow.
From day to day, your scrapers can break or their performance degrade for a whole host of reasons:
- The HTML structure of the target site can change.
- The target site can change their anti-bot countermeasures.
- Your proxy network can degrade or go down.
- Or something can go wrong on your server.
Because of this it is very important for you to have a reliable and effective way for you to monitor your scrapers in production, conduct health checks and get alerts when the performance of your spider drops.
In this guide, we will go through the 4 popular options to monitor your scrapers:
#1: Scrapy Logs & Stats
Out of the box, Scrapy boasts by far the best logging and stats functionality of any web scraping library or framework out there.
2021-12-17 17:02:25 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1330,
'downloader/request_count': 5,
'downloader/request_method_count/GET': 5,
'downloader/response_bytes': 11551,
'downloader/response_count': 5,
'downloader/response_status_count/200': 5,
'elapsed_time_seconds': 2.600152,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2021, 12, 17, 16, 2, 22, 118835),
'httpcompression/response_bytes': 55120,
'httpcompression/response_count': 5,
'item_scraped_count': 50,
'log_count/INFO': 10,
'response_received_count': 5,
'scheduler/dequeued': 5,
'scheduler/dequeued/memory': 5,
'scheduler/enqueued': 5,
'scheduler/enqueued/memory': 5,
'start_time': datetime.datetime(2021, 12, 17, 16, 2, 19, 518683)}
2021-12-17 17:02:25 [scrapy.core.engine] INFO: Spider closed (finished)
Whereas most other scraping libraries and frameworks focus solely on making requests and parsing the responses, Scrapy has a whole logging and stats layer under the hood that tracks your spiders in real-time. Making it really easy to test and debug your spiders when developing them.
You can easily customise the logging levels, and add more stats to the default Scrapy stats in spiders with a couple lines of code.
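For example, a few settings.py lines are enough to tune the logging (the values here are illustrative, not from this guide):

```python
# settings.py — example logging configuration
LOG_LEVEL = "INFO"       # drop DEBUG noise; use "WARNING" for even quieter runs
LOG_FILE = "scrapy.log"  # write the log to a file instead of stdout
LOG_FORMAT = "%(asctime)s [%(name)s] %(levelname)s: %(message)s"
```

Inside a spider you can likewise push extra numbers into the job stats with the stats collector, e.g. self.crawler.stats.inc_value("custom/parsed_pages"), and they will appear in the final stats dump.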
The major problem relying solely on using this approach to monitoring your scrapers is that it quickly becomes impractical and cumbersome in production. Especially when you have multiple spiders running every day across multiple servers.
The check the health of your scraping jobs you will need to store these logs, and either periodically SSH into the server to view them or setup a custom log exporting system so you can view them in a central user interface. More on this later.
Summary
Using Scrapy's built-in logging and stats functionality is great during development, but when running scrapers in production you should look to use a better monitoring setup.
Pros
- Setup right out of the box, and very light weight.
- Easy to customise so that it logs more stats.
- Great for local testing and the development phase.
Cons
- No dashboard functionality, so you need to setup your own system to export your logs and display them.
- No historical comparison capabilities within jobs.
- No inbuilt health check functionality.
- Cumbersome to rely solely on when in production.
#2: ScrapeOps Extension
ScrapeOps is a monitoring and alerting tool dedicated to web scraping. With a simple 30 second install ScrapeOps gives you all the monitoring, alerting, scheduling and data validation functionality you need for web scraping straight out of the box.
Live demo here: ScrapeOps Demo
Summary
ScrapeOps is a powerful web scraping monitoring tool, that gives you all the monitoring, alerting, scheduling and data validation functionality you need for web scraping straight out of the box.
Pros
- Free unlimited community plan.
- Simple 30 second install, gives you advanced job monitoring, health checks and alerts straight out of the box.
- Job scheduling and management functionality so you can manage and monitor your scrapers from one dashboard.
- Customer support team, available to help you get setup and add new features.
Cons
- Currently, less customisable than Spidermon or other log management tools. (Will be soon!)
#3: Spidermon Extension
Spidermon is an open-source monitoring extension for Scrapy. When integrated it allows you to set up custom monitors that can run at the start, end or periodically during your scrape, and alert you via your chosen communication method.
This is a very powerful tool as it allows you to create custom monitors for each of your Spiders that can validate each Item scraped with your own unit tests.
For example, you can make sure a required field has been scraped, that a url field actually contains a valid url, or have it double check that scraped price is actually a number and doesn’t include any currency signs or special characters.
from schematics.models import Model
from schematics.types import URLType, StringType, ListType, DecimalType
class ProductItem(Model):
url = URLType(required=True)
name = StringType(required=True)
price = DecimalType(required=True)
features = ListType(StringType)
image_url = URLType()
However, the two major drawbacks with Spidermon is the fact that:
#1 - No Dashboard or User Interface
Spidermon doesn’t have any dashboard or user interface where you can see the output of your monitors.
The output of your Spidermon monitors are just added to your log files and Scrapy stats, so you will either need to view each spider log to check your scrapers performance or setup a custom system to extract this log data and display it in your own custom dashboard.
#2 - Upfront Setup Time
Unlike, ScrapeOps with Spidermon you will have to spend a bit of upfront time to create the monitors you need for each spider and integrate them into your Scrapy projects.
Spidermon does include some out-of-the-box monitors, however, you will still need to activate them and define the failure thresholds for every spider.
Features
Once set up, Spidermon can:
- 🕵️♂️ Monitor - Automatically, monitor all your scrapers with the defined monitors.
- 💯 Data Quality - Validate the field coverage each of the Items you've defined unit tests for.
- 📉 Periodic/Finished Health Checks - At periodic intervals or at job finish, you can configure Spidermon to check the health of your job versus pre-set thresholds.
- ⏰ Alerts - Alert you via email, Slack, etc. if any of your jobs are unhealthy.
Job stats tracked out of the box include:
- ✅ Pages Scraped
- ✅ Items Scraped
- ✅ Item Field Coverage
- ✅ Runtimes
- ✅ Errors & Warnings
You can also track more stats if you customise your scrapers to log them and have spidermon monitor them.
To integrate Spidermon, you will also need to build your custom monitors and add each of them to your
settings.py file. Here is a simple example of how to setup a monitor that will check the number of items scraped at the end of the job versus a fixed threshold.
First we create a custom monitor in a monitors.py file within our Scrapy project:
# my_project/monitors.py
from spidermon import Monitor, MonitorSuite, monitors
@monitors.name('Item count')
class ItemCountMonitor(Monitor):
@monitors.name('Minimum number of items')
def test_minimum_number_of_items(self):
item_extracted = getattr(
self.data.stats, 'item_scraped_count', 0)
minimum_threshold = 10
msg = 'Extracted less than {} items'.format(
minimum_threshold)
self.assertTrue(
item_extracted >= minimum_threshold, msg=msg
)
class SpiderCloseMonitorSuite(MonitorSuite):
monitors = [
ItemCountMonitor,
]
Then we add this monitor to our
settings.py file so that Spidermon will run it at the end of every job.
## settings.py
## Enable Spidermon Monitor
SPIDERMON_SPIDER_CLOSE_MONITORS = (
'my_project.monitors.SpiderCloseMonitorSuite',
)
This monitor will then run at the end of every job and output the result in your logs file. Example of monitor failing its tests:
INFO: [Spidermon] -------------------- MONITORS --------------------
INFO: [Spidermon] Item count/Minimum number of items... FAIL
INFO: [Spidermon] --------------------------------------------------
ERROR: [Spidermon]
====================================================================
FAIL: Item count/Minimum number of items
--------------------------------------------------------------------
Traceback (most recent call last):
File "/tutorial/monitors.py",
line 17, in test_minimum_number_of_items
item_extracted >= minimum_threshold, msg=msg
AssertionError: False is not true : Extracted less than 10 items
INFO: [Spidermon] 1 monitor in 0.001s
INFO: [Spidermon] FAILED (failures=1)
INFO: [Spidermon] ---------------- FINISHED ACTIONS ----------------
INFO: [Spidermon] --------------------------------------------------
INFO: [Spidermon] 0 actions in 0.000s
INFO: [Spidermon] OK
INFO: [Spidermon] ----------------- PASSED ACTIONS -----------------
INFO: [Spidermon] --------------------------------------------------
INFO: [Spidermon] 0 actions in 0.000s
INFO: [Spidermon] OK
INFO: [Spidermon] ----------------- FAILED ACTIONS -----------------
INFO: [Spidermon] --------------------------------------------------
INFO: [Spidermon] 0 actions in 0.000s
INFO: [Spidermon] OK
If you would like a more detailed explanation of how to use Spidermon, you can check out our Complete Spidermon Guide here or the offical documentation here.
Summary
Spidermon is a great option for anyone who is wants to take their scrapers to the next level and integrate a highly customisable monitoring solution.
Pros
- Open-source. Developed by core Scrapy developers.
- Stable and battle tested. Used internally by Zyte developers.
- Offers the ability to set custom item validation rules on every Item being scraped.
Cons
- No dashboard functionality, so you need to build your own system to extract the Spidermon stats to a dashboard.
- Need to do a decent bit of customisation in your Scrapy projects to get the spider monitors, alerts, etc. setup for each spider.
#4: Generic Logging & Monitoring Tools
Another option, is use any of the many active monitoring or logging platforms available, like DataDog, Logz.io, LogDNA, Sentry, etc.
These tools boast a huge range of functionality and features that allow you to graph, filter, aggregate your log data in whatever way best suits your requirements.
However, although that can be used for monitoring your spiders, you will have to do a lot of customisation work to setup the dashboards, monitors, alerts like you would get with ScrapeOps or Spidermon.
Plus, because with most of these tools you will need to ingest all your log data to power the graphs, monitors, etc., they will likely be a lot more expensive than using ScrapeOps or Spidermon, as they charge based on how much data they ingest and how long they retain it.
Summary
If you have a very unique web scraping stack with a complicated ETL pipeline, then customising one of the big logging tools to your requirements might be a good option.
Pros
- Lots of feature rich logging tools to choose from.
- Can integrate with your other logging stack if you have on.
- Highly customisable. If you can dream it, then you can likely build it.
Cons
- Will need to create a custom logging setup to properly track your jobs.
- No job management or scheduling capabilities.
- Can get expensive when doing large scale scraping.
More Scrapy Tutorials
That's it for all the ways you can monitor your Scrapy spiders. If you would like to learn more about Scrapy, then be sure to check out The Scrapy Playbook. | https://scrapeops.io/python-scrapy-playbook/how-to-monitor-scrapy-spiders/ | CC-MAIN-2022-40 | en | refinedweb |
Fauna is a flexible, serverless database delivered as an API that completely eliminates operational overhead such as capacity planning, data replication, and scheduled maintenance. Fauna allows you to model your data as documents, making it a natural fit for web applications written with React. Although you can access Fauna directly via a JavaScript driver, this requires a custom implementation for each client that connects to your database. By placing your Fauna database behind an API, you can enable any authorized client to connect, regardless of the programming language.
Netlify Functions allow you to build scalable, dynamic applications by deploying server-side code that works as API endpoints. In this tutorial, you build a serverless application using React, Netlify Functions, and Fauna. You learn the basics of storing and retrieving your data with Fauna. You create and deploy Netlify Functions to access your data in Fauna securely. Finally, you deploy your React application to Netlify.
Getting started with Fauna
Fauna is a distributed, strongly consistent OLTP NoSQL serverless database that is ACID-compliant and offers a multi-model interface. Fauna also supports document, relational, graph, and temporal data sets from a single query. First, we will start by creating a database in the Fauna console by selecting the Database tab and clicking on the Create Database button.
Next, you will need to create a Collection. For this, you will need to select a database, and under the Collections tab, click on Create Collection.
Fauna uses a particular structure when it comes to persisting data. The design consists of attributes like the example below.
{ "ref": Ref(Collection("avengers"), "299221087899615749"), "ts": 1623215668240000, "data": { "id": "db7bd11d-29c5-4877-b30d-dfc4dfb2b90e", "name": "Captain America", "power": "High Strength", "description": "Shield" } }
Notice that Fauna keeps a ref column, which is a unique identifier for a particular document. The ts attribute is a timestamp that records when the record was created, and the data attribute holds the document's data.
Why creating an index is importantWhy creating an index is important
Next, let's create two indexes for our avengers collection. This will be pretty valuable in the later part of the project. You can create an index from the Index tab or from the Shell tab, which provides a console to execute scripts. Fauna supports two querying techniques: FQL (Fauna Query Language) and GraphQL. FQL operates based on the schema of Fauna, which includes documents, collections, indexes, sets, and databases.
Let’s create the indexes from the shell.
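The shell command itself did not survive extraction. A typical FQL CreateIndex for the first index might look like this (the index name here is assumed for illustration — only avenger_by_name is named in the text):

```
CreateIndex({
  name: "avenger_by_id",             // name assumed for illustration
  source: Collection("avengers"),
  terms: [{ field: ["data", "id"] }] // index by the id field inside data
  // no "values" clause, so the index returns document refs
})
```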
This command will create an index on the collection by the id field inside the data object; the index will return a ref of the data object. Next, let's create another index for the name attribute and name it avenger_by_name.
Creating a server key
To create a server key, we need to navigate to the Security tab and click on the New Key button. This section will prompt you to create a key for a selected database and the user's role.
Getting started with Netlify functions and React
In this section, we’ll see how we create Netlify functions with React. We will be using create-react-app to create the react app.
npx create-react-app avengers-faunadb
After creating the react app, let’s install some dependencies, including Fauna and Netlify dependencies.
yarn add axios bootstrap node-sass uuid faunadb react-netlify-identity react-netlify-identity-widget
Now let's create our first Netlify function. To create functions, we first need to install the Netlify CLI globally.
npm install netlify-cli -g
Now that the CLI is installed, let's create a .env file in our project root with the following fields.
FAUNADB_SERVER_SECRET=<FaunaDB secret key>
REACT_APP_NETLIFY=<Netlify app url>
Next, let's see how we can start creating Netlify functions. For this, we will need to create a directory in our project root called functions and a file called netlify.toml, which will be responsible for maintaining the configuration for our Netlify project. This file defines our functions directory, build directory, and commands to execute.
[build]
command = "npm run build"
functions = "functions/"
publish = "build"

[[redirects]]
from = "/api/*"
to = "/.netlify/functions/:splat"
status = 200
force = true
We will do some additional configuration in the Netlify configuration file, as in the redirects section in this example. Notice that we are changing the default Netlify function path from /.netlify/** to /api/. This configuration is mainly to improve the look and feel of the API URL. So to trigger or call our function, we can use the path:
…instead of:
Next, let's create our Netlify function in the functions directory. But first, let's make a connection file for Fauna called util/connection.js, returning a Fauna connection object.
const faunadb = require('faunadb');
const q = faunadb.query

const clientQuery = new faunadb.Client({
    secret: process.env.FAUNADB_SERVER_SECRET,
});

module.exports = { clientQuery, q };
Next, let's create helper functions for building responses and parsing requests, since we will need to parse data on several occasions throughout the application. This file will be util/helper.js.
const responseObj = (statusCode, data) => {
    return {
        statusCode: statusCode,
        headers: {
            /* Required for CORS support to work */
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Headers": "*",
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        },
        body: JSON.stringify(data)
    };
};

const requestObj = (data) => {
    return JSON.parse(data);
}

module.exports = {
    responseObj: responseObj,
    requestObj: requestObj
}
Notice that the above helper functions handle the CORS issues and the stringifying and parsing of JSON data. Let's create our first function, getAvengers, which will return all the data.
const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
    try {
        let avengers = await clientQuery.query(
            q.Map(
                q.Paginate(q.Documents(q.Collection('avengers'))),
                q.Lambda(x => q.Get(x))
            )
        )
        return responseObj(200, avengers)
    } catch (error) {
        console.log(error)
        return responseObj(500, error);
    }
};
In the above code example, you can see that we have used several FQL functions such as Map, Paginate, and Lambda. Map is used to iterate through an array and takes two arguments: an array and a Lambda. We have passed Paginate as the first parameter, which will check for a reference and return a page of results (an array). Next, we used a Lambda statement, an anonymous function that is quite similar to an anonymous arrow function in ES6.
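The Map/Lambda pairing may be easier to see through a plain-JavaScript analogy (no Fauna involved — q.Get is stood in by a local function):

```javascript
// q.Map(q.Paginate(...), q.Lambda(x => q.Get(x))) is morally:
const page = ["ref1", "ref2", "ref3"];      // stand-in for the Paginate(...) page
const get = (ref) => ({ ref, data: {} });   // stand-in for q.Get
const result = page.map((x) => get(x));     // Map + Lambda

console.log(result.length); // 3
```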
Next, let's create our function AddAvenger, responsible for creating/inserting data into the collection.
const { requestObj, responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
    let data = requestObj(event.body);

    try {
        let avenger = await clientQuery.query(
            q.Create(
                q.Collection('avengers'),
                {
                    data: {
                        id: data.id,
                        name: data.name,
                        power: data.power,
                        description: data.description
                    }
                }
            )
        );
        return responseObj(200, avenger)
    } catch (error) {
        console.log(error)
        return responseObj(500, error);
    }
};
To save data in a particular collection, we have to pass the data to the data:{} object, as in the above code example. Then we need to pass it to the Create function and point it to the collection we want. So, let's run our code and see how it works through the netlify dev command.
Let's trigger the GetAvengers function in the browser via its URL.
The above function will fetch the avenger object by the name property, searching the avenger_by_name index. But first, let's invoke the GetAvengerByName function through a Netlify function. For that, let's create a function called SearchAvenger.
const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
    const {
        queryStringParameters: { name },
    } = event;

    try {
        let avenger = await clientQuery.query(
            q.Call(q.Function("GetAvengerByName"), [name])
        );
        return responseObj(200, avenger)
    } catch (error) {
        console.log(error)
        return responseObj(500, error);
    }
};
Notice that the Call function takes two arguments: the first parameter is the reference to the FQL function that we created, and the second is the data that we need to pass to the function.
Calling the Netlify function through React
Now that several functions are available, let's consume those functions through React. Since the functions are REST APIs, let's consume them via Axios, and for state management, let's use React's Context API. Let's start with the application context, called AppContext.js.
import { createContext, useReducer } from "react";
import AppReducer from "./AppReducer"

const initialState = {
    isEditing: false,
    avenger: { name: '', description: '', power: '' },
    avengers: [],
    user: null,
    isLoggedIn: false
};

export const AppContext = createContext(initialState);

export const AppContextProvider = ({ children }) => {
    const [state, dispatch] = useReducer(AppReducer, initialState);

    const login = (data) => { dispatch({ type: 'LOGIN', payload: data }) }
    const logout = (data) => { dispatch({ type: 'LOGOUT', payload: data }) }
    const getAvenger = (data) => { dispatch({ type: 'GET_AVENGER', payload: data }) }
    const updateAvenger = (data) => { dispatch({ type: 'UPDATE_AVENGER', payload: data }) }
    const clearAvenger = (data) => { dispatch({ type: 'CLEAR_AVENGER', payload: data }) }
    const selectAvenger = (data) => { dispatch({ type: 'SELECT_AVENGER', payload: data }) }
    const getAvengers = (data) => { dispatch({ type: 'GET_AVENGERS', payload: data }) }
    const createAvenger = (data) => { dispatch({ type: 'CREATE_AVENGER', payload: data }) }
    const deleteAvengers = (data) => { dispatch({ type: 'DELETE_AVENGER', payload: data }) }

    return <AppContext.Provider value={{
        ...state,
        login,
        logout,
        selectAvenger,
        updateAvenger,
        clearAvenger,
        getAvenger,
        getAvengers,
        createAvenger,
        deleteAvengers
    }}>{children}</AppContext.Provider>
}

export default AppContextProvider;
Let’s create the reducers for this context in the AppReducer.js file, which will contain a reducer case for each operation in the application context.
```javascript
const updateItem = (avengers, data) => {
  let avenger = avengers.find((avenger) => avenger.id === data.id);
  let updatedAvenger = { ...avenger, ...data };
  let avengerIndex = avengers.findIndex((avenger) => avenger.id === data.id);
  return [
    ...avengers.slice(0, avengerIndex),
    updatedAvenger,
    ...avengers.slice(++avengerIndex),
  ];
}

const deleteItem = (avengers, id) => {
  return avengers.filter((avenger) => avenger.data.id !== id)
}

const AppReducer = (state, action) => {
  switch (action.type) {
    case 'SELECT_AVENGER':
      return { ...state, isEditing: true, avenger: action.payload }
    case 'CLEAR_AVENGER':
      return { ...state, isEditing: false, avenger: { name: '', description: '', power: '' } }
    case 'UPDATE_AVENGER':
      return { ...state, isEditing: false, avengers: updateItem(state.avengers, action.payload) }
    case 'GET_AVENGER':
      return { ...state, avenger: action.payload.data }
    case 'GET_AVENGERS':
      return {
        ...state,
        avengers: Array.isArray(action.payload && action.payload.data)
          ? action.payload.data
          : [{ ...action.payload }]
      };
    case 'CREATE_AVENGER':
      return { ...state, avengers: [{ data: action.payload }, ...state.avengers] };
    case 'DELETE_AVENGER':
      return { ...state, avengers: deleteItem(state.avengers, action.payload) };
    case 'LOGIN':
      return { ...state, user: action.payload, isLoggedIn: true };
    case 'LOGOUT':
      return { ...state, user: null, isLoggedIn: false };
    default:
      return state
  }
}

export default AppReducer;
```
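Because AppReducer is plain JavaScript, the immutable-update pattern used in updateItem can be sanity-checked outside React. Here is a minimal sketch with hypothetical data (flat objects, not tied to Fauna's response shape):

```javascript
// Same slice-and-spread pattern as updateItem above, on flat objects.
const updateItem = (avengers, data) => {
  const index = avengers.findIndex((a) => a.id === data.id);
  const updated = { ...avengers[index], ...data };
  return [...avengers.slice(0, index), updated, ...avengers.slice(index + 1)];
};

const list = [
  { id: 1, name: "Thor", power: "Mjolnir" },
  { id: 2, name: "Hulk", power: "Strength" },
];
const next = updateItem(list, { id: 2, power: "Smash" });

console.log(next[1].power); // "Smash"
console.log(list[1].power); // "Strength" -- the original array is untouched
```

The point of returning a new array rather than mutating state.avengers is that React re-renders context consumers only when they receive a new reference.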
Since the application context is now available, we can fetch data from the Netlify functions that we have created and persist them in our application context. So let’s see how to call one of these functions.
```javascript
const { avengers, getAvengers } = useContext(AppContext);

const GetAvengers = async () => {
  let { data } = await axios.get('/api/GetAvengers');
  getAvengers(data)
}
```
To get the data to the application contexts, let’s import the function
getAvengers from our application context and pass the data fetched by the get call. This function will call the reducer function, which will keep the data in the context. To access the context, we can use the attribute called
avengers. Next, let’s see how we could save data on the avengers collection.
```javascript
const { createAvenger } = useContext(AppContext);

const CreateAvenger = async (e) => {
  e.preventDefault();
  let new_avenger = { id: uuid(), ...newAvenger }
  await axios.post('/api/AddAvenger', new_avenger);
  clear();
  createAvenger(new_avenger)
}
```
The newAvenger object above is the state object that keeps the form data. Notice that we pass a new id of type uuid with each of our documents, so each document saved in Fauna carries its own id. We then use the createAvenger function from the application context to save the data in our context as well. Similarly, we can invoke all of the Netlify functions for CRUD operations like this via Axios.
How to deploy the application to Netlify
Now that we have a working application, we can deploy this app to Netlify. There are several ways that we can deploy this application:
- Connecting and deploying the application through GitHub
- Deploying the application through the Netlify CLI
Using the CLI will prompt you to enter specific details and selections, and the CLI will handle the rest. In this example, though, we will deploy the application through GitHub. First, log in to the Netlify dashboard and click on the New Site from Git button. Next, it will prompt you to select the repo you need to deploy and the configuration for your site, like the build command, build folder, etc.
How to authenticate and authorize functions by Netlify Identity
Netlify Identity provides a full suite of authentication functionality for your application, which helps us manage authenticated users throughout the application. Netlify Identity can be integrated easily into the application without any other third-party services or libraries. To enable Netlify Identity, log in to the Netlify dashboard, go to the Identity tab under the deployed site, and enable the identity feature.
Enabling Identity will provide a link to your Netlify Identity instance. You will have to copy that URL and add it to the .env file of your application as REACT_APP_NETLIFY. Next, we need to add Netlify Identity to our React application through the netlify-identity-widget and to the Netlify functions. But first, let's pass the REACT_APP_NETLIFY property to the Identity context provider component in the index.js file.
```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import "react-netlify-identity-widget/styles.css"
import 'bootstrap/dist/css/bootstrap.css';
import App from './App';
import { IdentityContextProvider } from "react-netlify-identity-widget"

const url = process.env.REACT_APP_NETLIFY;

ReactDOM.render(
  <IdentityContextProvider url={url}>
    <App />
  </IdentityContextProvider>,
  document.getElementById('root')
);
```
This component is the navigation bar that we use in this application. It sits on top of all the other components, making it the ideal place to handle authentication. The react-netlify-identity-widget adds another component that handles user sign-in and sign-up.
Next, let’s use the Identity in our Netlify functions. Identity will introduce some minor modifications to our functions, like the below function
GetAvenger.
```javascript
const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  if (context.clientContext.user) {
    const {
      queryStringParameters: { id },
    } = event;
    try {
      const avenger = await clientQuery.query(
        q.Get(
          q.Match(q.Index('avenger_by_id'), id)
        )
      );
      return responseObj(200, avenger)
    } catch (error) {
      console.log(error)
      return responseObj(500, error);
    }
  } else {
    return responseObj(401, 'Unauthorized');
  }
};
```
The context of each request contains a property called clientContext, which holds the authenticated user's details. In the above example, we use a simple if condition to check the user context.
To get the
clientContext in each of our requests, we need to pass the user token through the Authorization Headers.
```javascript
const { user } = useIdentityContext();

const GetAvenger = async (id) => {
  let { data } = await axios.get('/api/GetAvenger/?id=' + id, user && {
    headers: {
      Authorization: `Bearer ${user.token.access_token}`
    }
  });
  getAvenger(data)
}
```
This user token will be available in the user context once logged in to the application through the netlify identity widget.
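The `user && { headers: ... }` expression above is just a conditional-config idiom: when user is null, the whole expression evaluates to null and Axios receives no extra config. A standalone sketch (the token value here is made up):

```javascript
// Build an Axios config object only when a user is logged in.
const buildConfig = (user) =>
  user && { headers: { Authorization: `Bearer ${user.token.access_token}` } };

console.log(buildConfig(null)); // null -> the request goes out without the header
const cfg = buildConfig({ token: { access_token: "abc123" } });
console.log(cfg.headers.Authorization); // "Bearer abc123"
```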
As you can see, Netlify functions and Fauna look to be a promising duo for building serverless applications. You can follow this GitHub repo for the complete code and refer to this URL for the working demo.
Conclusion
In conclusion, Fauna and Netlify look to be a promising duo for building serverless applications. Netlify also provides the flexibility to extend its functionality through plugins to enhance the experience. The pay-as-you-go pricing plan is ideal for developers getting started with Fauna. Fauna is extremely fast, and it auto-scales, so developers have more time than ever to focus on their development. Fauna can handle the kinds of complex database operations you would find in relational, document, graph, and temporal databases. Fauna drivers support all the major platforms and languages, such as Android, C#, Go, Java, JavaScript, Python, Ruby, Scala, and Swift. With all these excellent features, Fauna looks to be one of the best serverless databases. For more information, go through the Fauna documentation.
Awesome write up! I’ll be looking into Fauna and Netlify Functions for future projects! | https://css-tricks.com/accessing-data-netlify-functions-react/ | CC-MAIN-2022-40 | en | refinedweb |
Semaphore concept
A semaphore is essentially a counter. (A plain global variable cannot do this job: processes are independent of one another, so they cannot necessarily see it, and even if they could, incrementing it with ++ is not guaranteed to be an atomic operation.) Unlike a pipe, its main purpose is not to transfer data but to protect shared data objects read by multiple processes (the semaphore itself is also a critical resource), so that a resource is used by only one process at a time.
Semaphore classification
For a variety of reasons, Linux provides several semaphore implementations suited to different situations. They can be classified as follows:
User-space semaphores run in user mode. For example, if a file must be shared between processes, only the process that obtains the semaphore may open the file while the other processes go to sleep. A process can also inspect the current value of the semaphore to decide whether to enter the critical section.
Kernel semaphores run inside the Linux kernel and mainly provide mutual exclusion for critical kernel resources. For example, if a device may be opened by only one process at a time, a user-space process whose open routine fails will be put to sleep.
POSIX named semaphores

Mainly used with threads.
```c
#include <semaphore.h>

sem_t *sem_open(const char *name, int oflag, mode_t mode, unsigned int value);
int sem_wait(sem_t *sem);
int sem_trywait(sem_t *sem);
int sem_post(sem_t *sem);
int sem_close(sem_t *sem);
int sem_unlink(const char *name);
```
Every sem_open should eventually be matched by a sem_close and a sem_unlink wherever the semaphore was opened, but only the final sem_unlink actually takes effect and removes the name.
POSIX unnamed semaphores

Mainly used with threads.
```c
#include <semaphore.h>

sem_t sem;

/* pshared == 0: shared between the threads of one process;
   pshared == 1: shared between processes (e.g. parent and child) */
int sem_init(sem_t *sem, int pshared, unsigned int val);
int sem_wait(sem_t *sem);     /* blocking */
int sem_trywait(sem_t *sem);  /* non-blocking */
int sem_post(sem_t *sem);
int sem_destroy(sem_t *sem);
```

To share an unnamed semaphore between processes, sem must be placed in a shared memory region (mmap, shm_open, shmget); a global variable of the parent process, the heap, or the stack will not work.
Kernel semaphores:

```c
#include <asm/semaphore.h>

void sema_init(struct semaphore *sem, int val);
void down(struct semaphore *sem);              /* may sleep */
int down_interruptible(struct semaphore *sem); /* interruptible */
int down_trylock(struct semaphore *sem);       /* non-blocking */
void up(struct semaphore *sem);
```
In addition, semaphores can be classified in another way: binary semaphores and counting semaphores.
Binary semaphore:

As the name suggests, it takes only the two values 0 and 1, making it equivalent to a mutex. When the value is 1, the resource is available; when the value is 0, the resource is locked, and a process trying to acquire it blocks and cannot continue.
Counting semaphore:

Its value ranges between 0 and some upper limit.
How semaphores work
Only two kinds of operations can be performed on a semaphore: waiting and signaling. These are the classic P and V operations, P(sv) and V(sv), which behave as follows:
(1)P(sv):
If the value of sv is greater than zero, decrement it by 1; if the value is zero, suspend the execution of the process.
(2)V(sv):
If some other process has been suspended waiting on sv, let it resume running; if no process is suspended waiting on sv, increment sv by 1.
All P and V operations on a semaphore are atomic (they have to be, since the semaphore protects critical resources).

Note: an atomic operation is one that behaves like a single instruction; its execution cannot be interrupted.
System V IPC
Before explaining System V semaphores, let's first understand what System V IPC is.

Three types of IPC are collectively known as System V IPC:
- System V Semaphore
- System V Message queue
- System V Shared memory
The three types of System V IPC have some similarities in how their functions are accessed and in the information the kernel maintains for them, mainly including:
- IPC keys and the ftok function
- the ipc_perm structure
- user access permissions specified when creating or opening
- the ipcs and ipcrm commands
The following table summarizes all of the System V IPC functions.
IPC keys and the ftok function
All three types of System V IPC use an IPC key as their identifier. The IPC key is an integer of type key_t, which is defined in sys/types.h.

The IPC key is usually produced by the ftok function, which combines an existing pathname (pathname) and a nonzero integer (id) into a key_t value, i.e. the IPC key.
```c
#include <sys/ipc.h>

/* Returns the IPC key on success, -1 on failure */
key_t ftok(const char *pathname, int id);
```
Parameter description:

- pathname must remain stable while the program runs; it cannot be repeatedly created and deleted
- id must be nonzero; it may be positive or negative
ipc_perm structure
The kernel maintains an information structure for each IPC object, namely the struct ipc_perm structure. This structure, along with the constant values used by the System V IPC functions, is defined in the sys/ipc.h header file.
```c
struct ipc_perm {
    uid_t   uid;  /* owner's user id            */
    gid_t   gid;  /* owner's group id           */
    uid_t   cuid; /* creator's user id          */
    gid_t   cgid; /* creator's group id         */
    mode_t  mode; /* read-write permissions     */
    ulong_t seq;  /* slot usage sequence number */
    key_t   key;  /* IPC key                    */
};
```
Create and open IPC object
An IPC object is created or opened with the corresponding xxxget function. These functions all have two parameters in common:

- key, the IPC key, of type key_t
- oflag, which specifies the object's read-write permissions (ipc_perm.mode) and whether to create a new IPC object or open an existing one
For the key parameter, an application has two choices:

- call ftok, passing it a pathname and id
- specify IPC_PRIVATE as the key, which guarantees a new, unique IPC object; this flag cannot be used to open an existing IPC object, only to create a new one
As mentioned above, the oflag parameter carries both pieces of information, the read-write permissions and whether to create or open:

- you can specify the IPC_CREAT flag, whose meaning matches that of O_CREAT in POSIX IPC
- you can also OR in the constant values shown in the following table to specify read-write permissions
ipcs and ipcrm command
Because the three types of System V IPC are not identified by filesystem pathnames, the ls and rm commands cannot be used to view and delete them; the ipcs and ipcrm commands exist to view and delete System V IPC objects.
```
usage: ipcs -asmq -tclup
       ipcs [-s -m -q] -i id
       ipcs -h for help

usage: ipcrm [ [-q msqid] [-m shmid] [-s semid]
               [-Q msgkey] [-M shmkey] [-S semkey] ... ]
```
SYSTEM V Semaphore
System V semaphores are not as easy to use as POSIX semaphores, but they are older, and as a result they are more widely deployed (especially on older systems).

A System V semaphore is a set of counting semaphores, i.e. a collection of one or more semaphores, each of which is a counting semaphore. (Note: a System V semaphore is a set of counting semaphores, while a POSIX semaphore is a single counting semaphore.)
All of these functions share the same header files:

```c
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
```
Create semaphores
```c
int semget(key_t key, int nsems, int flags);
/* Returns the semaphore set ID on success, -1 on error */
```
(1) The first parameter, key, is a (unique, nonzero) integer. When the system establishes IPC communication (a message queue, semaphore, or shared memory), an ID value must be specified. The key value is usually obtained through the ftok function, and the kernel turns it into an identifier; for two processes to see the same semaphore set, they simply use the same key value.
(2) The second parameter, nsems, specifies the number of semaphores required in the semaphore set; its value is almost always 1.
(3) The third parameter, flags, is a set of flags. To create a new semaphore when one does not yet exist, set flags to IPC_CREAT bitwise-ORed with the file permissions.
With IPC_CREAT alone, no error occurs even if the given key belongs to an existing semaphore set; the existing set (with its previous permissions) is simply returned. With IPC_CREAT | IPC_EXCL, a new, unique semaphore set is created, and an error is returned if a set with that key already exists.
Delete and initialize semaphores
```c
int semctl(int semid, int semnum, int cmd, ...);
```
Function:

Performs control operations on a semaphore.

Parameters:

- semid indicates the semaphore set to operate on
- semnum indicates a member of the semaphore set (0, 1, and so on, up to nsems-1); it is used only for GETVAL, SETVAL, GETNCNT, GETZCNT, and GETPID, and is usually 0, i.e. the first semaphore
- cmd specifies the operation on the semaphore, e.g. IPC_STAT, GETVAL, SETVAL, IPC_RMID
- arg is an optional parameter whose meaning depends on the third parameter, cmd
Return value:

On success, the return value depends on cmd: IPC_STAT, SETVAL, and IPC_RMID return 0, while GETVAL returns the current value of the semaphore. On error, -1 is returned.
When needed, the fourth parameter is usually declared as union semun arg, defined as follows:

```c
union semun {
    int val;               /* value for SETVAL                    */
    struct semid_ds *buf;  /* buffer for IPC_STAT, IPC_SET        */
    unsigned short *arry;  /* array for GETALL, SETALL            */
    struct seminfo *__buf; /* buffer for IPC_INFO (Linux-specific) */
};
```
- (1) sem_id is the semaphore identifier returned by semget
- (2) semnum selects which semaphore of the semaphore set to act on
- (3) cmd is usually one of the following two values

SETVAL: used to initialize a semaphore to a known value. The value is taken from the val member of union semun; its purpose is to set the semaphore before it is used for the first time.

IPC_RMID: used to delete a semaphore identifier that is no longer needed. For deletion the optional fourth argument is not required; three parameters suffice.
Structure
Because System V semaphores come into being as the kernel starts up, we can find static struct ipc_ids sem_ids; in the source file sem.c. It is the entry point for System V semaphores, and it therefore exists for as long as the system runs. The information it holds describes a resource (semaphore sets in the case of sem; the same applies to msg and shm), for example:
```c
struct ipc_ids {
    int in_use;                  /* number of allocated resources          */
    int max_id;                  /* highest in-use slot index              */
    unsigned short seq;          /* next slot sequence number to assign    */
    unsigned short seq_max;      /* maximum slot sequence number           */
    struct semaphore sem;        /* semaphore protecting ipc_ids           */
    struct ipc_id_ary nullentry; /* if the IPC resource could not be
                                    initialized, entries points here       */
    struct ipc_id_ary *entries;  /* pointer to the resource's ipc_id_ary
                                    data structure                         */
};
```
Its last member, entries, points to a struct ipc_id_ary, which has two members:

```c
struct ipc_id_ary {
    int size;                   /* length of the array                  */
    struct kern_ipc_perm *p[0]; /* variable-length array of pointers;
                                   its size at kernel init is 128       */
};
```
As the figure above showed, sem_ids.entries->p points to the sem_array data structure. Why?

Let's look at the semaphore-set data structure sem_array:
```c
struct sem_array {
    /* ... */
    struct sem_queue *sem_pending;       /* pending operations to be processed
                                            (head of the pending queue)      */
    struct sem_queue **sem_pending_last; /* last pending operation
                                            (tail of the pending queue)      */
    struct sem_undo *undo;               /* undo requests on this array      */
    unsigned long sem_nsems;             /* number of semaphores in the set  */
};
```
In this way sem_ids.entries is connected to the semaphore set sem_array. But why connect through kern_ipc_perm instead of having sem_ids point to sem_array directly? Because semaphores, message queues, and shared memory are implemented by essentially the same mechanism: all three are managed through the ipc_id_ary data structure and are associated with their respective data structures via kern_ipc_perm. With that clear, we can later look at how the kernel function sys_semget() creates a semaphore set and adds it to sem_ids.entries.
Change the value of the semaphore
```c
int semop(int semid, struct sembuf *sops, size_t nops);
```
Function:

Operates on a semaphore set: the P and V operations.

Parameters:

- semid: the semaphore set identifier
- sops: points to an array of struct sembuf structures describing the operations
- nops: the number of elements in the sops array, i.e. the number of struct sembuf variables, which must be at least 1. The most common setting is 1, operating on just one semaphore.
sembuf is defined as follows:
```c
struct sembuf {
    short sem_num; /* semaphore number within the set; 0 unless you
                      use a set of several semaphores                */
    short sem_op;  /* amount to change the semaphore by in one
                      operation: usually -1 for a P (wait) operation
                      or +1 for a V (signal) operation               */
    short sem_flg; /* usually SEM_UNDO, so the OS tracks the
                      semaphore and releases it automatically if the
                      process terminates without doing so            */
};
```
Return value:

0 on success, -1 on error.
General programming steps:

1. Create a semaphore set, or obtain one that already exists in the system: call semget(); different processes obtain the same semaphore set by using the same semaphore key.
2. Initialize the semaphores: use the SETVAL operation of semctl(); when used as a binary semaphore, it is usually initialized to 1.
3. Perform P and V operations on the semaphores: call the semop() function to implement synchronization and mutual exclusion between processes.
4. If a semaphore set is no longer needed, remove it from the system: use the IPC_RMID operation of semctl().
Example

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <sys/sem.h>
#include <sys/ipc.h>

#define USE_SYSTEMV_SEM 1
#define DELAY_TIME 2

union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

/* Set semaphore sem_id to init_value */
int init_sem(int sem_id, int init_value)
{
    union semun sem_union;
    sem_union.val = init_value;
    if (semctl(sem_id, 0, SETVAL, sem_union) == -1) {
        perror("Sem init");
        exit(1);
    }
    return 0;
}

/* Delete the semaphore sem_id */
int del_sem(int sem_id)
{
    union semun sem_union;
    if (semctl(sem_id, 0, IPC_RMID, sem_union) == -1) {
        perror("Sem delete");
        exit(1);
    }
    return 0;
}

/* Perform a P operation on sem_id */
int sem_p(int sem_id)
{
    struct sembuf sem_buf;
    sem_buf.sem_num = 0;        /* semaphore number */
    sem_buf.sem_op = -1;        /* P operation */
    sem_buf.sem_flg = SEM_UNDO; /* if the process exits without releasing
                                   the semaphore, the system releases it */
    if (semop(sem_id, &sem_buf, 1) == -1) {
        perror("Sem P operation");
        exit(1);
    }
    return 0;
}

/* Perform a V operation on sem_id */
int sem_v(int sem_id)
{
    struct sembuf sem_buf;
    sem_buf.sem_num = 0;
    sem_buf.sem_op = 1;         /* V operation */
    sem_buf.sem_flg = SEM_UNDO;
    if (semop(sem_id, &sem_buf, 1) == -1) {
        perror("Sem V operation");
        exit(1);
    }
    return 0;
}

int main()
{
    pid_t pid;
#if USE_SYSTEMV_SEM
    int sem_id;
    key_t sem_key;

    sem_key = ftok(".", 'A');
    printf("sem_key=%x\n", sem_key);
    /* Create a semaphore set with mode 0666, returning sem_id */
    sem_id = semget(sem_key, 1, 0666 | IPC_CREAT);
    printf("sem_id=%x\n", sem_id);
    /* Initialize sem_id to 1 */
    init_sem(sem_id, 1);
#endif
    if ((pid = fork()) < 0) {
        perror("Fork error!\n");
        exit(1);
    } else if (pid == 0) {
#if USE_SYSTEMV_SEM
        sem_p(sem_id); /* P operation */
#endif
        printf("Child running...\n");
        sleep(DELAY_TIME);
        printf("Child %d, returned value: %d.\n", getpid(), pid);
#if USE_SYSTEMV_SEM
        sem_v(sem_id); /* V operation */
#endif
        exit(0);
    } else {
#if USE_SYSTEMV_SEM
        sem_p(sem_id); /* P operation */
#endif
        printf("Parent running!\n");
        sleep(DELAY_TIME);
        printf("Parent %d, returned value: %d.\n", getpid(), pid);
#if USE_SYSTEMV_SEM
        sem_v(sem_id); /* V operation */
        waitpid(pid, 0, 0);
        del_sem(sem_id);
#endif
        exit(0);
    }
}
```
The operation results are as follows :
| https://javamana.com/2021/04/20210416122017221o.html | CC-MAIN-2022-40 | en | refinedweb |
Is it possible to implement an ASP.NET Web API (.NET Framework) backend with Angular 4?

I've followed the docs on angular.io and I've already implemented it on my machine, with AOT.

By the way, I am using Visual Studio Enterprise 2015.

My question is how to configure the startup (Startup.cs) of my ASP.NET Web API (.NET Framework) project for my Angular 4 app.

What I always see is Angular 4 implemented with ASP.NET Core.

So I can't display the data from the Web API in my Angular 4 project.
I am not an Angular expert, but after understanding your requirements I can provide you a few helpful links.
1. How to display my Angular 4 project (index.html) in _Layout.cshtml with AOT

-> Take a look at this URL and you will find a complete guide to setting up the project (although it is Angular 2, Angular 2 and 4 are almost the same).

2. What is the configuration for my Angular 4 project to consume the Web API I created, dynamically?

-> You may have to configure your project's routes. As you posted above, your projects are in the same solution, so you need proper routing; all the details are in the above link, which is a step-by-step guide. I believe you need to configure routing for your MVC Web API project using the code below:
```csharp
public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            name: "Default",
            url: "{*anything}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}
```
Other helpful links which may help you.

Hope this helps. If you look carefully at the first and second links of this answer, you will get your answers for sure.
| https://qawithexperts.com/questions/149/aspnet-web-api-and-angular-4-using-net-framework-not-net-cor | CC-MAIN-2022-40 | en | refinedweb |
In a previous post we looked at different ways to render outputs (sprites, rectangles, lines, etc.) in the DragonRuby Game Toolkit.
The post ended by hinting at a more efficient way to render outputs instead of adding them to e.g.
args.outputs.solids or
args.outputs.sprites each tick.
This post explores the world of "static outputs"!
Static What?
First of all, we should address the most confusing part of all this.
"Static" does not mean the images don't move or change. Instead, it means that render "queue" is not cleared after every tick.
Normally, one would load up the queue each tick, like this:
```ruby
def tick args
  # Render a black rectangle
  args.outputs.solids << {
    x: 100,
    y: 200,
    w: 300, # width
    h: 400  # height
  }
end
```
But this is kind of wasteful. We are creating a new hash table each tick (60 ticks/second) and then throwing it away. Also each tick we are filling up the
args.outputs.solids queue and then emptying it.
Instead, why not create the hash table once, load up the queue once, and then re-use them?
That's the idea of static outputs!
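Outside DragonRuby, the two behaviors can be sketched in plain Ruby (the variable names here are made up; only the queue semantics matter):

```ruby
# Simulate three ticks of an engine with a regular queue (emptied every
# tick) and a static queue (loaded once, never emptied).
regular_queue = []
static_queue  = []
square = { x: 0, y: 400, w: 40, h: 40 }

static_queue << square # like static_solids: loaded once, reused

3.times do |tick|
  regular_queue << { x: tick * 10, y: 400, w: 40, h: 40 } # new hash per tick
  square[:x] += 10                                        # mutate the cached hash
  # ... the engine would render both queues here ...
  regular_queue.clear # the regular queue is emptied after every tick
end

puts regular_queue.length # 0  -- cleared each tick
puts static_queue.length  # 1  -- still holds the same object
puts square[:x]           # 30 -- the cached hash was updated in place
```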
There are static versions for each rendered type:
args.outputs.static_borders
args.outputs.static_labels
args.outputs.static_primitives
args.outputs.static_solids
args.outputs.static_sprites
Going Static
Starting Out
Here's an example with comments explaining what the code is doing. This "game" simply moves a square back and forth across the screen. This is the entire program!
```ruby
def tick args
  # Initialize the x location of the square
  args.state.x ||= 0
  # Initialize the direction/velocity
  args.state.direction ||= 10

  # If we hit the sides, change direction
  if args.state.x > args.grid.right or args.state.x < args.grid.left
    args.state.direction = -args.state.direction
  end

  # Update the x location
  args.state.x += args.state.direction

  # Build the square
  square = {
    x: args.state.x,
    y: 400,
    w: 40, # width
    h: 40  # height
  }

  # Add the square to the render queue
  args.outputs.solids << square
end
```
The resulting output looks like:
This example introduces
args.state. This is basically a persistent bag you can throw anything into. (For Rubyists - this is like OpenStruct.)
x and
direction are not special, they are just variables we are defining. We use
||= to initialize them because we only want to set the values on the first tick.
This example illustrates the point from above - every tick it creates a new square and adds it to the queue. The queue is emptied out and then the code starts all over again.
Seems wasteful, right?
Caching the Objects
First thing I think of is - "why not create the square once, then just update the object each tick? Does that work?" Yes! It does.
```ruby
def tick args
  args.state.direction ||= 10
  args.state.square ||= {
    x: 0,
    y: 400,
    w: 40,
    h: 40,
  }

  if args.state.square[:x] > args.grid.right or args.state.square[:x] < args.grid.left
    args.state.direction = -args.state.direction
  end

  args.state.square[:x] += args.state.direction

  args.outputs.solids << args.state.square
end
```
In this code, we create the
square only once and then store it in
args.state.square.
Instead of having a separate
x variable, the code updates the
x property on the square directly.
This is better, but we are still updating
args.outputs.solids each tick.
Full Static
```ruby
def tick args
  args.state.direction ||= 10
  args.state.square ||= {
    x: 0,
    y: 400,
    w: 40,
    h: 40,
  }

  # On the first tick, add the square to the render queue
  if args.tick_count == 0
    args.outputs.static_solids << args.state.square
  end

  if args.state.square[:x] > args.grid.right or args.state.square[:x] < args.grid.left
    args.state.direction = -args.state.direction
  end

  args.state.square[:x] += args.state.direction
end
```
In this code, we use the fact that the first
args.tick_count is
0 to add the
square to
args.outputs.static_solids just once. It will continue to be rendered on each tick.
Performance
Intuitively, since the code is doing less, it should be faster. But does it really make a difference?
It depends on your game, how much it's doing per tick, how many sprites you are rendering, and what platform/hardware it's running on.
The examples above? Not going to see any difference using
static_solids.
But DragonRuby contains two examples that directly compare
args.outputs.sprites vs.
args.outputs.static_sprites (here and here).
In these examples, you can play with the number of "stars" rendered to see different performance. On my ancient laptop, I do not see a performance difference until around 3,000 stars.
Your mileage may vary, though!
Should I Always Use the Static Versions?
It depends! Probably not?
If your code mainly manipulates the same objects around the screen and always renders them in the same order, then using the
static_ approach might be simpler and faster.
But in many cases it might be easier to simply set up the render queues each tick, especially if the objects rendered or their ordering change regularly. Otherwise, managing the state of the rendering queues can become cumbersome. (We haven't even talked about clearing the static queues, for example.)
Some of this comes down to personal preference and how you would like to structure your code. But hopefully this post has helped explain how to use the
args.outputs.static_* methods in your game!
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/presidentbeef/dragonruby-static-outputs-p2c | CC-MAIN-2022-40 | en | refinedweb |
Recovery Toolbox For Outlook Crack Keygen Free
How to remove ms outlook setup password or crack?
Dec 11, 2017 · how to remove ms outlook setup password or crack v 5.11 crack keygen debfree download. Free Download
Please note that we take no responsibility for the settings and the settings for the screenshots above. This guide will help you uninstall ms outlook setup password or crack clean. Download and install the removal tool.To uninstall. The primary difference is that you can’t uninstall ms outlook setup password or crack clean. For a virtual machine instance, you’ll have to run the removal tool on the host operating system.
Download ms outlook setup password or crack v4.0.0.0 cracked keygen full version for windows at ba9c2.com.
Download free Ms outlook Setup Password or Crack – 123SoftwareCrack
hope to receive your. Project 2013 | Clean Up your Computer,. Cracked Office 2013. Free. Microsoft Office Setup Password or Crack Free Download (Office 2017).
Download and install the removal tool.No DVD-ROMs used, no pesky key cards or passwords to bypass,. You can also run the removal tool on your. If you have an Office setup serial number,.
Microsoft Office Setup Password or Crack. Office Setup Password or Crack. Best Review. Microsoft Customer Service: 800-547-7247.. Microsoft Account Password Fix.
Download the removal tool from the link provided below. Click Download to download and extract the tool. Follow the instructions provided in the tool to complete the task.
is the best and free software that has all the tools for you to fix you may be facing problems with. By downloading and using this program it will help you to repair. There’s an Outlook setup crack key generator that you can use to program a crack key for your.
12/20/2018. MS Outlook Customer Service : 800-547-7247. If you are using MS Outlook and you forgot your password or. Services: Outlook Support Desk or office@microsoft.com.
Account Password Fix. Microsoft Customer Service: 800-547-7247. The free tool will help you to fix your problem with. Office Setup Password or Crack Free Download (Office 2017).
You can use the steps below to remove ms outlook setup password or crack from your computer.
4 / 5. ‘8542787’ rated by 68 users. ‘. Other Microsoft Office Programs (Office 2010). If you want to be able to download the full version
The image is posted here by the author. The serial, product key, recovery key and support number which you download in this website is antispyware_2015-002-1-for-windows-serial-key-2017-106-,
Recovery Toolbox For Outlook 2017 Crack is a. Thanks to the recovery toolbox key, these files can be easily. When you are using such kind of the software, you can download it from free registration page.
the best way to create a free email account using outlook. Recuperare Toolbox Outlook Password 1-50. 10/2013 -. 1:21pm outlook 2013 registration code renly 2:06pm.

Q:
Computing bounds on a confidence interval
I have written a very simple program that generates a random number from a normal distribution.
import random

def generate_test_tuple():
    low = 2.5
    high = 3.5
    lower = (low-5*random.randint(0,1))*random.randint(0,1)
    upper = (high+5*random.randint(0,1))*random.randint(0,1)
    return lower, upper

print generate_test_tuple()
Now, I have two questions
I am supposed to compute the following bounds:
bounds = [2.5,3.5]
But how? All I see from this syntax is that:
bounds = (low-5*random.randint(0,1))*random.randint(0,1)
What I see is that it is just computing a random number from the lower bound to the upper bound. Could someone please show me the whole process of how we get the bounds from this?
What I think I am missing in my understanding is that I am missing the upper bound.
A:
You seem to be mixing up randint and random.random. randint(a, b) returns a random integer in the inclusive range [a, b], while random.random() returns a random float in the range [0.0, 1.0).
In your case, you can use randint and random.random instead:
def generate_test_tuple():
In this article we’re going to take a look at how to interact with the ESP32 and ESP8266 GPIOs using MicroPython. We’ll show you how to read digital and analog inputs, how to control digital outputs and how to generate PWM signals.
Alternatively, if you’re having trouble using uPyCraft IDE, we recommend using Thonny IDE instead: Getting Started with Thonny MicroPython (Python) IDE for ESP32 and ESP8266
If this is your first time dealing with MicroPython you may find these next tutorials useful:
- Getting Started with MicroPython on ESP32 and ESP8266
- MicroPython Programming Basics with ESP32 and ESP8266
Project Overview
With this tutorial you’ll learn how to use the ESP32 or ESP8266 GPIOs with MicroPython. You can read the separate guide for each topic:
We’ll build a simple example that works as follows:
- Read the state of a pushbutton and set the LED state accordingly – when you press the pushbutton the LED lights up.
- Read the voltage from a potentiometer and dim an LED according to the position of the potentiometer’s shaft.
Schematic
The circuit for this project involves wiring two LEDs, a pushbutton, and a potentiometer. Here’s a list of all the parts needed to build the circuit:
- ESP32 or ESP8266 (read: ESP32 vs ESP8266)
- 2x LEDs
- 2x 330 Ohm resistor
- Pushbutton
- Potentiometer
- Breadboard
- Jumper wires
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
ESP32 – Schematic
Follow the next schematic diagram if you’re using an ESP32:
Note: the ESP32 supports analog reading in several GPIOs: 0, 2, 4, 12, 13, 14, 15, 25, 26, 27, 32, 33, 34, 35, 36, and 39.
Recommended reading: ESP32 Pinout Reference: Which GPIO pins should you use?
ESP8266 – Schematic
Follow the next schematic diagram if you’re using an ESP8266:
Note: the ESP8266 only supports analog reading in pin ADC0 (A0).
Code
Copy the following code to the main.py file in the uPyCraft IDE.
Note: analog reading works differently in ESP32 and ESP8266. The code works right away in ESP32. To use with ESP8266, you have to uncomment and comment the lines described in the MicroPython script.
# Complete project details at
# Created by Rui Santos

from machine import Pin, ADC, PWM
from time import sleep

led = Pin(2, Pin.OUT)
button = Pin(15, Pin.IN)

#Configure ADC for ESP32
pot = ADC(Pin(34))
pot.width(ADC.WIDTH_10BIT)
pot.atten(ADC.ATTN_11DB)

#Configure ADC for ESP8266
#pot = ADC(0)

led_pwm = PWM(Pin(4), 5000)

while True:
    button_state = button.value()
    led.value(button_state)

    pot_value = pot.read()
    led_pwm.duty(pot_value)
    sleep(0.1)
How the code works
Continue reading to learn how the code works.
Importing Libraries
To interact with the GPIOs you need to import the machine module that contains classes to interact with the GPIOs. Import the Pin class to interact with the pins, the ADC class to read analog value, and the PWM class to generate PWM signals.
from machine import Pin, ADC, PWM
Import the sleep() method from the time module. The sleep() method allows you to add delays to the code.
from time import sleep
Instantiating Pins
After importing all the necessary modules, instantiate a Pin object called led on GPIO 2 that is an OUTPUT.
led = Pin(2, Pin.OUT)
The Pin object accepts the following attributes in the following order:
Pin(Pin number, pin mode, pull, value)
- Pin number refers to the GPIO we want to control;
- Pin mode can be input (IN), output (OUT) or open-drain (OPEN_DRAIN);
- The pull argument is used if we want to activate a pull up or pull down internal resistor (PULL_UP, or PULL_DOWN);
- The value corresponds to the GPIO state (whether it is on or off): it can be 0 or 1 (True or False). Setting 1 means the GPIO is on. If we don’t pass any parameter, its state is 0 by default (that’s what we’ll do in this example).
After instantiating the led object, you need another instance of the Pin class for the pushbutton. The pushbutton is connected to GPIO 15 and it’s set as an input. So, it looks as follows:
button = Pin(15, Pin.IN)
Instantiating ADC
In the ESP32, to create an ADC object for the potentiometer on GPIO 34:
pot = ADC(Pin(34))
If you’re using an ESP8266, it only supports ADC on ADC0 (A0) pin. To instantiate an ADC object with the ESP8266:
pot = ADC(0)
The following line applies just to the ESP32. It defines that we want to be able to read voltage in full range. This means we want to read voltage from 0 to 3.3 V.
pot.atten(ADC.ATTN_11DB)
The next line means we want readings with 10 bit resolution (from 0 to 1023)
pot.width(ADC.WIDTH_10BIT)
The width() method accepts other parameters to set other resolutions:
- WIDTH_9BIT: range 0 to 511
- WIDTH_10BIT: range 0 to 1023
- WIDTH_11BIT: range 0 to 2047
- WIDTH_12BIT: range 0 to 4095
If you don’t specify the resolution, it will be 12-bit resolution by default on the ESP32.
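As a quick sanity check of how the resolution relates to readings, here is a plain-Python helper (my own addition, not part of the machine module) that converts a raw reading at a given bit width into a voltage, assuming the full 0–3.3 V range set by ATTN_11DB:

```python
def raw_to_volts(raw, bits=10, vref=3.3):
    """Convert a raw ADC reading to volts for a given resolution."""
    max_raw = (1 << bits) - 1  # 1023 for 10-bit, 4095 for 12-bit
    if not 0 <= raw <= max_raw:
        raise ValueError("reading out of range for this resolution")
    return raw / max_raw * vref
```

For example, a 10-bit reading of 1023 and a 12-bit reading of 4095 both map to the same full-scale voltage of 3.3 V.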
Instantiating PWM
Then, create a PWM object called led_pwm on GPIO 4 with 5000 Hz.
led_pwm = PWM(Pin(4), 5000)
To create a PWM object, you need to pass as parameters: pin, signal’s frequency, and duty cycle.
The frequency can be a value between 0 and 78125. A frequency of 5000 Hz for an LED works just fine.
The duty cycle can be a value between 0 and 1023. In which 1023 corresponds to 100% duty cycle (full brightness), and 0 corresponds to 0% duty cycle (unlit LED).
We’ll just set the duty in the while loop, so we don’t need to pass the duty cycle parameter at the moment. If you don’t set the duty cycle when instantiating the PWM object, it will be 0 by default.
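If you prefer to think in brightness percentages, a small helper (again my own addition, not part of MicroPython) can map a 0–100% value onto the 0–1023 duty range expected by duty():

```python
def percent_to_duty(percent):
    """Map a brightness percentage (0-100) to a 10-bit duty value (0-1023)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(percent / 100 * 1023)
```

With this helper, led_pwm.duty(percent_to_duty(75)) would set the LED to roughly three-quarters brightness.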
Getting the GPIO state
Then, we have a while loop that is always True. This is similar to the loop() function in the Arduino IDE.
We start by getting the button state and save it in the button_state variable. To get the pin state use the value() method as follows:
button_state = button.value()
This returns 1 or 0 depending on whether the button is pressed or not.
Setting the GPIO state
To set the pin state, use the value(state) method in the Pin object. In this case we’re setting the button_state variable as an argument. This way the LED turns on when we press the pushbutton:
led.value(button_state)
Reading analog inputs
To read an analog input, use the read() method on an ADC object (in this case the ADC object is called pot).
pot_value = pot.read()
Controlling duty cycle
To control the duty cycle, use the duty() method on the PWM object (led_pwm). The duty() method accepts a value between 0 and 1023 (in which 0 corresponds to 0% duty cycle, and 1023 to 100% duty cycle). So, pass as argument the pot_value (that varies between 0 and 1023). This way you change the duty cycle by rotating the potentiometer.
led_pwm.duty(pot_value)
Testing the Code
Upload the main.py file to your ESP32 or ESP8266. For that, open uPyCraft IDE and copy the code provided to the main.py file. Go to Tools > Serial and select the serial port. Select your board in Tools > Board.
Then, upload the code to the ESP32 or ESP8266 by pressing the Download and Run button.
Note: to get you familiar with uPyCraft IDE you can read the following tutorial – Getting Started with MicroPython on ESP32 and ESP8266
After uploading the code, press the ESP32/ESP8266 on-board EN/RST button to run the new script.
Now, test your setup. The LED should light up when you press the pushbutton.
The LED brightness changes when you rotate the potentiometer.
Wrapping Up
This simple example showed you how to read digital and analog inputs, control digital outputs and generate PWM signals with the ESP32 and ESP8266 boards using MicroPython.
If you like MicroPython, you may like the following projects:
- ESP32/ESP8266 MicroPython Interrrupts
- MicroPython: WS2812B Addressable RGB LEDs with ESP32/ESP8266
- Low Power Weather Station Datalogger using ESP8266 and BME280 with MicroPython
- ESP32/ESP8266 MicroPython Web Server
We hope you’ve found this article about how to control ESP32 and ESP8266 GPIOs with MicroPython useful. If you want to learn more about MicroPython, make sure you take a look at our eBook: MicroPython Programming with ESP32 and ESP8266.
Thanks for reading.
13 thoughts on “MicroPython with ESP32 and ESP8266: Interacting with GPIOs”
Excuse me, but are
DHT, TWI and SPI protocols supported by Python, now (Esp xxx + Arduino support these protocols, I am sure with I2C -tested last week end, almost sure for the 2 others -else docs would be massively wrong – But it may need time to interface with uPython
What is the difference w/r resource greediness between uPython and Arduino
(RAM/stack/Flash/other -I am very naive and ignorant- needs).
Denis,
at a REPL Prompt ( >>>) type:
import machine
help(machine)
You’ll get a long list of functions (and other stuff) which should include:
UART —
SPI —
I2C —
PWM —
ADC —
DAC —
SD —
Timer —
RTC —
All those classes handle the different protocols.
Also, is the ‘official’ reference for Micropthon on the ESP, so have a browse around there.
Dave
Thanks dave, for linking to the official site (is interesting) and for explaining how to use python help (I very seldom use python: always forget how to use its help) and that micropython help is the same (I am still wondering whether I shall use uPython or C++ : C++ eat less resources, is almost as concise, but does not have comfortable help -Arduino classes have, however-
Yes, DHT, TWI and SPI protocols are supported.
I wrote several articles using MicroPython here: lemariva.com/micropython .
A post with sensors (some using with SPI/I2C) is here: lemariva.com/blog/2018/06/tutorial-getting-started-with-micropython-sensors available.
I prefer to use Atom and the PyMakr plugin to load the code and debug it: lemariva.com/blog/2017/10/micropython-getting-started
For the DHT11 you have e.g. the following library: docs.micropython.org/en/latest/esp8266/tutorial/dht.html
MicroPython uses several resources.
* The ESP8266 with only 160kB RAM is very limited for MicroPython, you get usually a memory allocation error, when you import 3/4 libraries. You can compile your files using mpy-cross (read the links above) to reduce the RAM usage.
* On the ESP32, it works really good, especially with the
ESP32-wrover (4MB RAM or 8MB RAM version B).
* With the ESP32-WROOM (512Kb RAM) it works also good. Sometimes you need to compile your files too.
You can play with some code available in my GitHub: github.com/lemariva.
If you have further questions, contact me.
Hi Mauro.
Thank for sharing your projects about MicroPython.
Regards,
Sara 🙂
Hi Sara.
Thanks You!, for publishing about MicroPython. It’s a nice language.
Btw, really nice blog! My blog is really small in comparison with yours.
Let me know if I can help you with some tutorials.
Regards,
Mauro
I see you have a lot of tutorials using ESP32 and micropython too.
I definitely have to take a closer look at your projects.
I’ll tell Rui to take a look at your blog too.
Regards,
Sara 🙂
Well, thanks a lot, Mauro, for useful information w/r resource greediness. I almost decided to buy an ESP32, once I have ended playing (i. e trying Arduino -historical channel – scripts, which is easy -only tiny issue is with 5 volts peripherals…. – ; use wifi, I am not accustomed to) with ESP8266.
There remain a ESP32 resource I do not understand : ESP32 is a dual core, and one can manage (Arduino + freeRTOS) to use both cores (this may be interesting: say, transmitting data, at an unpredictable rate, and samplig other data at regular rate seems vey easy with dual cores (no interrupts). Does uPython manage parallelism?
Remains another detail: I noticed esptool used a 4 Mbauds , and nanoPis/RPis use at most 115200 bauds. I bet (worked with my ESP8266; in the general case, do not not know) having a lower than default/recommanded upload speed does not harm.
Yes, parallel processing is possible using the _thread class. It is only available on the ESP32, and WiPy2.0/3.0 (not on the ESP8266). It looks like this:
import _thread
import time

def ThreadWorker():
    while True:
        print("Hello from thread worker")
        time.sleep(2)

_thread.start_new_thread(ThreadWorker, ())
This project includes multitasking: Captive Portal
With respect to the bauds questions. If you are referring to a serial communication, I managed to connect a rpi to a ESP32 without any problem.
Rui,
thanks for the Tutorial.
I notice you don’t worry about debouncing the button, which is OK when you are just lighting an LED (and the eye can’t see the quick ON/OFF sequences). I’m trying to send button presses to a WWW site and get multiple operations every time I press a button.
I’m trying a timer interrupt and it quickly gets messy (setting up the timer alarm, the function to handle the buttons and then the actual main code). Do you know of a function I can use instead?
Dave
Hi Dave.
At the moment, I don’t have a code for what you are looking for.
But there is an example here: docs.dfrobot.com/upycraft/4.1.5%20irq.py.html that you can modify to put in your project. (I haven’t tested the code)
Regards,
Sara 🙂
Jan 17 2020 03:49 AM
Hello friends, I am just starting out with Azure Sphere IoT. I am trying to build one of the sample projects given on GitHub, but I am facing this error every time I try to run it with my IoT Hub. Where could I be going wrong? I have followed all the instructions mentioned in the sample Readme.md file and IoTHub.md file.
Error in monitor built-in event endpoint message: An error occurred during communication with 'DeviceGateway_942fe792a5c6427c8779bb6bc3209405:ihsuprodpnres011dednamespace.servicebus.windows.net:5671'. Check the connection information, then retry.
Thanks.
Jan 17 2020 02:48 PM
Hi @kenny1597
It looks like it's the connection between the Cloud Explorer in Visual Studio and the IoT Hub that is having some issues. It's trying to connect to the IoT Hub Default Endpoint to monitor messages that arriving in IoT Hub. This can be caused by several things, and the most common ones are if you are behind a firewall that blocks port 5671 which is the port used by AMQP protocol.
The other potential issue is with consumer groups on this IoT Hub default Endpoint. Do you have another app like IoT Hub explorer or your own custom made app that is monitoring messages on the same port? In which case what can happen is that the "Default" consumer group for that endpoint has reached its max # of readers.
Jan 30 2020 11:20 PM
Jan 31 2020 07:37 AM
@kenny1597 unfortunately in the Visual Studio extension you cannot setup the Consumer group. However if you are using Visual Studio Code, you can create a new Consumer Group (in the IoT Hub blade in the Azure Portal) and then configure the Visual Studio code extension to use that new Consumer Group for monitoring events.
Feb 12 2020 10:26 PM
@kenny1597 I have already implemented the sample and added the code on GitHub.
Roguelike Tutorial, using python3+tdl, part 7

The status bar will be drawn on a new off-screen console, panel = tdl.Console(SCREEN_WIDTH, PANEL_HEIGHT), using tdl's Console.draw_rect function for the rectangles.
def render_bar(x, y, total_width, name, value, maximum, bar_color, back_color):
    #render a bar (HP, experience, etc). first calculate the width of the bar
    bar_width = int(float(value) / maximum * total_width)

    #render the background first
    panel.draw_rect(x, y, total_width, 1, None, bg=back_color)

    #now render the bar on top
    if bar_width > 0:
        panel.draw_rect(x, y, bar_width, 1, None, bg=bar_color)
For extra clarity, the actual value and maximum are displayed as text over the bar, along with a caption ('Health', 'Mana', etc).
    #finally, some centered text with the values
    text = name + ': ' + str(value) + '/' + str(maximum)
    x_centered = x + (total_width-len(text))//2
    panel.draw_str(x_centered, y, text, fg=colors.white, bg=None)

In render_all, the panel is cleared, the bar is drawn, and the panel is blitted to the screen:

panel.clear(fg=colors.white, bg=colors.black)

#show the player's stats
render_bar(1, 1, BAR_WIDTH, 'HP', player.fighter.hp, player.fighter.max_hp, colors.light_red, colors.darker_red)

#blit the contents of "panel" to the root console
root.blit(panel, 0, PANEL_Y, SCREEN_WIDTH, PANEL_HEIGHT, 0, 0)

Remember to add the panel = tdl.Console(SCREEN_WIDTH, PANEL_HEIGHT) line before the main loop, next to the first root = tdl.init(...) call.
Ready to test! Let's print a friendly message before the main loop to welcome the player to our dungeon of doom:
#a warm welcoming message! message('Welcome stranger! Prepare to perish in the Tombs of the Ancient Kings.', colors.red)
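The message() helper called here was written in an earlier part of the tutorial and isn't shown in this excerpt. A minimal stand-alone version consistent with the word-wrap behavior it is supposed to have might look like this (MSG_WIDTH and MSG_HEIGHT are assumed constants, not the tutorial's exact values):

```python
import textwrap

MSG_WIDTH, MSG_HEIGHT = 40, 5  # assumed panel dimensions
game_msgs = []  # the message log: a list of (line, color) tuples

def message(new_msg, color=(255, 255, 255)):
    # split the message, if necessary, among multiple lines
    for line in textwrap.wrap(new_msg, MSG_WIDTH):
        # if the buffer is full, remove the first line to make room
        if len(game_msgs) == MSG_HEIGHT:
            del game_msgs[0]
        game_msgs.append((line, color))
```

Long messages are wrapped to the panel width and old lines scroll off the top once the buffer is full.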
The long message allows us to test the word-wrap. You can now replace all the calls to the standard print with calls to our own message function (all 4 of them). I made the player death message red (colors.red), and the monster death message orange (colors.orange); the others use the default. By the way, here's the list of colors. It's very handy, if you don't mind using a pre-defined palette of colors! As mentioned earlier, don't forget that our colors.py file uses underscores (light_red) rather than camel case (lightRed).

In tdl it's very easy to know the position of the mouse, and if there were any clicks: the tdl.event.get method returns information on both keyboard and mouse activity. See here for more details about mouse motion events, and here for the mouse button events.
We need to restructure the program a little bit to use this combined mouse and keyboard detection. Just before the main loop, add:
mouse_coord = (0, 0)
As our turn-based game gets more complex, we'll have to remove the option REALTIME = False. In the extras at the end of the tutorial, we'll discuss how to re-implement a real-time game.
For now, let's take this whole block of code
if REALTIME:
    keypress = False
    for event in tdl.event.get():
        if event.type == 'KEYDOWN':
            user_input = event
            keypress = True
    if not keypress:
        return

else:  #turn-based
    user_input = tdl.event.key_wait()
and change it to:
keypress = False
for event in tdl.event.get():
    if event.type == 'KEYDOWN':
        user_input = event
        keypress = True
    if event.type == 'MOUSEMOTION':
        mouse_coord = event.cell

if not keypress:
    return 'didnt-take-turn'
We now have a mouse_coord global variable that lets us know which tile the mouse pointer is on. This will be used by a new function that returns a string with the names of objects under the mouse:
def get_names_under_mouse():
    global visible_tiles

    #return a string with the names of all objects under the mouse
    (x, y) = mouse_coord
    names = [obj.name for obj in objects
             if obj.x == x and obj.y == y and (obj.x, obj.y) in visible_tiles]
    return ', '.join(names).capitalize()

Call it in render_all to draw the names on the panel, just above the bars:

panel.draw_str(1, 0, get_names_under_mouse(), bg=None, fg=colors.light_gray)
But wait! If you recall, in a turn-based game, the rendering is done only once per turn; the rest of the time, the game is blocked on tdl.event.key_wait. During this time (which is most of the time) the code we wrote above would simply not be processed! We switched to real-time rendering by replacing the tdl.event.key_wait call in handle_keys with the tdl.event.get loop. Don't forget to add tdl.setFPS(LIMIT_FPS) before the main loop to limit the game's speed.
That's it! You can move the mouse around to quickly know the names of every object in sight.
The whole code is available here.
TCO18 Round 4 Editorial
SumPyramid
In this problem we are considering two-dimensional arrangements of nonnegative integers into a pyramid-like shape in which each number is the sum of the two numbers that are diagonally below its place:
25
12 13
7 5 8
The task is to calculate the number of pyramids of L levels and top number T.
Observe that the whole pyramid is uniquely determined by the numbers in the bottom row. So let’s see how to calculate top number T based only on these numbers. If we denote the j-th number (0-based) on the r-th level from the top by A[r,j], we see that
T = A[0,0] = A[1,0] + A[1,1] = (A[2,0] + A[2,1]) + (A[2,1] + A[2,2]) = A[2,0] + 2*A[2,1] + A[2,2]
= A[3,0] + 3*A[3,1] + 3*A[3,2] + A[3,3] = …
So it’s easy to observe (and prove) the pattern that T is the sum of numbers from the r-th row multiplied by binomial coefficients:
T = \sum_{j=0}^r A[r,j] * binom(r,j).
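The identity is easy to verify empirically. The snippet below is a quick Python check (separate from the C++ solution later in this write-up) that computes the top both by direct simulation and by the binomial formula:

```python
from math import comb

def pyramid_top(bottom):
    """Top of the pyramid, by repeatedly summing adjacent pairs."""
    row = list(bottom)
    while len(row) > 1:
        row = [row[i] + row[i + 1] for i in range(len(row) - 1)]
    return row[0]

def top_via_binomials(bottom):
    """Top of the pyramid via T = sum of A[r,j] * binom(r, j)."""
    r = len(bottom) - 1
    return sum(v * comb(r, j) for j, v in enumerate(bottom))
```

For the example pyramid with bottom row (7, 5, 8), both give 25 = 7 + 2*5 + 8.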
That allows us to come up with a simple dynamic programming approach. Let D[i,t] will be the number of pyramids in which the partial sum starting from the i-th number in the last row is equal to t, i.e.
t = \sum_{j=i}^{L-1} A[L-1,j] * binom(L-1,j).
The answer to the problem is, of course, D[0,T]. To calculate D[i,t] we can iterate over all possible values of A[L-1,i]:
D[i,t] = \sum_{v=0}^t D[i+1, t – v * binom(L-1,i)].
Of course we stop iterating when t – v*binom(L-1,i) becomes negative. The size of array D is O(L*T), since we have L levels and each partial sum is bounded by the value of top number T. The calculation of each cell is done in O(T), so the total complexity of the algorithm is O(L*T^2).
With L and T bounded by 1000 this could be too much. But observe that binomial coefficients grow exponentially, and for big L the values binom(L-1,i) in the middle of the last row will be larger than T, thus the corresponding values A[L-1,i] must be zeros, and the loop for these values will be done in constant time.
Therefore the time complexity is much better, and closer analysis can show that it is in fact O(L*T).
Here is a sample implementation in C++ using recurrence with memoization:
const int N = 1010, MOD = 1000000007;
int cache[N][N], binom[N][N];
bool incache[N][N];

int rek(int i, int t) {
    if (i == L) {
        return t == 0;
    }
    if (incache[i][t]) {
        return cache[i][t];
    }
    int cnt = 0, val = t;
    while (val >= 0) {
        cnt += rek(i+1, val);
        cnt %= MOD;
        val -= binom[L-1][i];
    }
    incache[i][t] = true;
    return cache[i][t] = cnt;
}

binom[0][0] = 1;
for (int i = 1; i < L; ++i) {
    binom[i][0] = binom[i][i] = 1;
    for (int j = 1; j < i; ++j) {
        // note that for binom[i][j] > T we do not need exact value
        binom[i][j] = min(T+1, binom[i-1][j-1] + binom[i-1][j]);
    }
}
return rek(0, T);
Polyline Areas
In this problem we are given a rectilinear polyline in the plane and we need to find the area of each finite region that this polyline divides the plane into.
Parsing the program
The polyline is given as a simple program for the robot that moves on the plane and draws the polyline along its path. The robot understands three basic instructions: F (move 1 unit forward), L (turn 90 degrees left) and R (turn 90 degrees right). The program is constructed from these instructions and cycles (possibly nested). Moreover, we can assume that during execution of the program the robot will make at most 250,000 instructions, therefore it is feasible to construct the whole path.
We can do it recursively as follows:
string parse_program(int& i) {
    string s;
    while (i < polyline.size() && polyline[i] != ']') {
        int num = 1;
        if (isdigit(polyline[i])) {
            // parse number of repeats
            num = 0;
            while (isdigit(polyline[i])) {
                num = 10*num + polyline[i++] - '0';
            }
        }
        string comm;
        if (polyline[i] == '[') {
            // parse subprogram
            ++i;
            comm = parse_program(i);
            ++i;
        } else {
            comm = string(1, polyline[i++]);
        }
        for (int j = 0; j < num; ++j) {
            s += comm;
        }
    }
    return s;
}

int i = 0;
string full_polyline = parse_program(i);
The function parse_program reads subsequent commands until it reaches character ] (in case it was called recursively on a subprogram) or the end of the string (at the end of the program). If the command starts with a digit, it is a cycle, thus the function parses number of repeats, otherwise it sets number of repeats to 1. Then it parses an instruction, and if the next character is [ it recursively parses a subprogram. Then it repeats the generated subprogram specified number of times.
In the above program we parse the subprogram exactly once, and then we just append it in a loop. We could say that since the output string has a limited length, we could instead just parse the subprogram in a loop. But there is a catch here: the subprogram could result in an empty output string, and in this case it is easy to construct a test case which requires exponential time for such solution, e.g.: 10[10[10[10[10[…10[]]]]]].
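A direct Python port of the parser makes both points easy to check: the sample program from the next section expands as expected, and deeply nested empty cycles finish instantly because each subprogram is parsed exactly once:

```python
def parse_program(s, i=0):
    """Expand a robot program into basic F/L/R instructions.

    Returns (expanded_string, index_after_parsing); a Python port of
    the C++ parse_program above.
    """
    out = []
    while i < len(s) and s[i] != ']':
        num = 1
        if s[i].isdigit():
            num = 0
            while s[i].isdigit():      # parse number of repeats
                num = num * 10 + int(s[i])
                i += 1
        if s[i] == '[':                # parse the subprogram once...
            sub, i = parse_program(s, i + 1)
            i += 1                     # skip the closing ']'
        else:
            sub = s[i]
            i += 1
        out.append(sub * num)          # ...then repeat the result
    return ''.join(out), i
```

For example, "F2[FRFR3FRFL]RFF" expands to F followed by two copies of FRFRFFFRFL and then RFF, while nested empty cycles like "10[10[10[10[]]]]" expand to the empty string in linear time.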
Constructing the planar graph
We got a string consisting just of basic instructions, which allows us to simulate the path of the robot. This path may cross and overlap itself arbitrarily, so we need to take care of it. Suppose that the robot starts in point (0,0) and heads to the right. During the simulation we will maintain its current position pos and direction dir. We will also store in map edges all the positions it visited together with information in which directions from these positions we have parts of the polyline. Since there are only four directions (right, up, left, down), we can number them from 0 to 3 and in each position store them in a bitmask:
typedef pair<int,int> pii;
const int DX[] = {1,0,-1,0}, DY[] = {0,1,0,-1};
map<pii, int> edges;
pii pos;
int dir = 0;
for (char c : full_polyline) {
    if (c == 'F') {
        edges[pos] |= 1 << dir; // remember outgoing edge
        pos = pii(pos.first + DX[dir], pos.second + DY[dir]); // move forward
        edges[pos] |= 1 << (dir^2); // remember ingoing edge
    } else if (c == 'R') {
        dir = (dir+3) % 4; // turn right
    } else {
        dir = (dir+1) % 4; // turn left
    }
}
We can treat the structure we have just built as a planar graph. All visited positions are its vertices and for each vertex we maintain edges (at most 4) which connect it to the neighbour vertices.
Calculating the areas of the faces
Now we would like to calculate the area of each finite face of this planar graph. Observe that to do this it is enough to traverse the boundary of a face and use any algorithm for calculating the area of a polygon. How could we traverse a face if we draw this graph on a paper? We could start from any edge along this face and then go along the boundary counter-clockwise. Since we need also to track the direction we are moving, we will maintain a directed edge (each edge of the planar graph could be directed in two ways). In each vertex we must change the edge, remembering that we cannot cross the polyline. This can be done by selecting the next edge in the clockwise direction from this vertex. We repeat the process until we hit the edge we started from. On the following picture there is a planar graph built from a polyline encoded by a program F2[FRFR3FRFL]RFF and traversal of one of its faces:
Note that the boundary of a face is a polygon, but not necessarily simple (it can touch and overlap itself). But this should not be a problem for standard algorithms for calculating the area.
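Indeed, the standard shoelace (Green's) formula only needs the ordered vertex list of the traversal, so self-touching boundaries are handled for free. A minimal version for reference:

```python
def signed_area(vertices):
    """Signed area of a closed polygon via the shoelace formula.

    Counter-clockwise boundaries give a positive area; the polygon
    does not need to be simple.
    """
    area = 0
    n = len(vertices)
    for k in range(n):
        x1, y1 = vertices[k]
        x2, y2 = vertices[(k + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2
```

A counter-clockwise unit square gives +1, and the same square traversed clockwise gives -1, which is exactly the sign convention used below to discard the infinite face.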
In order to do the above process for all faces, we could just keep track of visited directed edges and iteratively pick up non-visited edge, and go around the border. Note that this way we also calculate the area of the infinite face of the planar graph. But we can easily remove it by noting that this area will be the largest one, or if we calculate the signed area, this area will have non-positive sign.
vector<long long> ans;
set<pair<pii,int> > vis;
for (auto i : edges) {
    for (int d = 0; d < 4; ++d) if (i.second & 1 << d) {
        pii pos = i.first;
        int dir = d;
        if (vis.find(make_pair(pos, dir)) != vis.end()) {
            continue;
        }
        // handle non-visited directed edge (pos, dir)
        long long area = 0;
        while (vis.find(make_pair(pos, dir)) == vis.end()) {
            vis.insert(make_pair(pos, dir));
            pii npos = pii(pos.first + DX[dir], pos.second + DY[dir]); // move forward
            area += (pos.first - npos.first) * pos.second; // use Green's formula to update signed area
            for (int j = 0; j < 4; ++j) { // find next edge in clockwise direction along npos
                int ndir = ((dir^2)+4-1-j) % 4;
                if (edges[npos] & 1 << ndir) {
                    dir = ndir;
                    break;
                }
            }
            pos = npos;
        }
        if (area > 0) {
            ans.push_back(area);
        }
    }
}
sort(ans.begin(), ans.end());
if (ans.size() > 200) {
    ans.erase(ans.begin()+100, ans.end()-100);
}
return ans;
The time complexity of the algorithm is O(L log L), where L denotes the length of the full polyline. It could be sped up to O(L) by using hash tables for edges and vis.
The above algorithm could be generalized to calculate areas of faces in any planar graph. However, a special form of graph in this problem made this task much easier. In the general case we would have to sort all edges along the vertices. And also if the graph is disconnected (which here cannot happen, since it is built from a single polyline), we would have to handle a case of a face inside another face.
SequenceEvolution
In this problem we start from a sequence A of length N generated by a linear pseudorandom generator, and we perform a series of steps on it. A single step removes the first B elements of the sequence and appends the sum of these elements to the end of the sequence. We would like to know the first and the last element of the sequence after S steps.
Let’s look at an example for N=5 and B=2. We denote the subsequent elements appended during the evolution of the sequence by E[N], E[N+1], and so on:
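The table itself can be regenerated with a short Python simulation (the helper name evolve is mine) that records each appended element as a pair (start, length) over the cyclically repeated original sequence; this pair representation is formalized just below:

```python
from collections import deque

def evolve(N, B, steps):
    # Each element is a (start, length) pair over the unwound sequence
    # A[i] = A[i mod N]; one step sums the first B pairs into a new one.
    elems = deque((i, 1) for i in range(N))
    appended = []
    for _ in range(steps):
        s, k = elems.popleft()
        for _ in range(B - 1):
            if not elems:
                break
            k += elems.popleft()[1]
        elems.append((s, k))
        appended.append((s, k))
    return appended

print(evolve(5, 2, 4))  # E[5]..E[8]: [(0, 2), (2, 2), (4, 3), (2, 5)]
```

So, for instance, E[7] = A[4] + A[0] + A[1], the sum of 3 consecutive elements of the cyclic sequence starting at index 4.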
The first observation is that each appended element will be a sum of consecutive elements of the original sequence (treated as a cycle). To be more formal: if we unwind the original sequence A, that is for indices i >= N we define A[i] = A[i mod N], then each appended element E[i] can be represented as a pair (s, k), which denotes that this element is the sum of k elements from the original sequence A, starting from the s-th element, i.e. E[i] = A[s] + A[s+1] + … + A[s+k-1].
Therefore the solution to the problem consists of two parts:
- finding pairs (s, k) for the first and the last element of the final sequence,
- calculating the sums A[s] + A[s+1] + … + A[s+k-1] for these pairs.
Linear recurrence and fast matrix exponentiation
The second part of the solution is more standard, so we begin with it. The sequence A is generated using the linear recurrence A[i] = (c1*A[i-1] + c2*A[i-2] + add) mod (10^9 + 7), and this can be rewritten using matrix notation: a fixed transition matrix maps the state (A[i+1], A[i], 1) to (A[i+2], A[i+1], 1).
If we unwind this recurrence, the state after i steps is simply the i-th power of the transition matrix applied to the initial state.
This allows us to calculate the value of the i-th element in the original sequence A in time O(log i) by using fast matrix exponentiation.
But the same technique can be applied to calculate the sum of any consecutive elements of the sequence. First observe that each such sum can be obtained using only prefix sums. If we denote P[i] = A[0] + A[1] + … + A[i-1], then we have
A[s] + A[s+1] + … + A[s+k-1] = P[s+k] – P[s]
And calculating prefixes P[i] can be also done with a slightly bigger matrix:
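Reconstructed from the reference implementation below (init_mat builds the matrix and sum reads off its third row), the transition that advances both the recurrence and the prefix sum P[i] = A[0] + … + A[i-1] in one step is:

```latex
\begin{pmatrix} A[i+2] \\ A[i+1] \\ 1 \\ P[i+1] \end{pmatrix}
=
\begin{pmatrix}
c_1 & c_2 & \mathrm{add} & 0 \\
1   & 0   & 0            & 0 \\
0   & 0   & 1            & 0 \\
0   & 1   & 0            & 1
\end{pmatrix}
\begin{pmatrix} A[i+1] \\ A[i] \\ 1 \\ P[i] \end{pmatrix}
\pmod{10^9 + 7}
```

Starting from the state (A[1], A[0], 1, P[0] = 0) and raising this matrix to the k-th power, the last component of the result is P[k].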
The following C++ code contains a full implementation of the above ideas. Later we will use the function sum_elems, which works in O(log N) time:
// REP(i,n) iterates i over 0..n-1; c1, c2, add, a0, a1 and N are the generator parameters from the input.
typedef long long ll;
const int MOD = 1000000007, M = 4;

struct mat_t { int m[M][M]; };

mat_t mat_mult(const mat_t& A, const mat_t& B) {
    mat_t C;
    REP(i,M) REP(j,M) C.m[i][j] = 0;
    REP(i,M) REP(k,M) REP(j,M) {
        C.m[i][j] = (C.m[i][j] + ll(A.m[i][k]) * B.m[k][j]) % MOD;
    }
    return C;
}

mat_t mat_pow(mat_t A, ll n) {
    mat_t ans;
    REP(i,M) REP(j,M) ans.m[i][j] = i==j;
    while (n) {
        if (n&1) ans = mat_mult(ans, A);
        A = mat_mult(A, A);
        n /= 2;
    }
    return ans;
}

mat_t MAT;

void init_mat() {
    REP(i,M) REP(j,M) MAT.m[i][j] = 0;
    MAT.m[0][0] = c1; MAT.m[0][1] = c2; MAT.m[0][2] = add;
    MAT.m[1][0] = MAT.m[2][2] = MAT.m[3][1] = MAT.m[3][3] = 1;
}

int sum(ll k) {  // sum_{i=0..k-1} A[i] (for k <= N)
    mat_t mat = mat_pow(MAT, k);
    int ans = (ll(mat.m[3][0]) * a1 + ll(mat.m[3][1]) * a0 + mat.m[3][2]) % MOD;
    return ans;
}

int sum_cycle(ll k) {  // sum_{i=0..k-1} A[i % N]
    return (ll(sum(N)) * (k/N) + sum(k % N)) % MOD;
}

int sum_elems(ll s, ll k) {  // sum_{i=s..s+k-1} A[i % N]
    return (sum_cycle(s+k) + MOD - sum_cycle(s)) % MOD;
}
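A Python transcription of the transition matrix built by init_mat above, with arbitrary demo parameters, which can be checked against direct simulation:

```python
MOD = 10**9 + 7

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) % MOD
             for j in range(n)] for i in range(n)]

def mat_pow(A, e):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    while e:
        if e & 1:
            R = mat_mult(R, A)
        A = mat_mult(A, A)
        e >>= 1
    return R

def prefix_sum(k, c1, c2, add, a0, a1):
    # P[k] = A[0] + ... + A[k-1] for A[i] = c1*A[i-1] + c2*A[i-2] + add (mod MOD),
    # read off from row 3 of the k-th power of the transition matrix.
    M = [[c1, c2, add, 0],
         [1,  0,  0,   0],
         [0,  0,  1,   0],
         [0,  1,  0,   1]]
    m = mat_pow(M, k)
    return (m[3][0] * a1 + m[3][1] * a0 + m[3][2]) % MOD

print(prefix_sum(3, 3, 5, 7, 2, 11))  # 63 = 2 + 11 + (3*11 + 5*2 + 7)
```

The parameter values 3, 5, 7, 2, 11 are illustrative only; any values give a prefix sum that agrees with iterating the recurrence directly.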
Compressed simulation of the process
The first part, i.e. finding the indices of the elements, is quite nontrivial if we want to do it fast. In a naive solution we could simply simulate the process, maintaining pairs (s, k) for each element E[i].
Initially we put pairs (i, 1) for 0 <= i < N on a deque of pairs elems. Then we simulate S steps:
struct elem_t { ll s, k; };

deque<elem_t> elems;
for (int i = 0; i < N; ++i) {
    elems.push_back(elem_t{ i, 1 });
}
while (S-- > 0) {
    ll s = elems.front().s, k = elems.front().k;  // ll here: k can exceed the int range
    elems.pop_front();
    for (int i = 1; i < B && !elems.empty(); ++i) {
        k += elems.front().k;
        elems.pop_front();
    }
    elems.push_back(elem_t{ s, k });
}
elem_t ef = elems.front(), eb = elems.back();
return vector<int>{ sum_elems(ef.s, ef.k), sum_elems(eb.s, eb.k) };
We use here an invariant that for two consecutive pairs (s1, k1) and (s2, k2) we have (s1 + k1) mod N = s2.
Unfortunately this results in a solution of time complexity O(S*B) which is not acceptable. To speed it up we observe that many consecutive pairs will have the same value of k, and thus we can compress them and handle them together.
We will gather such elements into blocks. Each block is a triple (n, s, k) which denotes that it consists of n consecutive elements of a sequence E described previously by pairs (s, k), (s+k, k), (s+2*k, k), …, (s+(n-1)*k, k).
Now to perform a step we must consider three cases:
- There is exactly one block, and n=1 in this block. Then there is only one element in the sequence and each following step will not change it; we can stop.
- The first block encodes at least B elements. Then we can perform floor(n/B) steps at the same time. We remove floor(n/B)*B elements from the first block, and we add a block (floor(n/B), s, k*B).
- Else we simulate one step by constructing a block with one element.
After that we get block representation of the final sequence and we can easily recover indices for the first and the last element.
struct block_t { ll n, s, k; };

deque<block_t> blocks;
blocks.push_back(block_t{ N, 0, 1 });
while (S > 0) {
    block_t b = blocks.front();
    if (blocks.size() == 1 && b.n == 1) { break; }  // Case 1
    blocks.pop_front();
    if (b.n >= B) {  // Case 2
        ll cnt = min(S, b.n/B);
        if (b.n - cnt*B) {
            blocks.push_front(block_t{ b.n - cnt*B, b.s + cnt*B * b.k, b.k });
        }
        blocks.push_back(block_t{ cnt, b.s, b.k*B });
        S -= cnt;
    } else {  // Case 3
        block_t c = block_t{ 1, b.s, b.n*b.k };
        ll x = b.n;
        while (x < B && !blocks.empty()) {
            b = blocks.front();
            blocks.pop_front();
            ll cnt = min(B - x, b.n);
            c.k += cnt * b.k;
            if (cnt != b.n) {
                blocks.push_front(block_t{ b.n - cnt, b.s + cnt * b.k, b.k });
            }
            x += cnt;
        }
        blocks.push_back(c);
        S--;
    }
}
block_t bf = blocks.front(), bb = blocks.back();
return vector<int>{ sum_elems(bf.s, bf.k), sum_elems(bb.s + (bb.n-1)*bb.k, bb.k) };
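For small parameters the compressed simulation can be cross-checked against the naive one. A Python sketch (both routines transcribed from the C++ above; start indices are compared modulo N in the check, since an element (s, k) over the cyclic sequence is the same as (s mod N, k)):

```python
from collections import deque

def naive(N, B, S):
    # O(S*B): one (s, k) pair per element of the evolving sequence.
    elems = deque((i, 1) for i in range(N))
    for _ in range(S):
        s, k = elems.popleft()
        for _ in range(B - 1):
            if not elems:
                break
            k += elems.popleft()[1]
        elems.append((s, k))
    return elems[0], elems[-1]

def compressed(N, B, S):
    # A block (n, s, k) encodes n elements (s, k), (s+k, k), ..., (s+(n-1)*k, k).
    blocks = deque([(N, 0, 1)])
    while S > 0:
        n, s, k = blocks[0]
        if len(blocks) == 1 and n == 1:
            break                              # case 1: nothing changes any more
        blocks.popleft()
        if n >= B:                             # case 2: batch min(S, n//B) steps
            cnt = min(S, n // B)
            if n - cnt * B:
                blocks.appendleft((n - cnt * B, s + cnt * B * k, k))
            blocks.append((cnt, s, k * B))
            S -= cnt
        else:                                  # case 3: build a single element
            ck, x = n * k, n
            while x < B and blocks:
                bn, bs, bk = blocks.popleft()
                take = min(B - x, bn)
                ck += take * bk
                if take != bn:
                    blocks.appendleft((bn - take, bs + take * bk, bk))
                x += take
            blocks.append((1, s, ck))
            S -= 1
    n0, s0, k0 = blocks[0]
    n1, s1, k1 = blocks[-1]
    return (s0, k0), (s1 + (n1 - 1) * k1, k1)
```

Running both over a grid of small N, B, S values and asserting that the first and last elements agree is a cheap regression test for the case analysis.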
How fast is the above algorithm? In case 3 we produce one block with n=1. In case 2 we remove a block of some n=x, and produce a new block with n <= floor(x/B) and possibly leave smaller n in the first block. But after that we must go to case 3, which removes the first block and produces a block with n=1. So we can say that cases 2 and 3 combined together remove block with some n=x and produce block with n <= floor(x/B) and possibly block with n=1.
Thus during the algorithm we will have exactly one big block (initially with n = N), possibly followed by a block with n = 1. As long as B >= 2, the n of this big block decreases exponentially with every combined case 2+3, so there can be at most O(log N) such decreases.
Therefore this part works in O(log N) time, and this is the time complexity of the whole algorithm.
There is one tricky case though. We assumed B >= 2 for the exponential decrease, and in fact for B = 1 the above algorithm works in O(S/N) time. But in this case each step just moves the first element of the sequence to the end, so it can easily be solved separately by calculating the indices after S such moves:
if (B == 1) {  // special case
    return vector<int>{ sum_elems(S, 1), sum_elems(S+N-1, 1) };
}
A namespace that contains all classes related to particle generation.
A function that generates particles in every cell at the specified particle_reference_locations. The total number of particles added to the particle_handler object is the number of locally owned cells of the triangulation times the number of locations in particle_reference_locations. An optional mapping argument can be used to map from particle_reference_locations to the real particle locations.
Definition at line 30 of file generators.cc.
#include "petscdmda.h"
PetscErrorCode DMDAGetOwnershipRanges(DM da, const PetscInt *lx[], const PetscInt *ly[], const PetscInt *lz[])

Not Collective
Note: these correspond to the optional final arguments passed to DMDACreate(), DMDACreate2d(), DMDACreate3d()
In Fortran one must pass in arrays lx, ly, and lz that are long enough to hold the values; the required lengths are the sixth, seventh and eighth arguments returned from DMDAGetInfo().
In C you should not free these arrays, nor change the values in them. They will only have valid values while the DMDA they came from still exists (has not been destroyed).
These numbers are NOT multiplied by the number of dof per node.
TCO Semifinals 2 Editorial
The competitors of this round are _aid, jcvb, Kankuro, kcm1700, krijgertje, ksun48, Petr, and qwerty787788. The easy is a tricky greedy problem that is easy to get wrong because of cases that need to be carefully considered. The medium is a constructive problem that is hard to analyze at first but has some solutions that are both easy to implement and verify. The last is a very tricky constructive problem that has a lot of small incremental steps that contestants can make progress on, but getting a full solution that covers all cases is hard.
Congratulations to the advancers:
- qwerty787788
- ksun48
- Petr
- _aid
NextLuckyNumber (misof):
In this problem, we are given integers N, K, d, and we want to find the smallest integer M such that M > N and M has exactly K occurrences of the digit d.
There are two main cases to be aware of, if d is zero, or d is nonzero.
Let's first construct the smallest number that consists of exactly K occurrences of the digit d. This is just the number ddd…d (K times), except when d is zero, in which case we prepend a 1. We can check if this smallest number is strictly bigger than N, and if so, we can return it.
Now, let's try to see if we can get an answer that is the same length as N. We can try all possibilities for the length of the common prefix (i.e., the number of most significant digits they share), then all possibilities for the next digit in the new number. Once those are fixed, we have the guarantee that the new number is bigger, and we can greedily fill in the missing digits (i.e., with 0s plus the number of ds still needed if d is nonzero; otherwise, with 1s plus the number of 0s we need). If we iterate from the longest common prefix to the shortest, and over the next digit from smallest to largest, we iterate through candidates in increasing order, so we can greedily return the first one that we can feasibly fill.
If none of these options work, then M must have more digits than N. Since we already handled the first case earlier, we know that M must be longer than N by exactly one digit. We can start M with 1 and greedily fill in the rest (using a similar strategy to the second case).
An implementation is shown below:
class NextLuckyNumber:
    def getTicket(self, lastTicket, age, digit):
        lo = str(digit) * age
        if digit == 0:
            lo = "1" + lo
        if int(lo) > lastTicket:
            return lo
        s = str(lastTicket)
        for prefix in range(len(s)-1, -1, -1):
            have = s[:prefix].count(str(digit))
            for bdig in range(int(s[prefix])+1, 10):
                nhave = have + int(bdig == digit)
                if nhave <= age and nhave + len(s)-1-prefix >= age:
                    need = max(0, age - nhave)
                    rem = len(s) - 1 - prefix
                    if digit == 0:
                        return s[:prefix] + str(bdig) + "0" * need + "1" * (rem - need)
                    return s[:prefix] + str(bdig) + "0" * (rem - need) + str(digit) * need
        if digit == 0:
            return int("1" + "0" * age + "1" * (len(s) - age))
        return int("1" + "0" * (len(s) - age + int(digit == 1)) + str(digit) * (age - int(digit == 1)))
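For testing the greedy, a brute-force reference is handy (plain Python, feasible only when the answer is close to N; the function name is mine, not from the problem statement):

```python
def next_lucky(n, k, d):
    # Smallest m > n whose decimal representation contains
    # exactly k occurrences of digit d -- by direct search.
    m = n + 1
    while str(m).count(str(d)) != k:
        m += 1
    return m

print(next_lucky(99, 2, 1))   # 101
print(next_lucky(5, 1, 0))    # 10
print(next_lucky(110, 2, 1))  # 112
```

Comparing the greedy's output (converted to int) against this on random small inputs catches most case-analysis mistakes.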
VennDiagrams (misof):
In this problem, we want to construct a Venn diagram with n different categories: a drawing in which each intersection of categories corresponds to some nice connected area.
Drawing the smallest Venn diagram is surprisingly hard, but in this problem we have a lot of freedom, so we can come up with a construction that is easy to implement. If we could use a rectangle of any dimensions, one easy 2-approximation is to construct a 2 x (2^n) bitmap where the first row is {0, 1, 2, …, 2^n – 2, 2^n – 1} and the second row is full of (2^n – 1). All exact intersections of sets correspond to pixels in the first row (and to the entire bottom row), while each superset of sets contains the entire second row and some subset of the first row, so it looks like a comb.
When we have to fit our solution into a 50×50 matrix, all we need is to be a bit more creative with the shape of the "backbone".
In the solution below, we have the following pattern for the (2^n – 1) cells:
*.......... *********** *.......... *.......... *.......... *********** *.......... *.......... *.......... *********** *..........
(full left column + full rows that are 4 apart) Remember that this pattern will be put on an infinite 2d grid with all zeros, so the zero cells are still connected.
This leaves plenty of room to attach all the other 1-pixel sets and to make sure that they can't form holes.
There are better constructions in terms of guaranteed approximation ratio, but the constraints were loose enough that those were not needed.
class VennDiagrams:
    def construct(self, N):
        ret = [[0 for __ in xrange(50)] for ___ in xrange(50)]
        tot = (1 << N) - 1
        for i in xrange(50):
            ret[i][0] = tot
        for i in xrange(0, 50, 4):
            for j in xrange(0, 50):
                ret[i][j] = tot
        cnt = 0
        for i in xrange(1, 50, 2):
            for j in xrange(1, 50):
                ret[i][j] = 0 if cnt >= (1 << N) else cnt
                cnt += 1
        ans = [50, 50]
        for i in xrange(50):
            ans += ret[i]
        return ans
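One cheap sanity check of the construction is that every intersection pattern 0 … 2^N - 1 occurs somewhere in the grid. A Python 3 transcription (xrange replaced by range, and the leading frame dimensions dropped) makes that easy to assert:

```python
def construct(N):
    # Cell value = bitmask of the categories covering that cell.
    ret = [[0] * 50 for _ in range(50)]
    tot = (1 << N) - 1
    for i in range(50):
        ret[i][0] = tot                      # full left column
    for i in range(0, 50, 4):
        for j in range(50):
            ret[i][j] = tot                  # full rows 4 apart
    cnt = 0
    for i in range(1, 50, 2):                # odd rows carry the single pixels
        for j in range(1, 50):
            ret[i][j] = 0 if cnt >= (1 << N) else cnt
            cnt += 1
    return ret

for N in range(1, 9):
    cells = {v for row in construct(N) for v in row}
    assert cells == set(range(1 << N))       # every subset pattern appears
```

The odd rows provide 25 * 49 = 1225 cells for the single-pixel sets, comfortably more than 2^N for the checked range.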
RearrangingBoxes (monsoon):
You originally have A x B x H cubes arranged in a cuboid with A rows, B columns, and H height. You would like to remove K of them so that the resulting arrangement is still connected and has the same surface area as the original cuboid (i.e. S = 2(AB + AH + BH)). Each cube’s bottom face must also touch the ground or another cube directly.
The idea is that we start from the A x B x H cuboid and we iteratively remove cubes from it, at all times maintaining the constant surface area S = 2(AB + AH + BH). Assume wlog that A ≤ B.
First of all, observe that in theory the minimal volume we can achieve is ceil((S - 2)/4) (in particular, if S - 2 is divisible by 4, it is achieved by a 1 x 1 x (S - 2)/4 column of cubes).
It will turn out that we can obtain this minimum while satisfying the task constraints (that is, the solid must be formed from towers in a limited space), except for the special case when A = 1 and B is even (it can be shown that the minimum is then larger by floor((H - 1)/2)).
The rough idea is simple: if we treat the solid as a graph (cubes are vertices, and two vertices are connected by an edge if the corresponding cubes share a face), then a solid whose graph is a tree of V cubes has surface area exactly 6V - 2(V - 1) = 4V + 2, so we achieve the minimal volume not only for a line graph of ceil((S - 2)/4) cubes, but also for any tree of that size.
Thus if V - K is smaller than this minimal volume, there is no solution. Otherwise, we will proceed by removing cubes.
Observe that removing a cube that contains exactly one vertex of the cuboid reduces the volume by 1 and does not change the surface area. In fact, in the same way we can remove from the cuboid a whole (A - 1) x (B - 1) x (H - 1) sub-cuboid, removing cubes one by one in layers. This leaves us a "floor" of size A x B and two "walls" of total size (A + B - 1) x (H - 1).
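This invariance is easy to verify numerically. A small Python sketch (the tower-height representation and helper name are mine, matching the sol[][] array used in the reference solution later) that computes the surface area of a solid given per-cell tower heights and checks that chipping a corner cube off a cuboid leaves the area unchanged:

```python
def surface_area(heights):
    # heights[a][b] = number of stacked unit cubes at floor cell (a, b)
    A, B = len(heights), len(heights[0])
    area = 0
    for a in range(A):
        for b in range(B):
            h = heights[a][b]
            if h == 0:
                continue
            area += 2  # top and bottom faces of the tower
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                nh = heights[na][nb] if 0 <= na < A and 0 <= nb < B else 0
                area += max(0, h - nh)  # exposed side faces
    return area

full = [[4] * 3 for _ in range(2)]      # a 2 x 3 x 4 cuboid: area 2*(6+8+12) = 52
chipped = [row[:] for row in full]
chipped[0][0] -= 1                      # remove one corner cube
print(surface_area(full), surface_area(chipped))  # 52 52
```

The corner cube exposes two new side faces on its neighbors while hiding two of its own, so the total is preserved.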
Next observe that we can remove a wall cube that has three neighbors (again, this does not change the surface area). Thus we can totally remove floor((A + B - 2)/2) towers of size H - 1 from the walls (we just remove every second tower). So if A + B - 1 is odd, every remaining tower is connected only to the floor. (We deal with A + B - 1 even later.)
Next we can apply the same idea to the (A - 1) x (B - 1) part of the floor, removing every second segment of length B - 1. If A - 1 is even we are done: the remaining cubes form a tree graph, so the volume is in fact minimal (equal to ceil((S - 2)/4)).
If A - 1 is even but B - 1 is odd, we can do the symmetric thing (see the left picture below with A = 5, B = 7, where we observe the warehouse from the top; light gray cells denote towers of height 1, dark gray cells denote towers of height H).
If A - 1 and B - 1 are both odd, we can create an almost-tree like in the right picture below (an almost-tree is the best we can get here, since A and B are even, thus S is divisible by 4):
In this picture, the dark squares are towers of height H, the gray squares are towers of height 1, and the white squares are towers of height 0.
So the only case left is what to do with two towers of size H next to each other when A + B – 1 is even.
If A = B (and thus B is even) we cannot do anything (thus this is the special case of a greater minimal volume we have mentioned before).
Otherwise wlog let's assume that B is even and we have the following picture (with A = B, B = 4) where two towers of height H are next to each other:
Call these two towers TL (left) and TR (right). If A ≥ 5 or B ≥ 4 we have at least one tower (call it T) of height 1 which does not have a neighboring tower of height H.
If we remove two cubes from TR and add one cube to T, the volume reduces by 1, and the surface area doesn’t change.
If this leaves TR of size 2, then it means that was even, thus S divisible by 4 and we are done. This needs special treatment for A = 3 and B = 2, but it also can be done.
Time complexity of the algorithm is O(AB).
Sample code:
public class RearrangingBoxes {
    public int[] rearrange(int A, int B, int H, long K) {
        long V = (long)A*B*H;
        boolean swapped = false;
        if (A > B) { int temp = A; A = B; B = temp; swapped = true; }
        long Area = 2*((long)A*B + (long)A*H + (long)B*H);
        long minV = ((Area-2) + 2)/4;
        if (A == 1 && B%2 == 0) { minV += (H-1)/2; }
        if (V-K < minV) { return new int[0]; }
        int[][] sol = new int[A][B];
        for (int a=0; a<A; ++a) {
            for (int b=0; b<B; ++b) { sol[a][b] = H; }
        }
        if (A > 1) {
            long w = Math.min(H-1, K / ((A-1)*(B-1)));
            for (int a=0; a<A-1; ++a) {
                for (int b=0; b<B-1; ++b) { sol[a+1][b+1] -= w; }
            }
            K -= (A-1)*(B-1)*w;
            if (w < H-1) {
                for (int a=A-2; a>=0 && K > 0; --a) {
                    for (int b=B-2; b>=0 && K > 0; --b) { sol[a+1][b+1]--; K--; }
                }
            }
        }
        int cols = (A+B-2)/2;
        for (int i=0; i<cols && K > 0; ++i) {
            long ile = Math.min(H-1, K);
            K -= ile;
            int a = i<A/2 ? A-2-2*i : 0;
            int b = i<A/2 ? 0 : 2*(i-A/2)+1+(A+1)%2;
            sol[a][b] -= ile;
        }
        cols = (B-1)/2;
        for (int i=0; i<cols && K > 0; ++i) {
            long ile = Math.min(A-1, K);
            K -= ile;
            for (int j=0; j<ile; ++j) { sol[A-1-j][1+2*i]--; }
        }
        if (B%2 == 0) {
            int rows = (A-1)/2;
            for (int i=0; i<rows && K > 0; ++i) { sol[1+2*i][B-1]--; K--; }
        }
        if (A == 2 && (A+B-1)%2 == 0) {
            if (K > 0) {
                assert(sol[0][B-1] == H);
                if ((H-1)%2 == 1) {
                    if (B >= 5) {
                        sol[1][B-3]--; sol[1][B-4]++; sol[0][B-1]--; sol[1][B-1]++;
                    } else {
                        assert(B == 3);
                        sol[0][0]--; sol[0][B-1]--; sol[1][1]++; sol[1][B-1]++;
                    }
                }
                int ile = (sol[0][B-1]-1) / 2;
                sol[0][B-1] -= ile;
                sol[1][B-1] += ile;
                for (int i=0; i < ile && K > 0; ++i) { sol[0][B-1]--; K--; }
            }
        } else if (A >= 3 && (A+B-1)%2 == 0) {
            int ile = (sol[0][B-1]-1) / 2;
            for (int i=0; i < ile && K > 0; ++i) { sol[0][B-1] -= 2; sol[A-1][B-1]++; K--; }
        } else {
            assert(K==0);
        }
        int[] answer = new int[A*B];
        for (int i=0; i<A*B; ++i) {
            answer[i] = swapped ? sol[i%A][i/A] : sol[i/B][i%B];
        }
        return answer;
    }
}
I encountered a problem similar to the one in Bug 188114, but it is not fixed by the last patch there.
Environment for reproduction:
# python -c "import matplotlib as m; print(m.__version__); print(m.__path__[0]);"
1.4.0
/usr/local/lib/python2.7/site-packages/matplotlib
# python -c "import numpy as m; print(m.__version__); print(m.__path__[0]);"
1.9.1
/usr/local/lib/python2.7/site-packages/numpy
# python -c "import wx as m; print(m.__version__); print(m.__path__[0]);"
2.8.12.1
/usr/local/lib/python2.7/site-packages/wx-2.8-gtk2-unicode/wx
/usr/local/share/examples/py-matplotlib % python user_interfaces/embedding_in_wx5.py
Traceback (most recent call last):
File "user_interfaces/embedding_in_wx5.py", line 7, in <module>
import matplotlib as mpl
File "/usr/local/lib/python2.7/site-packages/matplotlib/__init__.py", line 179, in <module>
from matplotlib.cbook import is_string_like
File "/usr/local/lib/python2.7/site-packages/matplotlib/cbook.py", line 32, in <module>
import numpy as np
File "/usr/local/lib/python2.7/site-packages/numpy/__init__.py", line 170, in <module>
from . import add_newdocs
File "/usr/local/lib/python2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/usr/local/lib/python2.7/site-packages/numpy/lib/__init__.py", line 18, in <module>
from .polynomial import *
File "/usr/local/lib/python2.7/site-packages/numpy/lib/polynomial.py", line 19, in <module>
from numpy.linalg import eigvals, lstsq, inv
File "/usr/local/lib/python2.7/site-packages/numpy/linalg/__init__.py", line 51, in <module>
from .linalg import *
File "/usr/local/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 29, in <module>
from numpy.linalg import lapack_lite, _umath_linalg
ImportError: /lib/libgcc_s.so.1: version GCC_4.6.0 required by /usr/local/lib/gcc48/libgfortran.so.3 not found
user_interfaces/embedding_in_wx2.py through embedding_in_wx4.py work fine
Maintainer CC'd
The problem is solved by adding a global definition in libmap.conf:
libgcc_s.so.1 gcc48/libgcc_s.so.1
But as I understand it, defining this globally is not a very good idea. Is it possible to constrain the libmap entry to specific programs or paths?
(In reply to Andrey Fesenko from comment #2)
[/usr/local/lib/python2.7/site-packages/wx-2.8-gtk2-unicode/]
libgcc_s.so.1 gcc48/libgcc_s.so.1
[/usr/local/lib/python2.7/site-packages/wx-2.8-gtk2-ansi/]
libgcc_s.so.1 gcc48/libgcc_s.so.1
Both entries are necessary, since different programs may use either the ANSI or the Unicode build.
Created attachment 155565 [details]
patch-Makefile x11-toolkits/py-wxPython28
Adds USES=fortran, which fixes this bug.
Created attachment 155570 [details]
patch-Makefile x11-toolkits/py-wxPython28
fix PORTREVISION
QA results would help here:
- portlint -AC output (as attachment)
- poudriere testport (or bulk -t) output (as attachment)
Created attachment 155687 [details]
poudriere bulk -t log py-wxPython28
Port x11-toolkits/py-wxPython28 has Maintainer: python@FreeBSD.org
The bug was first seen with math/py-matplotlib, whose Maintainer is mainland@apeiron.net
x11-toolkits/py-wxPython28 # portlint -AC
WARN: Makefile: [125]: do not use muted INSTALL_foo commands (i.e., those that start with '@'). These should be printed.
WARN: Makefile: Consider adding support for a NLS knob to conditionally disable gettext support.
WARN: Makefile: [19]: possible direct use of command "python" found. use ${PYTHON_CMD} instead.
WARN: Makefile: [0]: possible direct use of command "echo" found. use ${ECHO_CMD} or ${ECHO_MSG} instead.
WARN: Makefile: [133]: possible use of "${CHMOD}" found. Use @(owner,group,mode) syntax or @owner/@group operators in pkg-plist instead.
WARN: Makefile: for new port, make $FreeBSD$ tag in comment section empty, to make SVN happy.
WARN: Makefile: new ports should not set PORTREVISION.
WARN: Makefile: Consider defining LICENSE.
WARN: Makefile: "PKGNAMESUFFIX" has to appear earlier.
WARN: /usr/ports/x11-toolkits/py-wxPython28/files/patch-config.py: patch was not generated using ``make makepatch''. It is recommended to use ``make makepatch'' to ensure proper patch format.
WARN: /usr/ports/x11-toolkits/py-wxPython28/files/patch-setup.py: patch was not generated using ``make makepatch''. It is recommended to use ``make makepatch'' to ensure proper patch format.
0 fatal errors and 11 warnings found.
If you want, I can try to remove WARNINGS.
There are also related ports: x11-toolkits/py-wxPython28-common (which has a PORTREVISION) and x11-toolkits/py-wxPython28-unicode (which does not). What is the best way to standardize them?
Created attachment 156206 [details]
patch-Makefile and files x11-toolkits/py-wxPython28
Update patch for port, Fix warnings portlint -AC
Created attachment 156207 [details]
poudriere bulk -t log py-wxPython28
update poudriere log
Created attachment 156208 [details]
poudriere portlint -AC py-wxPython28
actual portlint -AC py-wxPython28
To reproduce:
Install (with default options):
x11-toolkits/py-wxPython28-unicode
math/py-matplotlib
Then run:
python /usr/local/share/examples/py-matplotlib/user_interfaces/embedding_in_wx5.py
This test crashes.
Any news here? | https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=196862 | CC-MAIN-2019-39 | en | refinedweb |
For my first post, I figured that I would leverage a prototype that I worked on last week. I was investigating some of the issues that one of the Fab40 templates has after you upgrade the server from 2007 to 2010. I learned that the Knowledge Base template has a novel way of solving one of the problems that lookup fields have when using a document library.
Before we dig into the solution to the problem, I should first spend some time describing the problem.
When you create a lookup field, you need to choose a field from the “other” list to be used as the display value. When you are editing an item from “this” list, you will be shown a drop-down list box that allows you to choose an item from the “other” list. As each item from the other list is added to the drop-down list, some text value that uniquely identifies the item needs to be displayed. Some fields work better than others.
For example, let’s say that you are adding a lookup field that is going to be used to select someone from a list of contacts. Using the Zip Code as the field that is displayed in the drop-down list isn’t likely to be very helpful. It would be extremely common for you to have multiple contacts that are in the same zip code. You would then see the same zip code listed more than once in the drop-down list and you would have no idea which entry represents which contact. Choose a contact would be a pretty frustrating experience. Using a field like Name would be a lot better. Granted, the Name field could also contain duplicates. But it would still be a whole lot better than a field like Zip Code which is likely to contain many duplicates.
The problem we have is when the “other” list is a document library. Which field should you choose for the lookup field to display? You could use the Title field but it doesn’t always get populated. Upload some text files into a document library and they will all wind up with an empty value for the Title field. Even if a Title value is specified for each document, this field allows duplicates which, as described above, is not good. So, is there anything about the documents in the document library that is normally going to be unique? Well, the file name. Duh! However, try creating a lookup field to a document library and you’ll notice that there is no file name field to choose. Doh!
The file name is exposed through the object model as a field named FileLeafRef. So why isn't this field available to use in lookup fields? I’m still trying to find someone that can explain that to me. If I figure it out I’ll update this post.
Ok, so now we can (finally) get back to the Knowledge Base site template that I was looking into and how it solved this problem.
One of the features that the Knowledge Base site template has is that articles written in the wiki library can each have one or more related articles. This was accomplished using a multi-value lookup field. Now, multi-valued lookup fields don’t exactly have the greatest UI experience. But they do allow the KB article author to accomplish the scenario without the KB site template developer having to invest in custom (read: expensive) list item selection UI.
But if we're using a multi-value lookup field to choose a document from a document library, what was the field that was used as the lookup field? Well, what the KB site template did was to add a text field to the document library called “Name” in conjunction with an event handler that would ensure that the value of this field always contained the actual name of the file. As wiki articles were created or modified, the event handler would always ensure that the Name field contained the right value.
Now, the way that the KB site template did this was specific to this site template and isn’t really reusable in other site templates. The concept is reusable but the specific implementation in that site template isn’t. So I created a sandboxed solution that contains a reusable implementation of this concept.
Using this solution is pretty easy:
- Upload the solution - Upload the solution to the Solution Gallery in your site collection and activate it.
- Activate the site feature - In each of the sites where you want to use this File Name field, activate the site feature named “BillGr's File Name Field”.
- Add the site column - In each document library where you want to use this field, add the site column named File Name to either the document library or the content types that are used in the document library.
Solution Implementation
In this solution, you’ll find three things:
- A site column that defines the File Name field.
- An assembly that contains a sandboxed event handler.
- A feature (which can be activated/deactivated) with two element manifests:
- One that defines the File Name field.
- One that wires-up the event handler to the ItemAdding and ItemUpdating events.
“File Name” Site Column Definition
One thing to note about the site column is that it has been defined with a very unique internal name (the Name attribute). This is to deal with field name collisions. What if a user has already defined a File Name column? What if they already have other data in that column? What if they already have business logic that is driven by the value of that field. My event handler can’t simply look for the first field named “File Name” that it finds and blindly overwrite its value with the current file name. This is one of the reasons why a field definition has both an internal name and a display name. As an aside, the other main reason for internal names is so that compiled code doesn’t need to be changed when the name of a field is localized into another language. But that’s a subject for a different post.
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="">

  <!-- NOTE: The Name attribute here must match the value of the FileNameFieldInternalName property
       of the ListItemEvents class that is in ListItemEvents\ListItemEvents.cs -->
  <Field ID="{1511BF28-A787-4061-B2E1-71F64CC93FD5}"
         Name="BillGrFileNameField_FileName"
         DisplayName="File Name"
         Type="Text"
         Required="FALSE">
  </Field>

</Elements>
In this case, we’ve given our field an internal name that uses the solution name as a prefix. It is highly unlikely that another solution will define another field with this same internal name. Later, our event handler will use the same internal name when accessing the field value from the list item.
Event Handler Wire-Up
When a feature definition includes an event handler, usually the event handler has a ListTemplateId or ListUrl attribute. These attributes are normally used to narrow down the scope of the event handler to a specific list type. Since I haven’t included any of these attributes, my event handler will be globally registered. It will be called whenever any item in a list or document library is added or changed – anywhere in the site.
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="">
  <Receivers>
    <Receiver>
      <Name>ListItemEventsItemAdding</Name>
      <Type>ItemAdding</Type>
      <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
      <Class>BillGr.Samples.FileNameField.ListItemEvents</Class>
      <SequenceNumber>10000</SequenceNumber>
    </Receiver>
    <Receiver>
      <Name>ListItemEventsItemUpdating</Name>
      <Type>ItemUpdating</Type>
      <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
      <Class>BillGr.Samples.FileNameField.ListItemEvents</Class>
      <SequenceNumber>10000</SequenceNumber>
    </Receiver>
  </Receivers>
</Elements>
Event Handler Implementation
The event handler simply ensures that the value of the File Name field is correct given the current name of the document. The code is relatively simple. Whenever we’ve been notified that an item has changed, we first have to confirm that the item is in a document library. If it is a “regular” list item, then it won't have a file, so there isn’t any file name to stay in sync with. Now, please, no arguments here about Attachments <grin />. SharePoint treats the primary file stream of a document library item very differently from an attachment to a list item.
In addition to confirming that the item is in a document library, the event handler also needs to confirm that the item has a “File Name” field. If it doesn’t have the field then, again, there isn’t anything that we need to do.
Once we’ve ruled out these other scenarios, all that remains is to extract the file name from the URL and then ensure that the value of the “File Name” field contains this same file name.
using System;
using System.IO;
using System.Security.Permissions;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Security;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.Workflow;

namespace BillGr.Samples.FileNameField
{
    /// <summary>
    /// List Item Events
    /// </summary>
    public class ListItemEvents : SPItemEventReceiver
    {
        /// <summary>
        /// Gets a string containing the internal name of the "File Name" field.
        /// </summary>
        public static string FileNameFieldInternalName
        {
            get
            {
                // NOTE: The value used here has to match the value of the Name attribute in the field
                // definition that is in FileNameSharedField\Elements.xml.
                return "BillGrFileNameField_FileName"; // Ensure that this is globally unique by using a prefix that is not likely to be used by someone else.
            }
        }

        /// <summary>
        /// An item is being added.
        /// </summary>
        /// <param name="properties">An SPItemEventProperties object that represents properties of the event.</param>
        public override void ItemAdding(SPItemEventProperties properties)
        {
            ListItemEvents.UpdateFileNameField(properties);
            properties.Status = SPEventReceiverStatus.Continue;
        }

        /// <summary>
        /// An item is being updated.
        /// </summary>
        /// <param name="properties">An SPItemEventProperties object that represents properties of the event.</param>
        public override void ItemUpdating(SPItemEventProperties properties)
        {
            ListItemEvents.UpdateFileNameField(properties);
            properties.Status = SPEventReceiverStatus.Continue;
        }

        /// <summary>
        /// Helper method that ensures that the "File Name" field contains the correct value for the specified list item.
        /// </summary>
        /// <param name="properties">An SPItemEventProperties object that represents properties of the event.</param>
        private static void UpdateFileNameField(SPItemEventProperties properties)
        {
            // We can only do this if the item is in some form of a doc-lib and the content type of
            // the item includes the "File Name" field.
            if ((properties.ListItem == null) ||
                (properties.ListItem.File == null) ||
                (!properties.ListItem.Fields.ContainsField(ListItemEvents.FileNameFieldInternalName)))
            {
                return;
            }

            // Extract just the file name from the server-relative URL. Note: URLs with forward slashes
            // are compatible with the System.IO.Path class. We are using that here so that we don't
            // have to do any URL parsing. Any time that we try to do URL parsing on our own we
            // eventually get into trouble because of some corner case that wasn't accounted for in
            // the parsing code. We'll avoid that problem if we leverage the Path class, which has
            // already been extensively tested.
            string fileNameWithoutExtension = Path.GetFileNameWithoutExtension(properties.AfterUrl);

            // Ensure that the "File Name" field contains the current file name.
            properties.AfterProperties[ListItemEvents.FileNameFieldInternalName] = fileNameWithoutExtension;
        }

    } // class ListItemEvents
}
Downloading The Sample
There are two downloads available from the MSDN Code Gallery for this sample:
- SharePoint Solution - This is a working .WSP that you can upload to the Solution Gallery for your site collection and activate it. Once you do this, the “BillGr's File Name Field” feature will be available to activate in all sites of your site collection. After activating the feature, you can add the site column named “File Name” to a document library to start getting updated file names. You'll find this site column in the “Custom Columns” group.
- Visual Studio Project - This is the VS 2010 SharePoint solution that was used to generate the solution above.
As with any of my other samples, you are free to use the solution in your site or take the source code and leverage it in some other solution that you are working on. Just keep in mind that this is a sample and, as such, has no warranty, no guarantee that it will work in all environments, and no promise that a future version of SharePoint won't break it.
Is this WSP for SharePoint 2007 or SharePoint 2010?
The concept works on either SharePoint 2007 or SharePoint 2010. The sample that you can download is a sandboxed solution for use with SharePoint 2010.
can you get the filename to display in a column for a task list and not a document library? Ex. A list item has an attachment and you want a column to display the name of the attachment? (working with sharepoint 2007)?
Tutorial: Create a blog using unite cms + vue.js
In this tutorial we will create a simple blog using a single page vue.js app that displays articles, stored in unite cms. The whole vue.js project can be found in this repository.
unite cms is an open source CMS written in PHP with a headless architecture. As a (frontend) developer you only get a GraphQL API endpoint and can choose any technology (vue.js in this case) to create an app that consumes this API.
Technology Stack
- unitecms.io cloud for the content management part (Note: In this tutorial we are using the unitecms.io cloud. You can also install unite cms using composer on your own server)
- vue-cli for creating a new vue project
- vue-apollo to consume unite cms’ GraphQL API
Creating the content structure
unite cms is a content management system that allows you to manage content for any kind of application (and not just blogs). Because of this, the CMS comes without any default content types, and we need to create an appropriate structure for our blog.
Create your first unite cms domain
To create your first unite cms domain, you need to register a free account at unitecms.io. During registration, you need to create an Organization. This will be your unitecms.io subdomain namespace where all your data will be stored.
After creating your account, you will get redirected to your very own unite cms subdomain where all your content will be stored.
unite cms allows you to create different Domains to group your content types logically. Let's create a new Domain for our blog. A unite cms Domain consists of content-types that allow you to manage repeatable content, setting-types that hold a single setting element, and domain-member-types that define access to this domain. In this tutorial we will only create content- and setting-types and stick with the two default domain-member-types ("editor" and "viewer"). More information about domain member types can be found in the docs.
Our sample blog will show multiple articles and a single about page, so we will create an "articles" content-type and a "website" setting-type that allows us to manage the content of the about page as well as a general page header and footer text.
After clicking “+ Add a domain” you can insert the following domain schema, that defines our content structure:
Note: We want to save an image with each article. unite cms lets you manage images using an image field type that stores images in an S3 bucket. Before you can actually create the blog domain, you need to replace the bucket placeholders with a real S3 bucket (for example an Amazon AWS bucket; the endpoint would then be your bucket's S3 endpoint URL).
If your domain schema is valid, a new domain will be created and you can add your first blog article as well as your website config:
Create an API key to access the content
unite cms automatically generates a GraphQL API endpoint for each of your domains. However, before we can explore the endpoint, you need to create an API key and add it as a Viewer to the Blog Domain to grant read access.
API keys are managed on the Organization level, so navigate to your Organization overview page and create a new API key; let's call it "Public Blog Key".
After creating the API key (allowed origin: * is fine for the moment), copy the access token (we will need it in the next step), navigate back to the blog Domain and add the API key as a Viewer Member.
Note: Default domain permissions were generated automatically when you created the domain (all members can view, editors can update and delete, Organization admins bypass permissions). You can always check the "Update domain" page to see the permissions for all content and setting types.
Exploring the GraphQL API
GraphQL allows you to write queries against the API endpoint in order to control which data should be returned (the official GraphQL documentation is a good starting point). To explore our GraphQL API endpoint, I'm using the great GraphiQL client as a Chrome extension.
Now enter your API endpoint ({organization}.unitecms.io/{domain}/api?token={token}). If everything was set up correctly, you will now see a documentation explorer of all available objects on the right side:
Let’s try to get all articles, ordered by date as well as your website settings:
Create the vue.js app to display the articles
The last step of this tutorial is to use the GraphQL API in a vue.js application (The full project source can be found on Github). To setup the development environment, we are using yarn and the vue-cli:
NOTE: Apollo comes with many powerful tools that do a lot of stuff for you out of the box (like caching). If you prefer to write your own logic, you can use this minimal graphql client as an alternative.
Create the basic project structure
Our app's starting point is the main.js file, where vue and the apollo-client are initialized and our pages are registered as routes:
By using vue-apollo it is very easy to connect a GraphQL query result to the UI. Things like fetching the content, updating component properties, caching etc. work out of the box. This is the very lean and straightforward about-component that displays a page with the about text:
Following this pattern, all other components can be created in a matter of minutes. Together with a little bit of CSS code, your first application, powered by unite cms, is almost complete. Just run `yarn build` and a production-ready version of your code will be generated, which should look something like this:
I hope that our first tutorial was fun to read and easy to understand. If you want to get more information about unite cms, you can check out the docs. For questions, bug reports or feature requests, you can create a new issue at our github repository.
Created
5 November 2007
Requirements
Prerequisite knowledge
As a great deal of this article is about developing your own software component, I assume that you are comfortable coding in Java. You should also have a basic working knowledge of the LiveCycle ES workflow environment, including the LiveCycle ES Workbench tool.
User level
Intermediate
This is Part 1 of the series. The goal of this article is to introduce the basic environment around a custom Document Service Component (DSC) and to show how you can go about developing one yourself. Part 2 will expand on that information and demonstrate more advanced Java constructs (beans, enumerated types, etc.).
With Adobe LiveCycle ES software, you have the ability to extend the product's functionality using Java classes. This means that you can develop your own Java classes that act as operations within a LiveCycle ES workflow. Extending the LiveCycle product in this way enables you to add your own functionality, interact with your existing programs, and integrate with third-party applications. Once a component is deployed to LiveCycle ES software, the services contained within the component become LiveCycle services.
Component Architecture
Document Service Components consist of three main elements (see Figure 1):
- Service class(es): your Java classes containing the code that is executed when the service is run. The service will have at least one class and quite often will contain several classes, depending on the complexity of the project.
- Resources file(s): additional libraries, images, property files, and other resources required by the service. This may include utility JAR files as well as the icons for your DSC. (I'll explain the icons later.)
- Component file: a single component.xml file describing all aspects of your component to LiveCycle ES. The component.xml file is used for the DSC property sheet as well as several other operations.
These files will be combined into a single JAR file that will be imported into LiveCycle ES software using the Workbench tool.
A simple DSC
The easiest way to explain DSC development is to build a custom component. Here I will use a very simple "Hello, world" example consisting of a single Java class file and the required component.xml.
The Java class
Create a simple project and Java class file using your favorite editor (I prefer Eclipse, but the choice is up to you) called helloComponent.java. Put it in whatever package you want; I'm using com.adobe.sample. Add a method that takes a string and returns another. You can then add some code to return a string that includes the input string—something like this:
package com.adobe.sample;

public class helloComponent {
    public String hi(String yourName) {
        return "Hello, " + yourName;
    }
}
Compile the class into a .class file.
The Component XML file
Okay, now that you have a simple Java class, add the component descriptor file. To be frank, it's a bit of a pain to write the XML file from scratch. You can copy one from an existing component (or from this article).
<component xmlns="">
  <component-id>com.adobe.sample.helloComponent</component-id>
  <version>8.0.0</version>
  <supported-connectors/>
  <supports-export>true</supports-export>
  <services>
    <service name="helloComponent" orchestrateable="true" title="Hello Component">
      <hint>A simple component to show how to build a DSC</hint>
      <auto-deploy category-
      <implementation-class>com.adobe.sample.helloComponent</implementation-class>
      <operations>
        <operation name="hi">
          <hint>Returns a string</hint>
          <input-parameter name="yourName" type="java.lang.String">
            <hint>Put your name here</hint>
            <supported-expr-types>Literal,XPath,Template,Variable</supported-expr-types>
          </input-parameter>
          <output-parameter type="java.lang.String">
            <description>A message from the component</description>
            <hint>A message from the component</hint>
          </output-parameter>
          <description>A Hello, world component</description>
        </operation>
      </operations>
    </service>
  </services>
</component>
This sample includes the following tags:
component-id: a unique identifier for your component.

version: the version info for your component.

supports-export: specifies whether the component should be included if you export an archive that uses the component; that is, should the component be exported as part of the LiveCycle Archive (LAR) file.

services: describes the services included in this component (this one only has one service).

service name: the name of the service once it is deployed.

auto-deploy: The category-id attribute is the name of the service category in which this component will be deployed.

implementation-class: the package and class name of your component.

operations: These equate to public methods in your class. Any methods you wish to expose need to have their own operation tag.

operation name: the name of the method.

input-parameter: where you list the method's input parameters. Note that the type attribute corresponds to the Java class for that parameter.

output-parameter: the object that is returned by the method.

Note that there are many other tags, but this is all we need for this simple example.
Make a JAR file
Now that you have a compiled class file and the proper component.xml file, you can create a Java Archive file that will contain all of the component parts. The JAR file will be imported into LiveCycle ES software using the Workbench tool. You can build the JAR file using a command line tool or using your IDE; however, there are a few things to keep in mind:
- Include all Java class files
- Include all resource files (images, utility JAR files, other resources)
- Include the component.xml; make sure it's at the root.
Note: If you use Eclipse to create your JAR file, make sure that the "Compress the contents of the JAR file" option is off. You will not be able to import compressed JAR files.
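These packaging rules can also be scripted. The sketch below uses Python's zipfile module purely as an illustration (a JAR is just a ZIP archive, and `ZIP_STORED` corresponds to leaving compression off, matching the Eclipse note above); the file contents and archive paths here are stand-ins, not the real build inputs.

```python
import tempfile
import zipfile
from pathlib import Path

def build_component_jar(jar_path, class_files, component_xml):
    """Package compiled classes and component.xml into an uncompressed JAR.

    ZIP_STORED leaves every entry uncompressed, which matches the
    requirement that the JAR contents must not be compressed.
    """
    with zipfile.ZipFile(jar_path, "w", compression=zipfile.ZIP_STORED) as jar:
        for path, archive_name in class_files:
            jar.write(path, archive_name)
        # component.xml must sit at the root of the archive.
        jar.write(component_xml, "component.xml")

# Demonstration with stand-in files (the real inputs would be your
# compiled .class files and component descriptor).
tmp = Path(tempfile.mkdtemp())
(tmp / "helloComponent.class").write_bytes(b"\xca\xfe\xba\xbe")  # fake class file
(tmp / "component.xml").write_text("<component/>")

jar_file = tmp / "helloComponent.jar"
build_component_jar(
    jar_file,
    [(tmp / "helloComponent.class", "com/adobe/sample/helloComponent.class")],
    tmp / "component.xml",
)

names = zipfile.ZipFile(jar_file).namelist()
```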
Import your component
The following steps will import your component into LiveCycle ES software:
- Open LiveCycle Workbench ES and log in to a LiveCycle ES server.
- Switch to the Components window (if you don't see the Components window, choose Window > Show Views > Components from the menu bar).
- Right-click the root of the Components tree and choose "Install Component…"
- Browse to your JAR file and click Open.
- Expand the Components tree and find your component (the components arelisted by package and class name).
- Right-click the component and choose Start Component.
The component should now appear in the Services tab under the category you specified in the component.xml file (see Figure 2). You can now use your component in your workflow process. Note that the property sheet will reflect the information you put in the component.xml file.
Where to go from here
This article shows a pretty simple component, but it is a starting point. To learn how to create an advanced service component, read Part 2 of this series. For more in-depth information on developing components, please refer to the official Adobe documentation.
dva 1.0 — a lightweight framework based on react, redux and redux-saga
Hey,
- If you like redux;
- If you like concepts from elm;
- If you want your code clean enough;
- If you don't want to have to remember many APIs (only 5 methods);
- If you want to handle async logic gracefully;
- If you want to handle error uniformly;
- If you want to use it in pc, h5 mobile and react-native;
- If you don’t want to write showLoading and hideLoading hundreds of times;
- …
What’s dva
Dva is a lightweight, react-and-redux-based, elm-style framework which aims to make building React/Redux applications easier and better.
If you like react/redux/redux-saga/react-router, you’ll love dva. :ghost:
This is how a dva app is organized, with only 5 APIs.
import dva, { connect } from 'dva';
// 1. Create app
const app = dva();
// 2. Add plugins (optionally)
app.use(plugin);
// 3. Register models
app.model(model);
// 4. Connect components and models
const App = connect(mapStateToProps)(Component);
// 5. Config router with Components
app.router(routes);
// 6. Start app
app.start('#root');
How dva works
View Concepts for more on Model, Reducer, Effect, Subscription and so on.
Why is it called dva
dva is a hero from Overwatch. She is beautiful and cute, and "dva" was the shortest name still available on npm when the project was created.
Who is using dva
Packages dva built on
- views: react
- models: redux, react-redux
- router: react-router
- http: whatwg-fetch
You can:
- View dva offical website
- Getting Started: get familiar with the concepts by creating a count app
- Examples like dva-hackernews
XPath is the XML Path Language for defining how a specific element in an XML document can be located. It's sort of like the '#' convention in HTML URLs, but for XML.
XPath defines a syntax and specification for addressing different parts of an XML document. It can also be used to address functions in a library.
An XPath expression, when evaluated, results in a set of nodes, a boolean, a number, or a string. XPath expressions are evaluated in a context, which in most cases will be some node (the context node), but which can also be a namespace, a function library, a set of variable bindings, or a pair of non-zero integers (context position and size).

The context is usually determined by the system doing the processing; an XSLT processor, for instance.
The nodes in this metanode will sometimes refer to this document fragment to provide some examples:
<zoo>
  <animals>
    <dog breed="collie"/>
    <cat breed="tabby"/>
  </animals>
  <people>
    <person name="John" job="keeper"/>
    <person name="Amy" job="vet"/>
  </people>
</zoo>
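As an illustration, here are a few XPath expressions evaluated against that fragment using Python's `xml.etree.ElementTree`, which implements a limited subset of XPath; the empty elements are written self-closed so that the fragment parses as well-formed XML.

```python
import xml.etree.ElementTree as ET

# The zoo fragment from above, with the empty elements self-closed.
ZOO = """
<zoo>
  <animals>
    <dog breed="collie"/>
    <cat breed="tabby"/>
  </animals>
  <people>
    <person name="John" job="keeper"/>
    <person name="Amy" job="vet"/>
  </people>
</zoo>
"""

root = ET.fromstring(ZOO)

# /zoo/animals/dog  -- a path from the root down to the dog element
dogs = root.findall("./animals/dog")

# //person          -- every person element anywhere in the document
people = root.findall(".//person")

# //person[@job='vet'] -- a predicate filtering on an attribute value
vets = root.findall(".//person[@job='vet']")
```

Each expression yields a set of nodes, matching the result types described above.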
Most of this information is condensed from the XPath specs at and from articles on.
Character sets, character codes, and character encodings
Character sets, character codes, and character encodings: I had never been clear about the differences and connections between them. Recently I kept running into trouble with this at work, so I finally made up my mind to sort them out!
Original article:
If you come to understand all of this, you might even try writing an automatic character-encoding detector for a browser! Mozilla's corresponding program can be found at:
A tutorial on character code issues
Contents
- The basics
- Definitions: character repertoire, character code, character encoding
- Examples of character codes
- Good old ASCII
- Another example: ISO Latin 1 alias ISO 8859-1
- More examples: the Windows character set(s)
- The ISO 8859 family
- Other "extensions to ASCII"
- Other "8-bit codes"
- ISO 10646 (UCS) and Unicode
- More about the character concept
- The Unicode view
- Control characters (control codes)
- A glyph - a visual appearance
- What's in a name?
- Glyph variation
- Fonts
- Identity of characters: a matter of definition
- Failures to display a character
- Linear text vs. mathematical notations
- Compatibility characters
- Compositions and decompositions
- Typing characters
- Just pressing a key?
- Program-specific methods for typing characters
- "Escape" notations ("meta notations") for characters
- How to mention (identify) a character
- Information about encoding
- The need for information about encoding
- The MIME solution
- An auxiliary encoding: Quoted-Printable (QP)
- How MIME should work in practice
- Problems with implementations - examples
- Practical conclusions
- Further reading. This document in itself does not contain solutions to practical problems with character codes (but see section Further reading ). Rather, it gives background information needed for understanding what solutions there might be, what the different solutions do - and what's really the problem in the first place.
If you are looking for some quick help in using a large character repertoire in HTML authoring, see the document Using national and special characters in HTML .
Several technical terms related to character sets (e.g. glyph, encoding) can be difficult to understand, due to various confusions and due to having different names in different languages and contexts. The EuroDicAutom online database can be useful: it contains translations and definitions for several technical terms used here.
The basics
In computers and in data transmission between them, i.e. in digital data processing and transfer, data is internally presented as octets, as a rule. An octet is a small unit of data with a numerical value between 0 and 255, inclusively. The numerical values are presented in the normal (decimal) notation here, but notice that other presentations are used too, especially octal (base 8) or hexadecimal (base 16) notation. Octets are often called bytes, but in principle, octet is a more definite concept than byte. Internally, octets consist of eight bits (hence the name, from Latin octo 'eight'), but we need not go into bit level here. However, you might need to know what the phrase "first bit set" or "sign bit set" means, since it is often used. In terms of numerical values of octets, it means that the value is greater than 127. In various contexts, such octets are sometimes interpreted as negative numbers, and this may cause various problems.
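As a quick sanity check of that claim, the following sketch (in Python, just for illustration) confirms that the most significant bit of an octet is set exactly when its value exceeds 127, and shows the signed reinterpretation that causes the problems mentioned:

```python
def sign_bit_set(octet: int) -> bool:
    """True if the most significant bit of an 8-bit value is set."""
    if not 0 <= octet <= 255:
        raise ValueError("not an octet")
    return octet & 0x80 != 0

# The bit test agrees with the numeric test "value > 127" for every octet.
agreement = all(sign_bit_set(b) == (b > 127) for b in range(256))

# Interpreted as a signed 8-bit number, such an octet comes out negative,
# which is the source of the problems mentioned above.
def as_signed(octet: int) -> int:
    return octet - 256 if octet > 127 else octet
```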
Different conventions can be established as regards to how an octet or a sequence of octets presents some data. For instance, four consecutive octets often form a unit that presents a real number according to a specific standard. We are here interested in the presentation of character data (or string data; a string is a sequence of characters) only.
In the simplest case, which is still widely used, one octet corresponds to one character according to some mapping table (encoding). Naturally, this allows at most 256 different characters to be represented. There are several different encodings, such as the well-known ASCII encoding and the ISO Latin family of encodings. The correct interpretation and processing of character data of course requires knowledge about the encoding used. For HTML documents, such information should be sent by the Web server along with the document itself, using so-called HTTP headers (cf. MIME headers).
Previously the ASCII encoding was usually assumed by default (and it is still very common). Nowadays ISO Latin 1, which can be regarded as an extension of ASCII, is often the default. The current trend is to avoid giving such a special position to ISO Latin 1 among the variety of encodings.
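The relationship between the two can be demonstrated directly. In the sketch below (Python, purely for illustration), the Latin 1 codec agrees with ASCII on ASCII's repertoire and uses a single octet per character throughout:

```python
# ISO Latin 1 is an "extension of ASCII": on ASCII's repertoire the two
# encodings produce identical octets, one octet per character.
ascii_bytes = "abc".encode("ascii")      # b"abc"
latin1_bytes = "abc".encode("latin-1")   # b"abc"

# Characters outside ASCII still take a single octet in Latin 1;
# "ä" occupies code position 228 (0xE4).
a_umlaut = list("ä".encode("latin-1"))   # [228]
```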
Definitions
The following definitions are not universally accepted and used. In fact, one of the greatest causes of confusion around character set issues is that terminology varies and is sometimes misleading.
- character repertoire
- A set of distinct characters. No specific internal presentation in computers or data transfer is assumed. The repertoire per se does not even define an ordering for the characters; ordering for sorting and other purposes is to be specified separately. A character repertoire is usually defined by specifying names of characters and a sample (or reference) presentation of characters in visible form. Notice that a character repertoire may contain characters which look the same in some presentations but are regarded as logically distinct, such as Latin uppercase A, Cyrillic uppercase A, and Greek uppercase alpha. For more about this, see a discussion of the character concept later in this document.
- character code
- A mapping, often presented in tabular form, which defines a one-to-one correspondence between characters in a character repertoire and a set of nonnegative integers. That is, it assigns a unique numerical code, a code position, to each character in the repertoire. In addition to being often presented as one or more tables, the code as a whole can be regarded as a single table and the code positions as indexes. As synonyms for "code position", the following terms are also in use: code number, code value, code element, code point, code set value - and just code. Note: The set of nonnegative integers corresponding to characters need not consist of consecutive numbers; in fact, most character codes have "holes", such as code positions reserved for control functions or for eventual future use to be defined later.
- character encoding
- A method (algorithm) for presenting characters in digital form by mapping sequences of code numbers of characters into sequences of octets. In the simplest case, each character is mapped to an integer in the range 0 - 255 according to a character code and these are used as such as octets. Naturally, this only works for character repertoires with at most 256 characters. For larger sets, more complicated encodings are needed. Encodings have names, which can be registered.
Notice that a character code assumes or implicitly defines a character repertoire. A character encoding could, in principle, be viewed purely as a method of mapping a sequence of integers to a sequence of octets. However, quite often an encoding is specified in terms of a character code (and the implied character repertoire). The logical structure is still the following:
- A character repertoire specifies a collection of characters, such as "a", "!", and "ä".
- A character code defines numeric codes for characters in a repertoire. For example, in the ISO 10646 character code the numeric codes for "a", "!", "ä", and "‰" (per mille sign) are 97, 33, 228, and 8240. (Note: Especially the per mille sign, presenting 0 /00 as a single character, can be shown incorrectly on display or on paper. That would be an illustration of the symptoms of the problems we are discussing.)
- A character encoding defines how sequences of numeric codes are presented as (i.e., mapped to) sequences of octets. In one possible encoding for ISO 10646, the string a!ä‰ is presented as the following sequence of octets (using two octets for each character): 0, 97, 0, 33, 0, 228, 32, 48. (Note: Especially the per mille sign, presenting 0/00 as a single character, can be shown incorrectly on display or on paper. That would be an illustration of the symptoms of the problems we are discussing.)
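The three layers in the example can be verified mechanically. In the Python sketch below, `ord` yields the ISO 10646/Unicode code positions, and the UTF-16BE codec happens to be exactly the "two octets per character" encoding used above:

```python
text = "a!ä‰"                        # "‰" is the per mille sign, U+2030

# Character code: the numeric code positions of each character.
code_positions = [ord(c) for c in text]     # [97, 33, 228, 8240]

# Character encoding: each code number mapped to two octets (big-endian).
octets = list(text.encode("utf-16-be"))     # [0, 97, 0, 33, 0, 228, 32, 48]
```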
For a more rigorous explanation of these basic concepts, see Unicode Technical Report #17: Character Encoding Model .
The phrase character set is used in a variety of meanings. It might denote just a character repertoire but it may also refer to a character code, and quite often a particular character encoding is implied too.
Unfortunately the word charset is used to refer to an encoding, causing much confusion. It is even the official term to be used in several contexts by Internet protocols, in MIME headers.
Quite often the choice of a character repertoire, code, or encoding is presented as the choice of a language. For example, Web browsers typically confuse things quite a lot in this area. A pulldown menu in a program might be labeled "Languages", yet consist of character encoding choices (only). A language setting is quite distinct from character issues, although naturally each language has its own requirements on character repertoire. Even more seriously, programs and their documentation very often confuse the above-mentioned issues with the selection of a font.
Examples of character codes
Good old ASCII
The basics of ASCII
The name ASCII, originally an abbreviation for "American Standard Code for Information Interchange", denotes an old character repertoire, code, and encoding.
Most character codes currently in use contain ASCII as their subset in some sense. ASCII is the safest character repertoire to be used in data transfer. However, not even all ASCII characters are "safe"!
ASCII has been used and is used so widely that often the word ASCII refers to "text" or "plain text" in general, even if the character code is something else! The words "ASCII file" quite often mean any text file as opposed to a binary file.
The definition of ASCII also specifies a set of control codes ("control characters") such as linefeed (LF) and escape (ESC). But the character repertoire proper, consisting of the printable characters of ASCII, is the following (where the first item is the blank, or space, character):
  ! " # $ % & ' ( ) * + , - . /
0 1 2 3 4 5 6 7 8 9 : ; < = > ?
@ A B C D E F G H I J K L M N O
P Q R S T U V W X Y Z [ \ ] ^ _
` a b c d e f g h i j k l m n o
p q r s t u v w x y z { | } ~

The appearance of characters varies, of course, especially for some special characters. Some of the variation and other details are explained in The ISO Latin 1 character repertoire - a description with usage notes.
A formal view on ASCII
The character code defined by the ASCII standard is the following: code values are assigned to characters consecutively in the order in which the characters are listed above (rowwise), starting from 32 (assigned to the blank) and ending up with 126 (assigned to the tilde character ~). Positions 0 through 31 and 127 are reserved for control codes. They have standardized names and descriptions, but in fact their usage varies a lot.
The character encoding specified by the ASCII standard is very simple, and the most obvious one for any character code where the code numbers do not exceed 255: each code number is presented as an octet with the same value.
Octets 128 - 255 are not used in ASCII. (This allows programs to use the first, most significant bit of an octet as a parity bit, for example.)
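This simple code and encoding can be checked with Python's ascii codec (shown here just as an illustration of the rules above):

```python
# Printable code positions run from 32 (the blank) to 126 (the tilde).
space_code = ord(" ")   # 32
tilde_code = ord("~")   # 126

# The ASCII encoding: each code number becomes one octet with the same value.
octets = list("Hi!".encode("ascii"))   # [72, 105, 33]

# Octet values above 127 are never produced, and characters outside the
# repertoire cannot be encoded at all.
try:
    "ä".encode("ascii")
    encodes_outside_repertoire = True
except UnicodeEncodeError:
    encodes_outside_repertoire = False
```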
National variants of ASCII
There are several national variants of ASCII. In such variants, some special characters have been replaced by national letters (and other symbols). There is great variation here, and even within one country and for one language there might be different variants. The original ASCII is therefore often referred to as US-ASCII; the formal standard (by ANSI) is ANSI X3.4-1986.
The phrase "original ASCII" is perhaps not quite adequate, since the creation of ASCII started in late 1950s, and several additions and modifications were made in the 1960s. The 1963 version had several unassigned code positions. The ANSI standard, where those positions were assigned, mainly to accommodate lower case letters, was approved in 1967/1968, later modified slightly. For the early history, including pre-ASCII character codes, see Steven J. Searle's A Brief History of Character Codes in North America, Europe, and East Asia and Tom Jennings' ASCII: American Standard Code for Information Infiltration . See also Jim Price 's ASCII Chart , Mary Brandel's 1963: ASCII Debuts , and the computer history documents , including the background and creation of ASCII, written by Bob Bemer , "father of ASCII".
The international standard ISO 646 defines a character set similar to US-ASCII but with code positions corresponding to US-ASCII characters @[/]{|} as "national use positions". It also gives some liberties with characters #$^`~ . The standard also defines "international reference version (IRV)", which is (in the 1991 edition of ISO 646) identical to US-ASCII. Ecma International has issued the ECMA-6 standard, which is equivalent to ISO 646 and is freely available on the Web.
Within the framework of ISO 646, and partly otherwise too, several "national variants of ASCII" have been defined, assigning different letters and symbols to the "national use" positions. Thus, the characters that appear in those positions - including those in US-ASCII - are somewhat "unsafe" in international data transfer, although this problem is losing significance. The trend is towards using the corresponding codes strictly for US-ASCII meanings; national characters are handled otherwise, giving them their own, unique and universal code positions in character codes larger than ASCII. But old software and devices may still reflect various "national variants of ASCII".
The following table lists ASCII characters which might be replaced by other characters in national variants of ASCII. (That is, the code positions of these US-ASCII characters might be occupied by other characters needed for national use.) The lists of characters appearing in national variants are not intended to be exhaustive, just typical examples .
Almost all of the characters used in the national variants have been incorporated into ISO Latin 1 . Systems that support ISO Latin 1 in principle may still reflect the use of national variants of ASCII in some details; for example, an ASCII character might get printed or displayed according to some national variant. Thus, even "plain ASCII text" is thereby not always portable from one system or application to another.
More information about national variants and their impact:
- Johan van Wingen : International standardization of 7-bit codes, ISO 646 ; contains a comparison table of national variants
- Digression on national 7-bit codes by Alan J. Flavell
- The ISO 646 page by Roman Czyborra
- Character tables by Koichi Yasuoka .
Subsets of ASCII for safety
Mainly due to the "national variants" discussed above, some characters are less "safe" than others, i.e. more often transferred or interpreted incorrectly.
In addition to the letters of the English alphabet ("A" to "Z", and "a" to "z"), the digits ("0" to "9") and the space (" "), only the following characters can be regarded as really "safe" in data transmission:
! " % & ' ( ) * + , - . / : ; < = > ?
Even these characters might eventually be interpreted wrongly by the recipient, e.g. by a human reader seeing a glyph for "&" as something else than what it is intended to denote, or by a program interpreting "<" as starting some special markup , "?" as being a so-called wildcard character, etc.
When you need to name things (e.g. files, variables, data fields, etc.), it is often best to use only the characters listed above, even if a wider character repertoire is possible. Naturally you need to take into account any additional restrictions imposed by the applicable syntax. For example, the rules of a programming language might restrict the character repertoire in identifier names to letters, digits and one or two other characters.
The misnomer "8-bit ASCII"
Sometimes the phrase "8-bit ASCII" is used. It follows from the discussion above that in reality ASCII is strictly and unambiguously a 7-bit code in the sense that all code positions are in the range 0 - 127.
It is a misnomer used to refer to various character codes which are extensions of ASCII in the following sense: the character repertoire contains ASCII as a subset, the code numbers are in the range 0 - 255, and the code numbers of ASCII characters equal their ASCII codes.
Another example: ISO Latin 1 alias ISO 8859-1
The ISO 8859-1 standard (which is part of the ISO 8859 family of standards) defines a character repertoire identified as "Latin alphabet No. 1", commonly called "ISO Latin 1", as well as a character code for it. The repertoire contains the ASCII repertoire as a subset, and the code numbers for those characters are the same as in ASCII. The standard also specifies an encoding , which is similar to that of ASCII: each code number is presented simply as one octet.
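The relationship between ISO 8859-1 and ASCII can be illustrated with a short Python sketch (the codec names are Python's, used here only as an illustration): each code number becomes one octet of the same value, and the ASCII part of the repertoire encodes identically in both codes.

```python
# In ISO 8859-1 each code number is the octet value itself,
# and positions 0..127 coincide with ASCII.
text = 'Aä'                      # 'A' = 65 (ASCII), 'ä' = 228 (Latin 1 only)
octets = text.encode('iso-8859-1')
assert octets == bytes([65, 228])

# ASCII characters therefore have identical encodings in both codes:
assert 'A'.encode('ascii') == 'A'.encode('iso-8859-1')
```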
In addition to the ASCII characters, ISO Latin 1 contains various accented characters and other letters needed for writing languages of Western Europe, and some special characters. These characters occupy code positions 160 - 255.
Notes:
- The first of the characters above appears as space; it is the so-called no-break space .
- The presentation of some characters in copies of this document may be defective e.g. due to lack of font support. You may wish to compare the presentation of the characters on your browser with the character table presented as a GIF image in the famous ISO 8859 Alphabet Soup document. (In text only mode, you may wish to use my simple table of ISO Latin 1 which contains the names of the characters.)
- Naturally, the appearance of characters varies from one font to another.
See also: The ISO Latin 1 character repertoire - a description with usage notes , which presents detailed characterizations of the meanings of the characters and comments on their usage in various contexts.
More examples: the Windows character set(s)
In ISO 8859-1 , code positions 128 - 159 are explicitly reserved for control purposes ; they "correspond to bit combinations that do not represent graphic characters". The so-called Windows character set (WinLatin1, or Windows code page 1252 , to be exact) uses some of those positions for printable characters. Thus, the Windows character set is not identical with ISO 8859-1 . It is, however, true that the Windows character set is much more similar to ISO 8859-1 than the so-called DOS character sets are. The Windows character set is often called "ANSI character set", but this is seriously misleading. It has not been approved by ANSI . (Historical background: Microsoft based the design of the set on a draft for an ANSI standard. A glossary by Microsoft explicitly admits this.)
Note that programs used on Windows systems may use a DOS character set; for example, if you create a text file using a Windows program and then use the type command at the DOS prompt to see its content, strange things may happen, since the DOS command interprets the data according to a DOS character code.
In the Windows character set, some positions in the range 128 - 159 are assigned to printable characters, such as "smart quotes", em dash, en dash, and trademark symbol. Thus, the character repertoire is larger than ISO Latin 1 . The use of octets in the range 128 - 159 in any data to be processed by a program that expects ISO 8859-1 encoded data is an error which might cause just anything. They might for example get ignored, or be processed in a manner which looks meaningful, or be interpreted as control characters . See my document On the use of some MS Windows characters in HTML for a discussion of the problems of using these characters.
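The difference in the range 128 - 159 can be observed directly. The following Python sketch (using Python's codec names as an illustration) decodes the single octet 147 under both interpretations: Windows-1252 yields a printable "smart quote", whereas ISO 8859-1 maps the octet to a C1 control code, not to any printable character.

```python
octet = b'\x93'   # code position 147, inside the range 128..159

# Windows-1252 assigns a printable character here (a "smart quote"):
assert octet.decode('cp1252') == '\u201c'   # LEFT DOUBLE QUOTATION MARK

# ISO 8859-1 reserves the position for control purposes; the octet
# decodes to the C1 control code U+0093, not a printable character:
assert octet.decode('iso-8859-1') == '\x93'
```

This is exactly why octets in this range inside supposedly ISO 8859-1 encoded data "might cause just anything": the recipient has no standard printable interpretation for them.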
The Windows character set exists in different variations, or "code pages" (CP), which generally differ from the corresponding ISO 8859 standard in the same way as code page 1252 does: positions 128 - 159 contain printable characters. (However, there are some more differences between ISO 8859-7 and win-1253 (WinGreek).) See Code page &Co. by Roman Czyborra and Windows codepages by Microsoft. See also CP to Unicode mappings . What we have discussed here is the most usual one, resembling ISO 8859-1. Its status in the official IANA registry was long unclear; an encoding had been registered under the name ISO-8859-1-Windows-3.1-Latin-1 by Hewlett-Packard (!), presumably intending to refer to WinLatin1, but in 1999-12 Microsoft finally registered it under the name windows-1252 . That name has in fact been widely used for it. (The name cp-1252 has been used too, but it isn't officially registered even as an alias name.)
The ISO 8859 family
There are several character codes which are extensions to ASCII in the same sense as ISO 8859-1 and the Windows character set .
ISO 8859-1 itself is just a member of the ISO 8859 family of character codes, which is nicely overviewed in Roman Czyborra's famous document The ISO 8859 Alphabet Soup . The ISO 8859 codes extend the ASCII repertoire in different ways with different special characters (used in different languages and cultures). Just as ISO 8859-1 contains ASCII characters and a collection of characters needed in languages of western (and northern) Europe, there is ISO 8859-2 alias ISO Latin 2 constructed similarly for languages of central/eastern Europe, etc. The ISO 8859 character codes are isomorphic in the following sense: code positions 0 - 127 contain the same character as in ASCII, positions 128 - 159 are unused (reserved for control characters ), and positions 160 - 255 are the varying part, used differently in different members of the ISO 8859 family.
The ISO 8859 character codes are normally presented using the obvious encoding: each code position is presented as one octet. Such encodings have several alternative names in the official registry of character encodings , but the preferred ones are of the form ISO-8859-n .
Although ISO 8859-1 has been a de facto default encoding in many contexts, it has in principle no special role. ISO 8859-15 alias ISO Latin 9 (!) was expected to replace ISO 8859-1 to a great extent, since it contains the politically important symbol for euro , but it seems to have little practical use.
The following table lists the ISO 8859 alphabets, with links to more detailed descriptions. There is a separate document Coverage of European languages by ISO Latin alphabets which you might use to determine which (if any) of the alphabets are suitable for a document in a given language or combination of languages. My other material on ISO 8859 contains a combined character table, too.
Notes: ISO 8859-n is Latin alphabet no. n for n =1,2,3,4, but this correspondence is broken for the other Latin alphabets. ISO 8859-16 is for use in Albanian, Croatian, English, Finnish, French, German, Hungarian, Irish Gaelic (new orthography), Italian, Latin, Polish, Romanian, and Slovenian. In particular, it contains letters s and t with comma below, in order to address an issue of writing Romanian . See the ISO/IEC JTC 1/ SC 2 site for the current status and proposed changes to the ISO 8859 set of standards.
Other "extensions to ASCII"
In addition to the codes discussed above, there are other extensions to ASCII which utilize the code range 0 - 255 ("8-bit ASCII codes" ), such as
- DOS character codes , or "code pages" (CP)
- In MS DOS systems, different character codes are used; they are called "code pages". The original American code page was CP 437, which has e.g. some Greek letters, mathematical symbols, and characters which can be used as elements in simple pseudo-graphics. Later CP 850 became popular, since it contains letters needed for West European languages - largely the same letters as ISO 8859-1 , but in different code positions. See DOS code page to Unicode mapping tables for detailed information. Note that DOS code pages are quite different from Windows character codes , though the latter are sometimes called by names like cp-1252 (= windows-1252 )! For further confusion, Microsoft now prefers to use the notion "OEM code page" for the DOS character set used in a particular country.
- Macintosh character code
- On the Macs , the character code is more uniform than on PCs (although there are some national variants ). The Mac character repertoire is a mixed combination of ASCII, accented letters, mathematical symbols, and other ingredients. See section Text in Mac OS 8 and 9 Developer Documentation .
Notice that many of these are very different from ISO 8859-1. They may have different character repertoires, and the same character often has different code values in different codes. For example, code position 228 is occupied by ä (letter a with dieresis, or umlaut) in ISO 8859-1, by ð (Icelandic letter eth) in HP's Roman-8 , by õ (letter o with tilde) in DOS code page 850, and by the per mille sign (‰) in the Macintosh character code.
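The divergence at code position 228 can be demonstrated concretely. In the following Python sketch (the codec names 'cp850' and 'mac-roman' are Python's labels for the DOS and Macintosh codes), the very same octet decodes to three different characters:

```python
octet = bytes([228])             # code position 228 (0xE4)

assert octet.decode('iso-8859-1') == 'ä'   # a with dieresis (ISO Latin 1)
assert octet.decode('cp850') == 'õ'        # o with tilde (DOS code page 850)
assert octet.decode('mac-roman') == '‰'    # per mille sign (Macintosh)
```

This is why character data transferred between systems without correct code conversion turns letters into apparently random other characters.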
For information about several code pages, see Code page &Co. by Roman Czyborra. See also his excellent description of various Cyrillic encodings , such as different variants of KOI-8; most of them are extensions to ASCII, too.
In general, full conversions between the character codes mentioned above are not possible. For example, the Macintosh character repertoire contains the Greek letter pi, which does not exist in ISO Latin 1 at all. Naturally, a text can be converted (by a simple program which uses a conversion table) from Macintosh character code to ISO 8859-1 if the text contains only those characters which belong to the ISO Latin 1 character repertoire. Text presented in Windows character code can be used as such as ISO 8859-1 encoded data if it contains only those characters which belong to the ISO Latin 1 character repertoire.
Other "8-bit codes"
All the character codes discussed above are "8-bit codes": eight bits are sufficient for presenting the code numbers, and in practice the encoding (at least the normal encoding) is the obvious (trivial) one where each code position (thereby, each character) is presented as one octet (byte). This means that there are 256 code positions, but several positions are reserved for control codes or left unused (unassigned, undefined).
Although currently most "8-bit codes" are extensions to ASCII in the sense described above, this is just a practical matter caused by the widespread use of ASCII . It was practical to make the "lower halves" of the character codes the same, for several reasons.
The standards ISO 2022 and ISO 4873 define a general framework for 8-bit codes (and 7-bit codes) and for switching between them. One of the basic ideas is that code positions 128 - 159 (decimal) are reserved for use as control codes ("C1 controls"). Note that the Windows character sets do not comply with this principle.
To illustrate that other kinds of 8-bit codes can be defined than extensions to Ascii, we briefly consider the EBCDIC code, defined by IBM and once in widespread use on "mainframes" (and still in use). EBCDIC contains all ASCII characters but in quite different code positions . As an interesting detail, in EBCDIC the normal letters A - Z do not all appear in consecutive code positions. EBCDIC exists in different national variants (cf. the national variants of ASCII ). For more information on EBCDIC, see section IBM and EBCDIC in Johan W. van Wingen 's Character sets. Letters, tokens and codes. .
ISO 10646, UCS, and Unicode
ISO 10646, the standard
ISO 10646 (officially: ISO/IEC 10646) is an international standard, by ISO and IEC . It defines UCS, Universal Character Set, which is a very large and growing character repertoire , and a character code for it. Currently tens of thousands of characters have been defined, and new amendments are defined fairly often. It contains, among other things, all characters in the character repertoires discussed above. For a list of the character blocks in the repertoire, with examples of some of them, see the document UCS (ISO 10646, Unicode) character blocks .
The number of the standard intentionally reminds us of 646, the number of the ISO standard corresponding to ASCII .
Unicode, the more practical definition of UCS
Unicode is a standard , by the Unicode Consortium , which defines a character repertoire and character code intended to be fully compatible with ISO 10646, and an encoding for it. ISO 10646 is more general (abstract) in nature, whereas Unicode "imposes additional constraints on implementations to ensure that they treat characters uniformly across platforms and applications", as they say in section Unicode & ISO 10646 of the Unicode FAQ .
Unicode was originally designed to be a 16-bit code, but it was extended so that currently code positions are expressed as integers in the hexadecimal range 0..10FFFF (decimal 0..1 114 111). That space is divided into 16-bit "planes". Until recently, the use of Unicode has mostly been limited to "Basic Multilingual Plane (BMP)" consisting of the range 0..FFFF.
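The division of the code space into planes is simple arithmetic: the plane number is the code position divided by 0x10000 (65,536). A small Python sketch, using the euro sign and a character outside the BMP as illustrative examples:

```python
# Code positions run from 0 to 0x10FFFF; each "plane" is 0x10000 positions wide.
def plane(char: str) -> int:
    return ord(char) >> 16       # integer division by 0x10000

assert plane('\u20ac') == 0      # U+20AC EURO SIGN lies in the BMP (plane 0)
assert plane('\U0001D11E') == 1  # U+1D11E MUSICAL SYMBOL G CLEF, plane 1

assert ord('\U0010FFFF') == 0x10FFFF   # the largest valid code position
```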
The ISO 10646 and Unicode character repertoire can be regarded as a superset of most character repertoires in use. However, the code positions of characters vary from one character code to another.
"Unicode" is the commonly used name
In practice, people usually talk about Unicode rather than ISO 10646, partly because we prefer names to numbers, partly because Unicode is more explicit about the meanings of characters, partly because detailed information about Unicode is available on the Web (see below).
Unicode version 1.0 used somewhat different names for some characters than ISO 10646. In Unicode version 2.0, the names were made the same as in ISO 10646. New versions of Unicode are expected mostly to add new characters. Version 3.0 , with a total of 49,194 characters (38,887 in version 2.1), was published in February 2000, and version 4.0 has 96,248 characters.
Until recently, the ISO 10646 standard had not been put onto the Web. It is now available as a large (80 megabytes) zipped PDF file via the Publicly Available Standards page of ISO/IEC JTC1. It is also available in printed form from ISO member bodies . But for most practical purposes, the same information is in the Unicode standard.
General information about ISO 10646 and Unicode
For more information, see
- Unicode FAQ by the Unicode Consortium. It is fairly large but divided into sections rather logically, except that section Basic Questions would be better labeled as "Miscellaneous".
- Roman Czyborra's material on Unicode, such as Why do we need Unicode? and Unicode's characters
- Olle Järnefors: A short overview of ISO/IEC 10646 and Unicode . Very readable and informative, though somewhat outdated e.g. as regards to versions of Unicode . (It also contains a more detailed technical description of the UTF encodings than those given above.)
- Markus Kuhn : UTF-8 and Unicode FAQ for Unix/Linux . Contains helpful general explanations as well as practical implementation considerations.
- Steven J. Searle: A Brief History of Character Codes in North America, Europe, and East Asia . Contains a valuable historical review, including critical notes on the "unification" of Chinese, Japanese and Korean (CJK) characters.
- Alan Wood : Unicode and Multilingual Editors and Word Processors ; some software tools for actually writing Unicode; I'd especially recommend taking a look at the free UniPad editor (for Windows).
- Jukka K. Korpela: Unicode Explained . O’Reilly, 2006.
- Tony Graham: Unicode: A Primer . Wiley, 2000.
- Richard Gillam: Unicode Demystified: A Practical Programmer's Guide to the Encoding Standard . Addison-Wesley, 2002.
Reference information about ISO 10646 and Unicode
- Unicode 4.0 online : the standard itself, mostly in PDF format; it's partly hard to read, so you might benefit from my Guide to the Unicode standard , which briefly explains the structure of the standard and how to find information about a particular character there
- Unicode et ISO 10646 en français , the Unicode standard in French
- Unicode charts , containing names , code positions , and representative glyphs for the characters and notes on their usage. Available in PDF format, containing the same information as in the corresponding parts of the printed standard. (The charts were previously available in faster-access format too, as HTML documents containing the charts as GIF images. But this version seems to have been removed.)
- Unicode database , a large (over 460 000 octets ) plain text file listing Unicode character code positions , names , and defined character properties in a compact notation
- Informative annex E to ISO 10646-1:1993 (i.e., old version!), which lists, in alphabetic order, all character names (and the code positions ) except Hangul and CJK ideographs; useful for finding out the code position when you know the (right!) name of a character.
- An online character database by Indrek Hein at the Institute of the Estonian Language . You can e.g. search for Unicode characters by name or code position and get the Unicode equivalents of characters in many widely used character sets.
- How to find an &#number; notation for a character ; contains some additional information on how to find a Unicode number for a character
Encodings for Unicode
Originally, before extending the code range past 16 bits, the "native" Unicode encoding was UCS-2 , which presents each code number as two consecutive octets m and n so that the number equals 256m + n . This means, to express it in computer jargon, that the code number is presented as a two-byte integer . According to the Unicode consortium, the term UCS-2 should now be avoided, as it is associated with the 16-bit limitations.
UTF-32 encodes each code position as a 32-bit binary integer, i.e. as four octets. This is a very obvious and simple encoding. However, it is inefficient in terms of the number of octets needed. If we have normal English text or other text which contains ISO Latin 1 characters only, the length of the Unicode encoded octet sequence is four times the length of the string in ISO 8859-1 encoding. UTF-32 is rarely used, except perhaps in internal operations (since it is very simple for the purposes of string processing).
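The fixed four-octets-per-character property, and the resulting fourfold expansion for Latin 1 text, can be checked directly (Python's 'utf-32-be' codec is used here as an illustration of the big-endian octet order):

```python
# UTF-32: every code position becomes exactly four octets.
s = 'abc'                        # three ASCII characters
assert len(s.encode('utf-32-be')) == 4 * len(s)

# The code number is simply written out as a 32-bit integer:
assert 'A'.encode('utf-32-be') == b'\x00\x00\x00\x41'
```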
UTF-16 represents each code position in the Basic Multilingual Plane as two octets. Other code positions are presented using so-called surrogate pairs , utilizing some code positions in the BMP reserved for the purpose. This, too, is a very simple encoding when the data contains BMP characters only.
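The surrogate-pair construction is mechanical: subtract 0x10000 from the code position, put the top ten bits of the result into a high surrogate (0xD800 + offset) and the bottom ten bits into a low surrogate (0xDC00 + offset). A Python sketch of the computation, checked against the built-in codec:

```python
# A BMP character is two octets in UTF-16:
assert '\u20ac'.encode('utf-16-be') == b'\x20\xac'

# A character outside the BMP becomes a surrogate pair (four octets):
v = 0x1D11E - 0x10000                 # offset into the supplementary space
high = 0xD800 + (v >> 10)             # high (leading) surrogate
low = 0xDC00 + (v & 0x3FF)            # low (trailing) surrogate
assert (high, low) == (0xD834, 0xDD1E)
assert '\U0001D11E'.encode('utf-16-be') == b'\xd8\x34\xdd\x1e'
```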
Unicode can be, and often is, encoded in other ways, too, such as the following encodings:
- UTF-8
- Each code position is presented as a sequence of one to four octets. ASCII characters are presented as such, each as one octet, while all other characters are presented as sequences of two to four octets, each octet in the range 128 - 255. Thus UTF-8 encoded data that contains only ASCII characters is identical to the same data in ASCII encoding.
- UTF-7
- Each character code is presented as a sequence of one or more octets in the range 0 - 127 ("bytes with most significant bit set to 0", or "seven-bit bytes", hence the name). Most ASCII characters are presented as such, each as one octet, but for obvious reasons some octet values must be reserved for use as "escape" octets, specifying that the octet together with a certain number of subsequent octets forms a multi-octet encoded presentation of one character. There is an example of using UTF-7 later in this document.
IETF Policy on Character Sets and Languages (RFC 2277 ) clearly favors UTF-8 . It requires support for it in Internet protocols (and doesn't even mention UTF-7). Note that UTF-8 is efficient if the data consists dominantly of ASCII characters with just a few "special characters" in addition to them, and reasonably efficient for dominantly ISO Latin 1 text.
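This efficiency claim is easy to quantify: in UTF-8, each ASCII character costs one octet and each further ISO Latin 1 character costs two. A Python sketch, with a Finnish phrase as an illustrative sample:

```python
# ASCII characters are one octet each in UTF-8:
assert 'Hello'.encode('utf-8') == 'Hello'.encode('ascii')

# A non-ASCII Latin 1 character takes two octets:
assert 'ä'.encode('utf-8') == b'\xc3\xa4'

# So dominantly-ASCII text grows only slightly compared with ISO 8859-1:
text = 'Hyvää päivää'            # a few ä's among ASCII letters
extra = sum(1 for c in text if ord(c) > 127)   # one extra octet per such char
assert len(text.encode('utf-8')) == len(text.encode('iso-8859-1')) + extra
```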
Support to Unicode characters
The implementation of Unicode support is a long and mostly gradual process. Unicode can be supported by programs on any operating system, although some systems may allow much easier implementation than others; this mainly depends on whether the system uses Unicode internally so that support to Unicode is "built-in". In addition to international standards, there are company policies which define various subsets of the character repertoire. A practically important one is Microsoft's "Windows Glyph List 4" (WGL4) , or "PanEuropean" character set, characterized on Microsoft's page Character sets and codepages and excellently listed on page Using Special Characters from Windows Glyph List 4 (WGL4) in HTML by Alan Wood .
The U+nnnn notation
Unicode characters are often referred to using a notation of the form U+nnnn , where nnnn is a four-digit hexadecimal notation of the code value. For example, U+0020 means the space character (with code value 20 in hexadecimal, 32 in decimal). Notice that such notations identify a character through its Unicode code value, without referring to any particular encoding. There are other ways to mention (identify) a character , too.
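Since the notation is just the hexadecimal code value with a "U+" prefix, converting it to the character it names is a one-liner. A small Python sketch (the helper name is of course just illustrative):

```python
def from_u_notation(s: str) -> str:
    """Turn a 'U+nnnn' style notation into the character it names."""
    assert s.startswith('U+')
    return chr(int(s[2:], 16))   # parse the hex digits after 'U+'

assert from_u_notation('U+0020') == ' '      # the space character, code 32
assert ord(from_u_notation('U+00E4')) == 228 # ä, same code value as in Latin 1
```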
More about the character concept
An "A" (or any other character) is something like a Platonic entity: it is the idea of an "A" and not the "A" itself.-- Michael E. Cohen: Text and Fonts in a Multi-lingual Cross-platform World .
The character concept is very fundamental for the issues discussed here but difficult to define exactly. The more fundamental concepts we use, the harder it is to give good definitions. (How would you define "life"? Or "structure"?) Here we will concentrate on clarifying the character concept by indicating what it does not imply.
The Unicode view
The Unicode standard describes characters as "the smallest components of written language that have semantic value", which is somewhat misleading. A character such as a letter can hardly be described as having a meaning (semantic value) in itself. Moreover, a character such as ú (letter u with acute accent), which belongs to Unicode, can often be regarded as consisting of smaller components: a letter and a diacritic . And in fact the very definition of the character concept in Unicode is the following:
abstract character : a unit of information used for the organization, control, or representation of textual data.
Control characters (control codes)
The rôle of the so-called control characters in character codes is somewhat obscure. Character codes often contain code positions which are not assigned to any visible character but reserved for control purposes. For example, in communication between a terminal and a computer using the ASCII code, the computer could regard octet 3 as a request for terminating the currently running process. Some older character code standards contain explicit descriptions of such conventions whereas newer standards just reserve some positions for such usage, to be defined in separate standards or agreements such as "C0 controls" (tabulated in my document on ASCII control codes ) and "C1 controls" , or specifically ISO 6429 . And although the definition quoted above suggests that "control characters" might be regarded as characters in the Unicode terminology, perhaps it is more natural to regard them as control codes .
Control codes can be used for device control such as cursor movement, page eject, or changing colors. Quite often they are used in combination with codes for graphic characters, so that a device driver is expected to interpret the combination as a specific command and not display the graphic character(s) contained in it. For example, in the classical VT100 controls , ESC followed by the code corresponding to the letter "A" or something more complicated (depending on mode settings) moves the cursor up. To take a different example, the Emacs editor treats ESC A as a request to move to the beginning of a sentence. Note that the ESC control code is logically distinct from the ESC key in a keyboard, and many other things than pressing ESC might cause the ESC control code to be sent. Also note that phrases like "escape sequences" are often used to refer to things that don't involve ESC at all and operate at a quite different level. Bob Bemer , the inventor of ESC, has written a "vignette" about it: That Powerful ESCAPE Character -- Key and Sequences .
One possible form of device control is changing the way a device interprets the data (octets) that it receives. For example, a control code followed by some data in a specific format might be interpreted so that any subsequent octets are to be interpreted according to a table identified in some specific way. This is often called "code page switching", and it means that control codes could be used to change the character encoding . And it is then more logical to consider the control codes and associated data at the level of fundamental interpretation of data rather than direct device control. The international standard ISO 2022 defines powerful facilities for using different 8-bit character codes in a document.
Widely used formatting control codes include carriage return (CR), linefeed (LF), and horizontal tab (HT), which in ASCII occupy code positions 13, 10, and 9. The names (or abbreviations) suggest generic meanings, but the actual meanings are defined partly in each character code definition, partly - and more importantly - by various other conventions "above" the character level. The "formatting" codes might be seen as a special case of device control, in a sense, but more naturally, a CR or a LF or a CR LF pair (to mention the most common conventions) when used in a text file simply indicates a new line. As regards to control codes used for line structuring, see Unicode technical report #13 Unicode Newline Guidelines . See also my Unicode line breaking rules: explanations and criticism . The HT (TAB) character is often used for real "tabbing" to some predefined writing position. But it is also used e.g. for indicating data boundaries, without any particular presentational effect, for example in the widely used "tab separated values" (TSV ) data format.
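The conventions around CR, LF and HT can be illustrated with a short Python sketch: the common newline conventions all delimit logical lines, and HT serves as a plain data separator in the TSV format mentioned above.

```python
# The common newline conventions (CR LF, LF) both delimit logical lines:
data = 'first\r\nsecond\nthird'
assert data.splitlines() == ['first', 'second', 'third']

# HT (TAB) as a data separator, as in the "tab separated values" format:
row = 'name\tsize\tdate'
assert row.split('\t') == ['name', 'size', 'date']

# The ASCII code positions of CR, LF and HT:
assert (ord('\r'), ord('\n'), ord('\t')) == (13, 10, 9)
```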
A control code, or a "control character", cannot have a graphic presentation (a glyph ) in the same way as normal characters have. However, in Unicode there is a separate block Control Pictures which contains characters that can be used to indicate the presence of a control code. They are of course quite distinct from the control codes they symbolize - U+241B symbol for escape is not the same as U+001B escape !
On the other hand, a control code might occasionally be displayed, by some programs, in a visible form, perhaps describing the control action rather than the code. For example, upon receiving octet 3 in the example situation above, a program might echo back (onto the terminal) *** or INTERRUPT or ^C . All such notations are program-specific conventions.
Some control codes are sometimes named in a manner which seems to bind them to characters. In particular, control codes 1, 2, 3, ... are often called control-A, control-B, control-C, etc. (or CTRL-A or C-A or whatever). This is associated with the fact that on many keyboards, control codes can be produced (for sending to a computer) using a special key labeled "Control" or "Ctrl" or "CTR" or something like that together with the letter keys A, B, C, ... This in turn is related to the fact that the code numbers of characters and control codes have been assigned so that the code of "Control-X " is obtained from the code of the upper case letter X by a simple operation (subtracting 64 decimal). But such things imply no real relationships between letters and control codes. The control code 3, or "Control-C", is not a variant of letter C at all, and its meaning is not associated with the meaning of C.
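The subtract-64 relationship is easy to verify in code. A minimal Python sketch (the helper name is illustrative):

```python
def control_code(letter: str) -> int:
    """Code of 'Control-X' for an upper case letter X: subtract 64 (decimal)."""
    return ord(letter) - 64

assert control_code('C') == 3    # "Control-C"
assert control_code('A') == 1    # "Control-A"
assert control_code('[') == 27   # ESC can be produced as "Control-["
```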
A glyph - a visual appearance
It is important to distinguish the character concept from the glyph concept. A glyph is a presentation of a particular shape which a character may have when rendered or displayed. For example, the character Z might be presented as a boldface Z or as an italic Z , and it would still be a presentation of the same character. On the other hand, lower-case z is defined to be a separate character - which in turn may have different glyph presentations.
This is ultimately a matter of definition : a definition of a character repertoire specifies the "identity" of characters, among other things. One could define a repertoire where uppercase Z and lowercase z are just two glyphs for the same character. On the other hand, one could define that italic Z is a character different from normal Z, not just a different glyph for it. In fact, in Unicode for example there are several characters which could be regarded as typographic variants of letters only, but for various reasons Unicode defines them as separate characters. For example, mathematicians use a variant of letter N to denote the set of natural numbers (0, 1, 2, ...), and this variant is defined as being a separate character ("double-struck capital N") in Unicode. There are some more notes on the identity of characters below.
The design of glyphs has several aspects, both practical and esthetic. For an interesting review of a major company's description of its principles and practices, see Microsoft's Character design standards (in its typography pages ).
Some discussions, such as ISO 9541-1 and ISO/IEC TR 15285, make a further distinction between "glyph image", which is an actual appearance of a glyph, and "glyph", which is a more abstract notion. In such an approach, "glyph" is close to the concept of "character", except that a glyph may present a combination of several characters. Thus, in that approach, the abstract characters "f" and "i" might be represented using an abstract glyph that combines the two characters into a ligature, which itself might have different physical manifestations. Such approaches need to be treated as different from the issue of treating ligatures as (compatibility) characters.
What's in a name?
The names of characters are assigned identifiers rather than definitions. Typically the names are selected so that they contain only letters A - Z, spaces, and hyphens; often the uppercase variant is the reference spelling of a character name. (See naming guidelines of the UCS .) The same character may have different names in different definitions of character repertoires. Generally the name is intended to suggest a generic meaning and scope of use. But the Unicode standard warns (mentioning full stop as an example of a character with varying usage):
A character may have a broader range of use than the most literal interpretation of its name might indicate; coded representation, name, and representative glyph need to be taken in context when establishing the semantics of a character.
Glyph variation
When a character repertoire is defined (e.g. in a standard), some particular glyph is often used to describe the appearance of each character, but this should be taken as an example only. The Unicode standard specifically says (in section 3.2) that great variation is allowed between "representative glyph" appearing in the standard and a glyph used for the corresponding character:
Consistency with the representative glyph does not require that the images be identical or even graphically similar; rather, it means that both images are generally recognized to be representations of the same character. Representing the character U+0061 Latin small letter a by the glyph "X" would violate its character identity.
Thus, the definition of a repertoire is not a matter of just listing glyphs , but neither is it a matter of defining exactly the meanings of characters. It's actually an exception rather than a rule that a character repertoire definition explicitly says something about the meaning and use of a character.
Possibly some specific properties (e.g. being classified as a letter or having a numeric value in the sense that digits have) are defined, as in the Unicode database , but such properties are rather general in nature.
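Python's standard unicodedata module exposes some of these database properties; a small illustration (the specific sample characters are my own picks):

```python
import unicodedata

# General category: letter, digit, symbol, etc.
print(unicodedata.category("Z"))  # 'Lu' - uppercase letter
print(unicodedata.category("3"))  # 'Nd' - decimal digit

# Numeric value, in the sense that digits (and some other characters) have one:
print(unicodedata.numeric("3"))       # 3.0
print(unicodedata.numeric("\u00b2"))  # 2.0 - superscript two
```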
This vagueness may sound irritating, and it often is. But an essential point to be noted is that quite a lot of information is implied . You are expected to deduce what the character is, using both the character name and its representative glyph, and perhaps context too, like the grouping of characters under different headings like "currency symbols".
For more information on the glyph concept, see the document An operational model for characters and glyphs (ISO/IEC TR 15285:1998) and Apple's document Characters, Glyphs, and Related Terms
Fonts
A repertoire of glyphs comprises a font. In a more technical sense, as the implementation of a font, a font is a numbered set of glyphs. The numbers correspond to code positions of the characters (presented by the glyphs). Thus, a font in that sense is character code dependent. An expression like "Unicode font" refers to such issues and does not imply that the font contains glyphs for all Unicode characters.
It is possible that a font which is used for the presentation of some character repertoire does not contain a different glyph for each character. For example, although characters such as Latin uppercase A, Cyrillic uppercase A, and Greek uppercase alpha are regarded as distinct characters (with distinct code values) in Unicode , a particular font might contain just one A which is used to present all of them. (For information about fonts, there is a very large comp.font FAQ , but it's rather old: last update in 1996. The Finding Fonts for Internationalization FAQ is dated, too.)
You should never use a character just because it "looks right" or "almost right". Characters with quite different purposes and meanings may well look similar, or almost similar, in some fonts at least. Using a character as a surrogate for another for the sake of apparent similarity may lead to great confusion. Consider, for example, the so-called sharp s (eszett, ß).
For some more explanations on this, see section Why should we be so strict about meanings of characters? in The ISO Latin 1 character repertoire - a description with usage notes .
Identity of characters: a matter of definition
The identity of characters is defined by the definition of a character repertoire . Thus, it is not an absolute concept but relative to the repertoire; some repertoire might contain a character with mixed usage while another defines distinct characters for the different uses. For instance, the ASCII repertoire has a character called hyphen . It is also used as a minus sign (as well as a substitute for a dash, since ASCII contains no dashes). Thus, that ASCII character is a generic, multipurpose character, and one can say that in ASCII hyphen and minus are identical. But in Unicode , there are distinct characters named "hyphen" and "minus sign" (as well as different dash characters). For compatibility, the old ASCII character is preserved in Unicode, too (in the old code position, with the name hyphen-minus ).
Similarly, as a matter of definition, Unicode defines characters for micro sign , n-ary product , etc., as distinct from the Greek letters (small mu, capital pi, etc.) they originate from. This is a logical distinction and does not necessarily imply that different glyphs are used. The distinction is important e.g. when textual data in digital form is processed by a program (which "sees" the code values, through some encoding, and not the glyphs at all). Notice that Unicode does not make any distinction e.g. between the greek small letter pi (π) and the mathematical symbol pi denoting the well-known constant 3.14159... (i.e. there is no separate symbol for the latter). For the ohm sign (Ω), there is a specific character (in the Symbols Area), but it is defined as being canonically equivalent to greek capital letter omega (Ω), i.e. there are two separate characters, but they are equivalent. On the other hand, Unicode makes a distinction between greek capital letter pi (Π) and the mathematical symbol n-ary product (∏), so that they are not equivalents.
If you think this doesn't sound quite logical, you are not the only one to think so. But the point is that for symbols resembling Greek letter and used in various contexts, there are three possibilities in Unicode:
- the symbol is regarded as identical to the Greek letter (just as its particular usage )
- the symbol is included as a separate character but only for compatibility and as compatibility equivalent to the Greek letter
- the symbol is regarded as a completely separate character.
You need to check the Unicode references for information about each individual symbol. Note in particular that a query to Indrek Hein's online character database will give such information in the decomposition info part (but only in the entries for compatibility characters!). As a rough rule of thumb about symbols looking like Greek letters, mathematical operators (like summation) exist as independent characters whereas symbols of quantities and units (like pi and ohm) are equivalent or identical to Greek letters.
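These distinctions can also be observed programmatically. The sketch below uses Python's unicodedata module to show the canonical equivalence of the ohm sign to capital omega, and the independence of n-ary product:

```python
import unicodedata

ohm, omega = "\u2126", "\u03a9"
print(unicodedata.name(ohm))    # OHM SIGN
print(unicodedata.name(omega))  # GREEK CAPITAL LETTER OMEGA
# Canonical equivalence: normalization turns the ohm sign into omega.
print(unicodedata.normalize("NFC", ohm) == omega)  # True

# n-ary product is a completely separate character: no decomposition at all.
print(unicodedata.decomposition("\u220f"))  # '' (empty string)
```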
Failures to display a character
In addition to the fact that the appearance of a character may vary, it is quite possible that some program fails to display a character at all. Perhaps the program cannot interpret a particular way in which the character is presented. The reason might simply be that some program-specific way had been used to denote the character and a different program is in use now. (This happens quite often even if "the same" program is used; for example, Internet Explorer version 4.0 is able to recognize &alpha; as denoting the Greek letter alpha (α) but IE 3.0 is not and displays the notation literally.) And naturally it often occurs that a program does not recognize the basic character encoding of the data, either because it was not properly informed about the encoding according to which the data should be interpreted or because it has not been programmed to handle the particular encoding in use.
But even if a program recognizes some data as denoting a character, it may well be unable to display it since it lacks a glyph for it. Often it will help if the user manually checks the font settings, perhaps manually trying to find a rich enough font. (Advanced programs could be expected to do this automatically and even to pick up glyphs from different fonts, but such expectations are mostly unrealistic at present.) But it's quite possible that no such font can be found. As an important detail, the possibility of seeing e.g. Greek characters on some Windows systems depends on whether "internationalization support" has been installed.
A well-designed program will in some appropriate way indicate its inability to display a character. For example, a small rectangular box, the size of a character, could be used to indicate that there is a character which was recognized but cannot be displayed. Some programs use a question mark, but this is risky - how is the reader expected to distinguish such usage from the real "?" character?
Linear text vs. mathematical notations
Although several character repertoires, most notably that of ISO 10646 and Unicode, contain mathematical and other symbols, the presentation of mathematical formulas is essentially not a character level problem. At the character level, symbols like integration or n-ary summation can be defined and their code positions and encodings defined, and representative glyphs shown, and perhaps some usage notes given. But the construction of real formulas, e.g. for a definite integral of a function, is a different thing, no matter whether one considers formulas abstractly (how the structure of the formula is given) or presentationally (how the formula is displayed on paper or on screen). To mention just a few approaches to such issues, the TeX system is widely used by mathematicians to produce high-quality presentations of formulas, and MathML is an ambitious project for creating a markup language for mathematics so that both structure and presentation can be handled.
In other respects, too, character standards usually deal with plain text only. Other structural or presentational aspects, such as font variation, are to be handled separately. However, there are characters which would now be considered as differing in font only but for historical reasons regarded as distinct.
Compatibility characters
There is a large number of compatibility characters in ISO 10646 and Unicode which are variants of other characters. They were included for compatibility with other standards so that data presented using some other code can be converted to ISO 10646 and back without losing information. The Unicode standard says (in section 2.4):
Compatibility characters are those that would not have been encoded except for compatibility and round-trip convertibility with other standards. They are variants of characters that already have encodings as normal (that is, non-compatibility) characters in the Unicode Standard.
There is a large number of compatibility characters in the Compatibility Area, but compatibility characters are also scattered around the rest of the Unicode space.
Many, but not all, compatibility characters have compatibility decompositions . The Unicode database contains, for each character, a field (the sixth one) which specifies its compatibility decomposition, if any.
Thus, to take a simple example, superscript two (²) is an ISO Latin 1 character with its own code position in that standard. In the ISO 10646 way of thinking, it would have been treated as just a superscript variant of digit two . But since the character is contained in an important standard, it was included into ISO 10646, though only as a "compatibility character". The practical reason is that now one can convert from ISO Latin 1 to ISO 10646 and back and get the original data. This does not mean that in the ISO 10646 philosophy superscripting (or subscripting, italics, bolding etc.) would be irrelevant; rather, they are to be handled at another level of data presentation, such as some special markup .
There is a document titled Unicode in XML and other Markup Languages, produced jointly by the World Wide Web Consortium (W3C) and the Unicode Consortium. It discusses, among other things, characters with compatibility mappings: should they be used, or should the corresponding non-compatibility characters be used, perhaps with some markup and/or style sheet that corresponds to the difference between them? The answers depend on the nature of the characters and the available markup and styling techniques. For example, for superscripts, the use of sup markup (as in HTML) is recommended, i.e. <sup>2</sup> is preferred over &sup2;. This is a debatable issue; see my notes on sup and sub markup.
The definition of Unicode indicates our sample character, superscript two , as a compatibility character with the compatibility decomposition "<super> + 0032 2". Here "<super>" is a semi-formal way of referring to what is considered as typographic variation, in this case superscript style, and "0032 2" shows the hexadecimal code of a character and the character itself.
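In Python's unicodedata module, the same compatibility decomposition is visible directly, with the "<super>" tag appearing just as in the Unicode database:

```python
import unicodedata

print(unicodedata.decomposition("\u00b2"))      # '<super> 0032'
# Compatibility normalization (NFKC) applies the decomposition:
print(unicodedata.normalize("NFKC", "\u00b2"))  # '2'
# Canonical normalization (NFC) leaves the compatibility character alone:
print(unicodedata.normalize("NFC", "\u00b2"))   # '²'
```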
Some compatibility characters have compatibility decompositions consisting of several characters. Due to this property, they can be said to represent ligatures in the broad sense. For example, latin small ligature fi (U+FB01) has the obvious decomposition consisting of letters "f" and "i". It is still a distinct character in Unicode, but in the spirit of Unicode, we should not use it except for storing and transmitting existing data which contains that character. Generally, ligature issues should be handled outside the character level, e.g. selected automatically by a formatting program or indicated using some suitable markup.
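A compatibility normalization makes the decomposition of the fi ligature concrete (a small Python sketch):

```python
import unicodedata

fi = "\ufb01"
print(unicodedata.name(fi))                    # LATIN SMALL LIGATURE FI
print(unicodedata.normalize("NFKD", fi))       # 'fi' - two ordinary letters
print(len(unicodedata.normalize("NFKD", fi)))  # 2
```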
Note that the word ligature can be misleading when it appears in a character name. In particular, the old name of the character "æ", latin small letter ae (U+00E6), is latin small ligature ae, but it is not a ligature of "a" and "e" in the sense described above. It has no compatibility decomposition.
In comp.fonts FAQ, General Info (2/6) section 1.15 Ligatures , the term ligature is defined as follows:
A ligature occurs where two or more letterforms are written or printed as a unit. Generally, ligatures replace characters that occur next to each other when they share common components. Ligatures are a subset of a more general class of figures called "contextual forms."
Compositions and decompositions
A diacritic mark , i.e. an additional graphic such as an accent or cedilla attached to a character, can be treated in different ways when defining a character repertoire. See some historical notes on this in my description of ISO Latin 1 . It also explains why the so-called spacing diacritic marks are of very limited usefulness, except when taken into some secondary usage.
In the Unicode approach, there are separate characters called combining diacritical marks . The general idea is that you can express a vast set of characters with diacritics by representing them so that a base character is followed by one or more (!) combining (non-spacing) diacritic marks. And a program which displays such a construct is expected to do rather clever things in formatting, e.g. selecting a particular shape for the diacritic according to the shape of the base character. This requires Unicode support at implementation level 3. Most programs currently in use are totally incapable of doing anything meaningful with combining diacritic marks. But there is some simple support for them in Internet Explorer for example, though you would need a font which contains the combining diacritics (such as Arial Unicode MS); then IE can handle simple combinations reasonably. See the test page for combining diacritic marks in Alan Wood's Unicode resources . Regarding advanced implementation of the rendering of characters with diacritic marks, consult Unicode Technical Note #2, A General Method for Rendering Combining Marks .
Using combining diacritic marks, we have a wide range of possibilities. We can put, say, a diaeresis on a gamma, although "Greek small letter gamma with diaeresis" does not exist as a character. The combination U+03B3 U+0308 consists of two characters, although its visual presentation looks like a single character in the same sense as "ä" looks like a single character. This is how your browser displays the combination: "γ̈". In most browsing situations at present, it probably isn't displayed correctly; you might see e.g. the letter gamma followed by a box that indicates a missing glyph, or you might see gamma followed by a diaeresis shown separately (¨).
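The gamma-with-diaeresis combination above can be built in Python; note that, since no precomposed form exists, normalization cannot shrink it to one character:

```python
import unicodedata

combo = "\u03b3\u0308"  # greek small letter gamma + combining diaeresis
print(combo)       # rendering quality depends on the font in use
print(len(combo))  # 2 - still two characters
# No precomposed "gamma with diaeresis" exists, so NFC leaves it as two:
print(len(unicodedata.normalize("NFC", combo)))  # 2
```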
Thus, in practical terms, in order to use a character with a diacritic mark, you should primarily try to find it as a precomposed character. A precomposed character, also called a composite character or decomposable character, is one that has a code position (and thereby identity) of its own but is in some sense equivalent to a sequence of other characters. There are lots of them in Unicode, and they cover the needs of most (but not all) languages of the world, but not e.g. the presentation of the International Phonetic Alphabet (IPA) which, in its general form, requires several different diacritic marks. For example, the character latin small letter a with diaeresis (U+00E4, ä) is, by Unicode definition, decomposable to the sequence of the two characters latin small letter a (U+0061) and combining diaeresis (U+0308).
This is at present mostly a theoretic possibility. Generally, by decomposing all decomposable characters one could in many cases simplify the processing of textual data (and the resulting data might be converted back to a format using precomposed characters). See e.g. the working draft Character Model for the World Wide Web.
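The ä example can be decomposed and recomposed with Python's normalization functions, which is essentially the round trip just described:

```python
import unicodedata

a_uml = "\u00e4"  # precomposed: latin small letter a with diaeresis
decomposed = unicodedata.normalize("NFD", a_uml)
print([hex(ord(c)) for c in decomposed])  # ['0x61', '0x308']
# Recomposing restores the precomposed character:
print(unicodedata.normalize("NFC", decomposed) == a_uml)  # True
```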
Typing characters
Just pressing a key?
Typing characters on a computer may appear deceptively simple: you press a key labeled "A", and the character "A" appears on the screen. Well, you actually get uppercase "A" or lowercase "a" depending on whether you used the shift key or not, but that's common knowledge. You also expect "A" to be included into a disk file when you save what you are typing, you expect "A" to appear on paper if you print your text, and you expect "A" to be sent if you send your product by E-mail or something like that. And you expect the recipient to see an "A".
Thus far, you should have learned that the presentation of a character in computer storage or disk or in data transfer may vary a lot. You have probably realized that especially if it's not the common "A" but something more special (say, an "A" with an accent), strange things might happen, especially if data is not accompanied with adequate information about its encoding .
But you might still be too confident. You probably expect that on your system at least things are simpler than that. If you use your very own very personal computer and press the key labeled "A" on its keyboard, then shouldn't it be evident that in its storage and processor, on its disk, on its screen it's invariably "A"? Can't you just ignore its internal character code and character encoding? Well, probably yes - with "A". I wouldn't be so sure about "Ä", for instance. (On Windows systems, for example, DOS mode programs differ from genuine Windows programs in this respect; they use a DOS character code .)
When you press a key on your keyboard , then what actually happens is this. The keyboard sends the code of a character to the processor. The processor then, in addition to storing the data internally somewhere, normally sends it to the display device. (For more details on this, as regards one common situation, see Example: What Happens When You Press A Key in The PC Guide .) Now, the keyboard settings and the display settings might be different from what you expect. Even if a key is labeled "Ä", it might send something else than the code of "Ä" in the character code used in your computer. Similarly, the display device, upon receiving such a code, might be set to display something different. Such mismatches are usually undesirable, but they are definitely possible.
Moreover, there are often keyboard restrictions . If your computer uses internally, say, ISO Latin 1 character repertoire, you probably won't find keys for all 191 characters in it on your keyboard. And for Unicode , it would be quite impossible to have a key for each character! Different keyboards are used, often according to the needs of particular languages. For example, keyboards used in Sweden often have a key for the å character but seldom a key for ñ ; in Spain the opposite is true. Quite often some keys have multiple uses via various "composition" keys, as explained below. For an illustration of the variation, as well as to see what layout might be used in some environments, see
- International Keyboards at Terena (contains some errors)
- Keyboard layouts by HermesSOFT
- Alternative Keyboard Layouts at USCC
- Keyboard layouts documented by Mark Leisher ; contains several layouts for "exotic" languages too
- The interactive Windows Layouts page by Microsoft ; requires Internet Explorer with JavaScript enabled. (Actually, using it I found out new features in the Finnish keyboard I have: I can use Alt Gr m to produce the micro sign µ, although there is no hint about this in the "m" key itself.)
In several systems, including MS Windows, it is possible to switch between different keyboard settings. This means that the effects of different keys do not necessarily correspond to the engravings in the key caps but to some other assignments. To ease typing in such situations, "virtual keyboards" can be used. This means that an image of a keyboard is visible on the screen, letting the user type characters by clicking on keys in it or using the information to see the current assignments of the keys of the physical keyboard. For the Office software on Windows systems, there is a free add-in available for this: Microsoft Visual Keyboard .
Program-specific methods for typing characters
Thus, you often need program-specific ways of entering characters from a keyboard, either because there is no key for a character you need or there is but it does not work (properly). The program involved might be part of system software, or it might be an application program. Three important examples of such ways:
- On Windows systems, you can (usually - some application programs may override this) produce any character in the Windows character set (naturally, in its Windows encoding) as follows: Press down the (left) Alt key and keep it down. Then type, using the separate numeric keypad (not the numbers above the letter keys!), the four-digit code of the character in decimal. Finally release the Alt key. Notice that the first digit is always 0, since the code values are in the range 32 - 255 (decimal). For instance, to produce the letter "Ä" (which has code 196 in decimal), you would press Alt down, type 0196 and then release Alt. Upon releasing Alt, the character should appear on the screen. In MS Word, the method works only if Num Lock is set. This method is often referred to as Alt-0nnn . (If you omit the leading zero, i.e. use Alt-nnn , the effect is different , since that way you insert the character in code position nnn in the DOS character code ! For example, Alt-196 would probably insert a graphic character which looks somewhat like a hyphen. There are variations in the behavior of various Windows programs in this area, and using those DOS codes is best avoided.)
- In the Emacs editor (which is popular especially on Unix systems), you can produce any ISO Latin 1 character by typing first control-Q, then its code as a three-digit octal number. To produce "Ä", you would thus type control-Q followed by the three digits 304 (and expect the "Ä" character to appear on screen). This method is often referred to as C-Q-nnn . (There are other ways of entering many ISO Latin 1 characters in Emacs , too.)
- Text processing programs often modify user input e.g. so that when you have typed the three characters "(", "c", and ")", the program changes, both internally and visibly, that string to the single character "©". This is often convenient, especially if you can add your own rules for modifications, but it causes unpleasant surprises and problems when you actually meant what you wrote, e.g. wanted to write letter "c" in parentheses.
- Programs often process some keyboard key combinations , typically involving the use of an Alt or Alt Gr key or some other "composition key", by converting them to special characters. In fact, even the well-known shift key is a composition key: it is used to modify the meaning of another key, e.g. by changing a letter to uppercase or turning a digit key to a special character key. Such things are not just "program-specific"; they also depend on the program version and settings (and on the keyboard, of course), and could well be user-modifiable. For example, in order to support the euro sign , various methods have been developed, e.g. by Microsoft so that pressing the "e" key while keeping the Alt Gr key pressed down might produce the euro sign - in some encoding ! But this may require a special "euro update", and the key combinations vary even when we consider Microsoft products only. So it would be quite inappropriate to say e.g. "to type the euro, use AltGr+e" as general, unqualified advice.
The "Alt" and "Alt Gr" keys mentioned above are not present on all keyboards, and often they both carry the text "Alt" but they can be functionally different! Typically, those keys are on the left and on the right of the space bar. It depends on the physical keyboard what the key cap texts are, and it depends on the keyboard settings whether the keys have the same effect or different effects. The name "Alt Gr" for "right Alt" is short for "alternate graphic", and it's mostly used to create additional characters, whereas (left) "Alt" is typically used for keyboard access to menus.
The last method above could often be called "device dependent" rather than program specific, since the program that performs the conversion might be a keyboard driver . In that case, normal programs would have all their input from the keyboard processed that way. This method may also involve the use of auxiliary keys for typing characters with diacritic marks such as "á ". Such an auxiliary key is often called dead key , since just pressing it causes nothing; it works only in combination with some other key. A more official name for a dead key is modifier key . For example, depending on the keyboard and the driver, you might be able to produce "á" by pressing first a key labeled with the acute accent (´), then the "a" key.
My keyboard has two keys for such purposes. There's the accent key, with the acute accent and the grave accent (`) as "upper case" character, meaning I need to use the shift key for the grave. And there's a key with the dieresis (¨) and the circumflex (^) above it (i.e. as "upper case") and the tilde (~) below or left to it (meaning I need to use Alt Gr for it), so I can produce ISO Latin 1 characters with those diacritics. Note that this does not involve any operation on the characters ´`¨^~, and the keyboard does not send those characters at all in such situations. If I try to enter that way a character outside the ISO Latin 1 repertoire, I get just the diacritic as a separate character followed by the normal character, e.g. "^j". To enter the diacritic itself, such as the tilde (~) , I may need to press the space bar so that the tilde diacritic combines with the blank (producing ~) instead of a letter (producing e.g. "ã"). Your situation may well be different, in part or entirely. For example, a typical French keyboard has separate keys for those accented letters that are used in French (e.g. "à"), but the accents themselves can be difficult to produce. You might need to type AltGr è followed by a space to produce the grave accent `.
"Escape" notations ("meta notations") for characters
It is often possible to use various "escape" notations for characters. This rather vague term means notations which are afterwards converted to (or just displayed as) characters according to some specific rules by some programs. They depend on the markup, programming, or other language (in a broad but technical meaning for "language", so that data formats can be included but human languages are excluded). If different languages have similar conventions in this respect, a language designer may have picked up a notation from an existing language, or it might be a coincidence.
The phrase "escape notations" or even "escapes" for short is rather widespread, and it reflects the general idea of escaping from the limitations of a character repertoire or device or protocol or something else. So it's used here, although a name like meta notations might be better. It is in any case essential to distinguish these notations from the use of the ESC (escape) control code in ASCII and other character codes.
Examples:
- In the PostScript language, characters have names, such as Adieresis for Ä, which can be used to denote them according to certain rules.
- In the RTF data format, the notation \'c4 is used to denote Ä.
- In TeX systems, there are different ways of producing characters, possibly depending on the "packages" used. Examples of ways to produce Ä: \"A, \symbol{196}, \char'0304, \capitaldieresis{A} (for a large list, consult The Comprehensive LaTeX Symbol List).
- In the HTML language one can use the notation &Auml; for the character Ä. In the official HTML terminology, such notations are called entity references (denoting characters). It depends on HTML version which entities are defined, and it depends on a browser which entities are actually supported.
- In HTML, one can also use the notation &#196; for the character Ä. Generally, in any SGML based system, or "SGML application" as the jargon goes, a numeric character reference (or, actually, just character reference) of the form &#number; can be used, and it refers to the character which is in code position number in the character code defined for the "SGML application" in question. This is actually very simple: you specify a character by its index (position, number). But in SGML terminology, the character code which determines the interpretation of &#number; is called, quite confusingly, the document character set. For HTML, the "document character set" is ISO 10646 (or, to be exact, a subset thereof, depending on HTML version). A most essential point is that for HTML, the "document character set" is completely independent of the encoding of the document! (See Alan J. Flavell's Notes on Internationalization.) The so-called character entity references like &Auml; in HTML can be regarded as symbolic names defined for some numeric character references. In XML, character references use ISO 10646 by language definition. Although both entity and character references are markup, to be used in markup languages, they are often replaced by the corresponding characters when a user types text on an Internet discussion forum. This might be a conscious decision by the forum designer, but quite often it happens unintentionally.
- In CSS, you can present a character as a backslash followed by its code number in hexadecimal notation, e.g. \C4 for Ä.
- In the C programming language, one can usually write \304 (using the octal code number) to denote Ä within a string constant, although this makes the program character code dependent.
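The HTML notations in the list above can be checked with any HTML-aware tool; for instance, with Python's standard html module (Python is used here purely for illustration; it is not part of the tutorial):

```python
import html

# A numeric character reference and the corresponding character
# entity reference both denote the same character.
assert html.unescape("&#196;") == "Ä"   # decimal code position
assert html.unescape("&#xC4;") == "Ä"   # hexadecimal form
assert html.unescape("&Auml;") == "Ä"   # symbolic entity name
```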
As you can see, the notations typically involve some (semi-)mnemonic name or the code number of the character, in some number system. (The ISO 8859-1 code number for our example character Ä is 196 in decimal, 304 in octal, C4 in hexadecimal.) And there is some method of indicating that the letters or digits are not to be taken as such but as part of a special notation denoting a character. Often some specific character such as the backslash \ is used as an "escape character". This implies that such a character cannot be used as such in the language or format but must itself be "escaped"; for example, to include the backslash itself into a string constant in C, you need to write it twice (\\).
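The number-system equivalences mentioned above are easy to verify; a minimal Python check (illustration only):

```python
# The ISO 8859-1 code number of Ä, in three number systems.
assert 196 == 0o304 == 0xC4
assert bytes([196]).decode("iso-8859-1") == "Ä"

# Escaping the escape character: "\\" in source code is a
# single backslash character in the resulting string.
assert len("\\") == 1
```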
In cases like these, the character itself does not occur in a file (such as an HTML document or a C source program). Instead, the file contains the "escape" notation as a character sequence, which will then be interpreted in a specific way by programs like a Web browser or a C compiler. One can in a sense regard the "escape notations" as encodings used in specific contexts upon specific agreements.
There are also "escape notations" which are to be interpreted by human readers directly. For example, when sending E-mail one might use A" (letter A followed by a quotation mark) as a surrogate for Ä (letter A with dieresis), or one might use AE instead of Ä. The reader is assumed to understand that e.g. A" on display actually means Ä. Quite often the purpose is to use ASCII characters only, so that the typing, transmission, and display of the characters is "safe". But this typically means that text becomes very messy; the Finnish word Hämäläinen does not look too good or readable when written as Ha"ma"la"inen or Haemaelaeinen . Such usage is based on special (though often implicit) conventions and can cause a lot of confusion when there is no mutual agreement on the conventions, especially because there are so many of them. (For example, to denote letter a with acute accent, á, a convention might use the apostrophe, a', or the solidus, a/, or the acute accent, a´, or something else.)
There is an old proposal by K. Simonsen, Character Mnemonics & Character Sets , published as RFC 1345 , which lists a large number of "escape notations" for characters. They are very short, typically two characters, e.g. A: for Ä and th for þ (thorn). Naturally there's the problem that the reader must know whether e.g. th is to be understood that way or as two letters t and h. So the system is primarily for referring to characters (see below), but under suitable circumstances it could also be used for actually writing texts, when the ambiguities can somehow be removed by additional conventions or by context. RFC 1345 cannot be regarded as official or widely known, but if you need, for some applications, an "escape scheme", you might consider using those notations instead of reinventing the wheel.
How to mention (identify) a character
There are also various ways to identify a character when it cannot be used as such or when the appearance of a character is not sufficient identification. This might be regarded as a variant of the "escape notations for human readers" discussed above, but the pragmatic view is different here. We are not primarily interested in using characters in running text but in specifying which character is being discussed.
For example, when discussing the Cyrillic letter that resembles the Latin letter E (and may have an identical or very similar glyph , and is transliterated as E according to ISO 9 ), there are various options:
- "Cyrillic E"; this is probably intuitively understandable in this case, and can be seen as referring either to the similarity of shape or to the transliteration equivalence; but in the general case these interpretations do not coincide, and the method is otherwise vague too
- "
U+0415"; this is a unique identification but requires the reader to know the idea of
U+nnnn notations
- "cyrillic capital letter ie " (using the official Unicode name ) or "cyrillic IE" (using an abridged version); one problem with this is that the names can be long even if simplified, and they still cannot be assumed to be universally known even by people who recognize the character
- "KE02", which uses the special notation system defined in ISO 7350 ; the system uses a compact notation and is marginally mnemonic (K = kirillica 'Cyrillics'; the numeric codes indicate small/capital letter variation and the use of diacritics )
- any of the "escape" notations discussed above, such as "
E=" by RFC 1345 or "
Е" in HTML; this can be quite adequate in a context where the reader can be assumed to be familiar with the particular notation.
Information about encoding
The need for information about encoding
It is hopefully obvious from the preceding discussion that a sequence of octets can be interpreted in a multitude of ways when processed as character data. By looking at the octet sequence only, you cannot even know whether each octet presents one character or just part of a two-octet presentation of a character, or something more complicated. Sometimes one can guess the encoding, but data processing and transfer shouldn't be guesswork.
Naturally, a sequence of octets could be intended to present other than character data, too. It could be an image in a bitmap format, or a computer program in binary form, or numeric data in the internal format used in computers.
This problem can be handled in different ways in different systems when data is stored and processed within one computer system. For data transmission, a platform-independent method of specifying the general format and the encoding and other relevant information is needed. Such methods exist, although they are not always used widely enough. People still send each other data without specifying the encoding, and this may cause a lot of harm. Attaching a human-readable note, such as a few words of explanation in an E-mail message body, is better than nothing. But since data is processed by programs which cannot understand such notes, the encoding should be specified in a standardized computer-readable form.
The MIME solution
Media types
Internet media types, often called MIME media types, can be used to specify a major media type ("top level media type", such as text), a subtype (such as html), and an encoding (such as iso-8859-1). They were originally developed to allow sending other than plain ASCII data by E-mail. They can be (and should be) used for specifying the encoding when data is sent over a network, e.g. by E-mail or using the HTTP protocol on the World Wide Web.
The media type concept is defined in RFC 2046. The procedure for registering types is given in RFC 2048; according to it, the registry is kept by IANA, but it has in fact been moved to a different address.
Character encoding ("charset") information
The technical term used to denote a character encoding in the Internet media type context is "character set", abbreviated "charset". This has caused a lot of confusion, since "set" can easily be understood as repertoire !
Specifically, when data is sent in MIME format, the media type and encoding are specified in a manner illustrated by the following example:
Content-Type: text/html; charset=iso-8859-1
This specifies, in addition to saying that the media type is text and the subtype is html, that the character encoding is ISO 8859-1.
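Modern mail libraries generate such headers automatically; a small Python sketch using the standard email package (the class and method names here come from that package, not from the tutorial):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg.set_content("Tämä on testi.\n", charset="iso-8859-1")

# The library records both the media type and the "charset" parameter.
assert msg.get_content_type() == "text/plain"
assert msg.get_content_charset() == "iso-8859-1"
```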
The official registry of "charset" (i.e., character encoding) names, with references to documents defining their meanings, is kept by IANA. (According to the documentation of the registration procedure, RFC 2978, it should be elsewhere, but it has been moved.) I have composed a tabular presentation of the registry, ordered alphabetically by "charset" name and accompanied with some hypertext references.
Several character encodings have alternate (alias) names in the registry. For example, the basic (ISO 646) variant of ASCII can be called "ASCII" or "ANSI_X3.4-1968" or "cp367" (plus a few other names); the preferred name in MIME context is, according to the registry, "US-ASCII". Similarly, ISO 8859-1 has several names, the preferred MIME name being "ISO-8859-1". The "native" encoding for Unicode, UCS-2 , is named "ISO-10646-UCS-2" there.
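Such alias handling is visible in most programming environments; for example, Python's codec registry resolves several of the names above to the same codec (an illustration, not the IANA registry itself):

```python
import codecs

# Different alias names resolve to the same underlying codec.
assert codecs.lookup("US-ASCII").name == "ascii"
assert codecs.lookup("ISO-8859-1").name == codecs.lookup("latin-1").name
```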
MIME headers
The Content-Type information is an example of information in a header. Headers relate to some data, describing its presentation and other things, but are passed as logically separate from it. Possible headers and their contents are defined in the basic MIME specification, RFC 2045.
Adequate headers should normally be generated automatically by the software which sends the data (such as a program for sending E-mail, or a Web server) and interpreted automatically by receiving software (such as a program for reading E-mail, or a Web browser). In E-mail messages, headers precede the message body; it depends on the E-mail program whether and how it displays the headers. For Web documents, a Web server is required to send headers when it delivers a document to a browser (or other user agent) which has sent a request for the document.
In addition to media types and character encodings, MIME addresses several other aspects too. Earl Hood has composed the documentation Multipurpose Internet Mail Extensions MIME , which contains the basic RFCs on MIME in hypertext format and a common table of contents for them.
An auxiliary encoding: Quoted-Printable (QP)
The MIME specification defines, among many other things, the general purpose "Quoted-Printable" (QP) encoding which can be used to present any sequence of octets as a sequence of such octets which correspond to ASCII characters. This implies that the sequence of octets becomes longer, and if it is read as an ASCII string, it can be incomprehensible to humans. But what is gained is robustness in data transfer, since the encoding uses only "safe" ASCII characters which will most probably get through any component in the transfer unmodified.
Basically, QP encoding means that most octets smaller than 128 are used as such, whereas larger octets and some of the small ones are presented as follows: octet n is presented as a sequence of three octets, corresponding to ASCII codes for the = sign and the two digits of the hexadecimal notation of n. If QP encoding is applied to a sequence of octets presenting character data according to the ISO 8859-1 character code, then effectively this means that most ASCII characters (including all ASCII letters) are preserved as such whereas e.g. the ISO 8859-1 character ä (code position 228 in decimal, E4 in hexadecimal) is encoded as =E4. (For obvious reasons, the equals sign = itself is among the few ASCII characters which are encoded. Being in code position 61 in decimal, 3D in hexadecimal, it is encoded as =3D.)
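The core of that mapping can be sketched in a few lines. The helper below is a deliberate simplification (real Quoted-Printable also handles line-length limits, trailing whitespace, and soft line breaks, and leaves spaces and tabs unencoded):

```python
def qp_encode(octets: bytes) -> str:
    """Simplified Quoted-Printable: keep printable ASCII except '=',
    encode every other octet as '=' plus two hexadecimal digits."""
    out = []
    for b in octets:
        if 33 <= b <= 126 and b != 0x3D:   # printable ASCII, not '='
            out.append(chr(b))
        else:
            out.append("=%02X" % b)
    return "".join(out)

# ä is octet 228 (E4 hex) in ISO 8859-1; '=' is octet 61 (3D hex).
assert qp_encode("später".encode("iso-8859-1")) == "sp=E4ter"
assert qp_encode(b"a=b") == "a=3Db"
```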
Notice that encoding ISO 8859-1 data this way means that the character code is the one specified by the ISO 8859-1 standard, whereas the character encoding is different from the one specified (or at least suggested) in that standard. Since QP only specifies the mapping of a sequence of octets to another sequence of octets, it is a pure encoding and can be applied to any character data, or to any data for that matter.
Naturally, Quoted-Printable encoding needs to be processed by a program which knows it and can convert it to human-readable form. It looks rather confusing when displayed as such. Roughly speaking, one can expect most E-mail programs to be able to handle QP, but the same does not apply to newsreaders (or Web browsers). Therefore, you should normally use QP in E-mail only.
How MIME should work in practice
Basically, MIME should let people communicate smoothly without hindrances caused by character code and encoding differences. MIME should handle the necessary conversions automatically and invisibly.
For example, when person A sends E-mail to person B , the following should happen: The E-mail program used by A encodes A 's message in some particular manner, probably according to some convention which is normal on the system where the program is used (such as ISO 8859-1 encoding on a typical modern Unix system). The program automatically includes information about this encoding into an E-mail header, which is usually invisible both when sending and when reading the message. The message, with the headers, is then delivered, through network connections, to B 's system. When B uses his E-mail program (which may be very different from A 's) to read the message, the program should automatically pick up the information about the encoding as specified in a header and interpret the message body according to it. For example, if B is using a Macintosh computer, the program would automatically convert the message into Mac's internal character encoding and only then display it. Thus, if the message was ISO 8859-1 encoded and contained the Ä (upper case A with dieresis) character, encoded as octet 196, the E-mail program used on the Mac should use a conversion table to map this to octet 128, which is the encoding for Ä on Mac. (If the program fails to do such a conversion, strange things will happen. ASCII characters would be displayed correctly, since they have the same codes in both encodings, but instead of Ä, the character corresponding to octet 196 in Mac encoding would appear - a symbol which looks like f in italics.)
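The conversion table described above can be inspected directly; in Python (used here only for illustration):

```python
# Ä is octet 196 in ISO 8859-1 but octet 128 in the classic Mac encoding.
assert "Ä".encode("iso-8859-1") == bytes([196])
assert "Ä".encode("mac_roman") == bytes([128])

# Without conversion, octet 196 read under the Mac encoding appears
# as the florin sign — "a symbol which looks like f in italics".
assert bytes([196]).decode("mac_roman") == "ƒ"
```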
Problems with implementations - examples
Unfortunately, there are deficiencies and errors in software so that users often have to struggle with character code conversion problems, perhaps correcting the actions taken by programs. It takes two to tango, and some more participants to get characters right. This section demonstrates different things which may happen, and do happen, when just one component is faulty, i.e. when MIME is not used or is inadequately supported by some "partner" (software involved in entering, storing, transferring, and displaying character data).
A typical minor (!) problem which may occur in communication in Western European languages other than English is that most characters get interpreted and displayed correctly but some "national letters" don't. For example, the character repertoire needed in German, Swedish, and Finnish is essentially ASCII plus a few letters like "ä" from the rest of ISO Latin 1. If a text in such a language is processed so that a necessary conversion is not applied, or an incorrect conversion is applied, the result might be that e.g. the word "später" becomes "spter" or "spÌter" or "spdter" or "sp=E4ter".
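Two of those corrupted forms are easy to reproduce programmatically (a sketch; which corruption actually occurs depends on the faulty component):

```python
import quopri

word = "später".encode("iso-8859-1")            # b'sp\xe4ter'

# A 7-bit channel that strips the high bit turns ä (E4 hex) into d:
assert bytes(b & 0x7F for b in word).decode("ascii") == "spdter"

# "sp=E4ter" is Quoted-Printable that was never decoded:
assert quopri.decodestring(b"sp=E4ter") == word
```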
Sometimes you might be able to guess what has happened, and perhaps determine which code conversion should be applied, and apply it more or less "by hand". To take an example (which may have some practical value in itself to people using the languages mentioned): assume that you have some text data which is expected to be, say, in German, Swedish or Finnish and which appears to be such text with some characters replaced by oddities in a somewhat systematic way. Locate some words which probably should contain the letter "ä" but have something strange in place of it (see examples above). Assume further that the program you are using interprets text data according to ISO 8859-1 by default and that the actual data is not accompanied with a suitable indication (like a Content-Type header) of the encoding, or such an indication is obviously in error. Now, looking at what appears instead of "ä", we might guess which encoding was actually used.
To illustrate what may happen when text is sent in a grossly invalid form, consider the following example. I'm sending myself E-mail, using Netscape 4.0 (on Windows 95). In the mail composition window, I set the encoding to UTF-8. The body of my message is simply
Tämä on testi.
(That's Finnish for 'This is a test'. The second and fourth characters are the letter a with umlaut.) Trying to read the mail on my Unix account, using the Pine E-mail program (popular among Unix users), I see the following (when in "full headers" mode; irrelevant headers omitted here):
X-Mailer: Mozilla 4.0 [en] (Win95; I)
MIME-Version: 1.0
To: jkorpela@cs.tut.fi
Subject: Test
X-Priority: 3 (Normal)
Content-Type: text/plain; charset=x-UNICODE-2-0-UTF-7
Content-Transfer-Encoding: 7bit
[The following text is in the "x-UNICODE-2-0-UTF-7" character set]
[Your display is set for the "ISO-8859-1" character set]
[Some characters may be displayed incorrectly]
T+O6Q- on testi.
Interesting, isn't it? I specifically requested UTF-8 encoding, but Netscape used UTF-7. And it did not include a correct header, since x-UNICODE-2-0-UTF-7 is not a registered "charset" name. Even if the encoding had been a registered one, there would have been no guarantee that my E-mail program would have been able to handle the encoding. The example, "T+O6Q-" instead of "Tämä", illustrates what may happen when an octet sequence is interpreted according to another encoding than the intended one. In fact, it is difficult to say what Netscape was really doing, since it seems to encode incorrectly.
A correct UTF-7 encoding for "Tämä" would be "T+AOQ-m+AOQ-". The "+" and "-" characters correspond to octets indicating a switch to "shifted encoding" and back from it. The shifted encoding is based on presenting Unicode values first as 16-bit binary integers, then regrouping the bits and presenting the resulting six-bit groups as octets according to a table specified in RFC 2045 in the section on Base64. See also RFC 2152 .
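The correct encoding is easy to check, since many languages ship a UTF-7 codec; for example, in Python (illustration only):

```python
# The shifted sections "+AOQ-" are base64 groups of the 16-bit
# value of ä (U+00E4), exactly as described above.
assert "Tämä".encode("utf-7") == b"T+AOQ-m+AOQ-"
assert b"T+AOQ-m+AOQ-".decode("utf-7") == "Tämä"
```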
Practical conclusions
Whenever text data is sent over a network, the sender and the recipient should have a joint agreement on the character encoding used. In the optimal case, this is handled by the software automatically, but in reality the users need to take some precautions.
Most importantly, make sure that any Internet-related software that you use to send data specifies the encoding correctly in suitable headers. There are two things involved: the header must be there and it must reflect the actual encoding used; and the encoding used must be one that is widely understood by the (potential) recipients' software. One must often make compromises as regards to the latter aim: you may need to use an encoding which is not yet widely supported to get your message through at all.
It is useful to find out how to configure your Web browser, newsreader, and E-mail program so that you can display the encoding information for the page, article, or message you are reading. (For example, on Netscape use View Page Info; on News Xpress, use View Raw Format; on Pine, use h.)
If you use, say, Netscape to send E-mail or to post to Usenet news, make sure it sends the message in a reasonable form. In particular, make sure it does not send the message as HTML or duplicate it by sending it both as plain text and as HTML (select plain text only). As regards to character encoding, make sure it is something widely understood, such as ASCII , some ISO 8859 encoding, or UTF-8 , depending on how large character repertoire you need.
In particular, avoid sending data in a proprietary encoding (like the Macintosh encoding or a DOS encoding ) to a public network. At the very least, if you do that, make sure that the message heading specifies the encoding! There's nothing wrong with using such an encoding within a single computer or in data transfer between similar computers. But when sent to Internet, data should be converted to a more widely known encoding, by the sending program. If you cannot find a way to configure your program to do that, get another program.
As regards to other forms of transfer of data in digital form, such as diskettes, information about encoding is important, too. The problem is typically handled by guesswork. Often the crucial thing is to know which program was used to generate the data, since the text data might be inside a file in, say, the MS Word format which can only be read by (a suitable version of) MS Word or by a program which knows its internal data format. That format, once recognized, might contain information which specifies the character encoding used in the text data included; or it might not, in which case one has to ask the sender, or make a guess, or use trial and error - viewing the data using different encodings until something sensible appears.
Further reading
- The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets by Joel on Software. An enjoyable treatise, though probably not quite the absolute minimum.
- Character Encodings Concepts , adapted from a presentation by Peter Edberg at a Unicode conference. Old, but a rich source of information, with good illustrations.
- ISO-8859 briefing and resources by Alan J. Flavell . Partly a character set tutorial, partly a discussion of specific (especially ISO 8859 and HTML related) issues in depth.
- Section Character set standards in the Standards and Specifications List by Diffuse (archive copy)
- Guide to Character Sets , by Diffuse. (archive copy)
- Google's section on internationalization, which has interesting entries like i18nGurus
- "Character Set" Considered Harmful by Dan Connolly . A good discussion of the basic concepts and misconceptions.
- The Nature of Linguistic Data: Multilingual Computing - an old (1997) collection of annotated links to information on character codes, fonts, etc.
- John Clews: Digital Language Access: Scripts, Transliteration, and Computer Access ; an introduction to scripts and transliteration, so it's useful background information for character code issues.
- Michael Everson's Web site , which contains a lot of links to detailed documents on character code issues, especially progress and proposals in standardization.
- Johan W. van Wingen : Character sets. Letters, tokens and codes. Detailed information on many topics (including particular character codes).
- Steven J. Searle: A Brief History of Character Codes in North America, Europe, and East Asia
- Ken Lunde : CJKV Information Processing . A book on Chinese, Japanese, Korean & Vietnamese Computing. The book itself is not online, but some extracts are, e.g. the overview chapter.
- An online character database by Indrek Hein at the Institute of the Estonian Language . You can e.g. search for Unicode characters by name or code position, get lists of differences between some character sets, and get lists of characters needed for different languages.
- Free recode is a free program by François Pinard . It can be used to perform various character code conversions between a large number of encodings.
Character code problems are part of a topic called internationalization (jocularly abbreviated as i18n ), rather misleadingly, because it mainly revolves around the problems of using various languages and writing systems (scripts) . (Typically international communication on the Internet is carried out in English !) It includes difficult questions like text directionality (some languages are written right to left) and requirements to present the same character with different glyphs according to its context. See W3C pages on internationalization .
I originally started writing this document as a tutorial for HTML authors . Later I noticed that this general information is extensive enough to be put into a document of its own. As regards to HTML specific problems, the document Using national and special characters in HTML summarizes what currently seems to be the best alternative in the general case.
Acknowledgements
I have learned a lot about character set issues from the following people (listed in an order which is roughly chronological by the start of their influence on my understanding of these things): Timo Kiravuo, Alan J. Flavell, Arjun Ray, Roman Czyborra, Bob Bemer, Erkki I. Kolehmainen. (But any errors in this document I souped up by myself.)
Dear Friends,
I am developing a header file for an implementation of a stack. My library contains only two functions, push and pop. I am confused: is it good practice for a program to declare the structure in a header file? I want your answer with a detailed description of the reasons.
I urge you to give your best comments on my code. Any single problem you think can happen, kindly share it here.
Below is the code of "stack.h". This file contains declarations of the functions and variables used throughout the program.
Code:
#ifndef STACK_H_INCLUDED
#define STACK_H_INCLUDED

#include <stdio.h>
#include <stdlib.h>

struct Stack
{
    int value;
    struct Stack *previous;
};

extern struct Stack *stack;

extern void push(int value);
extern int pop(void);

#endif // STACK_H_INCLUDED
Following is the code of "stack.c"; it contains the definitions of the functions.
Code:
#include <stdio.h>
#include <stdlib.h>
#include "stack.h" // my header file.

struct Stack *stack = NULL;

void push(int value)
{
    struct Stack *temprary = malloc(sizeof(struct Stack));
    if(stack == NULL)
    {
        stack = malloc(sizeof(struct Stack));
        stack->previous = NULL;
    }
    else
    {
        temprary->previous = stack;
        stack = temprary;
    }
    stack->value = value;
}

int pop(void)
{
    int value;
    struct Stack *temprary;
    if(stack == NULL)
    {
        printf("\nSorry, The stack is empty.\n");
        return 0;
    }
    else if(stack->previous == NULL)
    {
        value = stack->value;
        stack = NULL;
        return value;
    }
    else
    {
        value = stack->value;
        temprary = stack;
        stack = stack->previous;
        free(temprary);
    }
    return value;
}
This reference provides cmdlet descriptions and syntax for all Windows Deployment Services cmdlets. It lists the cmdlets in alphabetical order based on the verb at the beginning of the cmdlet.
Adds an existing driver package to a driver group or injects it into a boot image.
Approves clients.
Copies install images within an image group.
Denies approval for clients.
Disables a boot image.
Disables a driver package in the Windows Deployment Services driver store.
Disables an install image.
Disconnects a multicast client from a transmission or namespace.
Enables a boot image.
Enables a driver package in the Windows Deployment Services driver store.
Enables an install image.
Exports an existing boot image from an image store.
Exports an existing install image from an image store.
Gets properties of boot images from the image store.
Gets client devices from the pending device database, or pre-staged devices from Active Directory or the stand-alone server device database.
Gets properties of driver packages from the Windows Deployment Services driver store.
Gets properties of install images from an image store.
Gets properties of install image groups.
Gets a list of clients connected to a multicast transmission or namespace.
Imports a boot image to the image store.
Imports a driver package into the Windows Deployment Services driver store.
Imports an install image to an image store.
Creates a pre-staged client.
Creates an install image group.
Removes a boot image from the image store.
Removes a pre-staged client from AD DS or the stand-alone server device database, or clears the Pending Devices database.
Removes a driver package from a driver group or removes it from all driver groups and deletes it.
Removes an install image from an image store.
Removes an install image group.
Modifies settings of a boot image.
Modifies a pre-staged client device.
Modifies the properties of an install image.
Modifies the name and access permissions of an install image group. | https://docs.microsoft.com/en-us/powershell/module/wds/index?view=win10-ps | CC-MAIN-2017-43 | en | refinedweb |
WI-21563 (Feature)
Possibility to extend Array Index completion
WI-4025 (Usability Problem)
PHP: code completion does not suggest namespace identifier in expressions other than 'new'
WI-19515 (Usability Problem)
Exclude current key on array key autocompletion
WI-12473 (Bug)
PHP: completion doesn't auto-complete params/variables names in use statement of the closure
WI-10924 (Bug)
The IDE does not suggest the class with the same name but from the different namespace
WI-21739 (Bug)
Namespaces are shown in completion list for goto labels
WI-21806 (Bug)
Namespaces are shown in completion inside string literal
WI-21569 (Bug)
Don't complete keywords after namespace reference
WI-16317 (Bug)
Empty class is shown in completion if invalid class name exists
WI-21705 (Bug)
Backslash prepended to standard functions inside namespace.
WI-4591 (Bug)
'final' keyword completion inside interface
WI-18202 (Bug)
use keyword isn't in the list of completion items inside class and trait
WI-21328 (Usability Problem)
Missing indent for method call after variable name
WI-20318 (Bug)
For extends/implements usual indent is using instead of continuation indent
WI-21533 (Cosmetics)
Annotator bug: several "No content allowed before namespace declaration" warnings on one line
WI-21133 (Bug)
throw Exception::factory() will confuse parser and cause validation errors.
WI-11472 (Bug)
Missing break statement: Cannot suppress PhpMissingBreakStatementInspection with stacked case statements.
WI-17631 (Bug)
Unused private method: incorrect warning if such private method is used directly inside double-quoted string
WI-6140 (Usability Problem)
Rename: User friendly class rename refactoring: Renaming namespace "Namespace" to "OtherNamespace\Sub" renames it to "Sub" only.
WI-21548 (Usability Problem)
Move Namespace: Cancelling Moved Related Namespaces dialog doesn't cancel movement of namespace in the initial file
WI-21585 (Usability Problem)
Move class: trigger error about already existing class although refactoring doesn't change anything
WI-21594 (Usability Problem)
Cancel constructor inserting inserts code anyway
WI-19241 (Usability Problem)
Extract Interface: undo need to be performed twice if interface in another namespace
WI-21555 (Bug)
Extract Interface: creates inconsistent PHPDoc for @property
WI-21557 (Bug)
Move Class: Doesn't change occurrences of class when moving from global namespace
WI-21558 (Bug)
Move class: change appearance of class name during move from global namespace to global namespace
WI-20908 (Bug)
Rename inheritors doesn't take into account namespace
WI-21547 (Bug)
Move Namespace: default directory is invalid for Related Namespace if the file is located in the root of the project
WI-17319 (Bug)
Class move: Shoud search for noncode usages
WI-21334 (Exception)
Framework Integration setting does not work
WI-8653 (Feature)
Links in PhpDoc not clickable in editor (@link)
WI-20512 (Feature)
ENT_HTML401 constant is shown as known even with PHP5.3 language level
WI-19110 (Usability Problem)
Complete Statement badly influenced by syntax error
WI-20781 (Usability Problem)
Find PHP Method usages Panel presentation need improvements
WI-21632 (Cosmetics)
Wrong constructor parameter auto-complete hint
WI-17558 (Bug)
"Mix braced namespace declarations with braceless namespace declarations" is not highlighted
WI-13866 (Bug)
Find for implemented/overridden methods is not supported
WI-17404 (Bug)
@method doesn't use namespace
WI-21521 (Bug)
File and Code Templates: Wrong field PhpDoc autocompletion for one line template
WI-21490 (Bug)
Unused private method: warning if such private method is used only as a callable parameter
WI-16094 (Bug)
Copy Reference: inserts invalid $ sign to constant
WI-20690 (Bug)
Find usages of base class works differently for abstract and usual class
WI-21487 (Bug)
PHP inspections stuck due to @mixin loop
WI-21512 (Bug)
Convert namespace to braced insert additional "{" for mixed namespaces
WI-21516 (Bug)
Function declaration current parameter highlight is wrong
WI-21475 (Bug)
Convert to short syntax array literal breaks the code
WI-21198 (Bug)
Wrong signature for alias function mysql_numrows
WI-20201 (Bug)
Incomplete docs for settype()
WI-21610 (Bug)
Missing parameter in json_encode() stub
WI-18346 (Bug)
date_diff() invalid return value in PHPDoc
WI-10522 (Bug)
"pg_fetch_result" with two parameters is reported as incorrect
WI-21675 (Bug)
Missing return definition for MongoCollection::distinct()
WI-21603 (Bug)
Smarty parser gets stuck
WI-21690 (Bug)
Smarty: ldelim & rdelim not recognized by inspection
WI-21478 (Bug)
Smarty editor broken
IDEA-81277 (Feature)
Show Constraints for Data Sources
IDEA-118159 (Feature)
"Search Everywhere" inconsistent api on ItemPresentation
IDEA-74428 (Feature)
Provide UI for changing log settings
IDEA-113344 (Usability Problem)
An easy way to exit from full screen mode using mouse
IDEA-78206 (Bug)
Constructing 'mailto' link -> cannot resolve file 'mailto'
7555 (Bug)
Search everywhere dialog is being closed immediately
IDEA-117831 (Bug)
After deleting last live template in a group, I can't click OK.
IDEA-118718 (Bug)
Occasionally seeing "Low disk space on a IntelliJ IDEA system directory partition"
IDEA-116071 (Bug)
Field can be final inspection change
IDEA-118587 (Bug)
IDEA may not exit with black window
WEB-10225 (Bug)
Injected HTML: goto CSS declaration does not see the other fragments.
IDEA-117649 (Bug)
"Locate Duplicates" action seems to not work with "Class Hierarchy" custom scope.
IDEA-116006 (Usability Problem)
Eclipse code style import: import the same xml does not pick up changes until manual synchronization
IDEA-114952 (Usability Problem)
Eclipse code style import: would be nice to remember imported file location
IDEA-96723 (Bug)
Java Rearranger deletes blank lines in field declarations
IDEA-116940 (Bug)
@formatter:off still generating braces
IDEA-118264 (Bug)
Rearrange Entries stopped working in Intellij 13
IDEA-94950 (Exception)
Code Style | Arrangement: AIOOBE at ArrangementMatchingRulesModel.removeRow() on removing the last rule that is in edit mode
IDEA-104706 (Usability Problem)
Remove currently active file from "Recent Files" popup
WEB-7251 (Bug)
renaming a variable with @ in coffeescript is not allowed
WEB-9497 (Bug)
CoffeeScript: Incorrect end of line expected in extends clause
WEB-10065 (Bug)
Good code is red
WEB-9888 (Bug)
Good coffeescript marked red
WEB-8867 (Bug)
CoffeeScript: Unused warnings
WEB-9899 (Bug)
CoffeeScript parser incorrectly determines function parameter count
WEB-9964 (Exception)
CoffeeScript: AssertionError on editing expression with injected RegExp
IDEA-100672 (Bug)
Artifact not updated on build
RUBY-14423 (Bug)
"Find Usages" in a cucumber step definition no longer shows the cucumber step as matching
IDEA-96081 (Feature)
Navigation bar could work for database objects, table editor, database console
IDEA-117701 (Feature)
Database: Ability to interactively change the password after login
IDEA-116280 (Feature)
Set cursor at postgresql error position
IDEA-117925 (Usability Problem)
Unsaved data sources changes are not considered for some actions
IDEA-118326 (Usability Problem)
Assign Data Sources | Data Source row should indicate user can change it
IDEA-117199 (Bug)
Database console: incorrect icons for consoles when trying to run a .sql file
IDEA-118705 (Bug)
Database: navigation to 'Referencing Rows Only' doesn't show the filter criteria that was used
IDEA-117976 (Bug)
Database table editor: Enter should start editing of value in selected cell
IDEA-117991 (Bug)
When refreshing datasources tree show databases before refreshing all the items
IDEA-110464 (Bug)
IDEA SQL plugin cannot connect to remote database over VPN
IDEA-119501 (Bug)
View editor operation fails for DB2 tables
IDEA-119313 (Bug)
Copy path problem
IDEA-108516 (Bug)
Unable to rename schema of DDL Data Source in Database panel
IDEA-119129 (Bug)
Use alternative table scanning method when user has no read access to all databases
IDEA-118283 (Bug)
Database: "Rename table" for MySQL tables causes exception
IDEA-119119 (Bug)
MySQL views with subquery in definition fail to open
IDEA-117802 (Bug)
DDL data source in Database tab doesn't support multiple schemas
IDEA-117670 (Bug)
Database: TableEditor invalid navigate via foreign key
IDEA-119245 (Exception)
Database console: IOOBE at SegmentArray.findSegmentIndex()
WEB-9566 (Feature)
Debugger: optionally ignore certificate errors
IDEA-105253 (Cosmetics)
Missing icon for Thread dumps view
WEB-10283 (Bug)
JS chrome Debug: fails with Nesting problem when changing value in debugger
WEB-10360 (Bug)
Javascript debugger with non unique file names
PY-6095 (Bug)
Accepting a completion with TAB results in a syntax error
RUBY-14617 (Bug)
do[enter] inserts DocumentError instead of a new line
IDEA-117327 (Task)
Add a setting to switch off autopopup completion item selection by Enter
IDEA-115727 (Bug)
Cyclic Expand Word leaves word highlighted
IDEA-117511 (Bug)
Hippie completion not working as expected in Intellij 13
IDEA-57940 (Bug)
Cyclic expand word should take into account all open files
IDEA-113684 (Bug)
Soft wraps insert additional spaces
IDEA-117127 (Exception)
Editor: Throwable on Select word at caret inside plain text
IDEA-23831 (Bug)
highlight usages: problems with split files
IDEA-118742 (Performance Problem)
UI Hang during search
IDEA-104735 (Cosmetics)
Darcula: INVALID string does not have Darcula-style red color in Find Occurrence tool window.
IDEA-119153 (Bug)
file search too wide for users folder
IDEA-97930 (Bug)
Idea 12: Find Usages (Alt-F7) always searches in libraries by default, disregards the Scope setting
WEB-8262 (Bug)
Comment with line/block comment STILL doesn't work on HTML in ASP file
WEB-2229 (Bug)
Html with strict DTD doesn't end the img tag properly
IDEA-118292 (Usability Problem)
Confusing custom config/system path configuration in idea.properties (${idea.config.path} is not expanded)
IDEA-118111 (Bug)
Can't close IDEA 13 (Ubuntu linux)
IDEA-118330 (Bug)
IDE hangs
IDEA-118763 (Bug)
Can't start IDEA after deinstalling a plugin
IDEA-119470 (Bug)
File and code templates: changes gone when switching tabs
IDEA-118211 (Performance Problem)
Performance problem when closing project
WI-21150 (Usability Problem)
CREATE UNIQUE INDEX should be detected as SQL injected language
WI-13685 (Bug)
PhpStorm doesn't save project name
IDEA-94683 (Bug)
Completion popup loses focus when viewing documentation (sometimes, almost always)
IDEA-117883 (Cosmetics)
Inspection descriptions talk about "the list/checkbox below"
WEB-7157 (Bug)
Variable to which the result of 'require' call is assigned is not highlighted as unused
WEB-10171 (Bug)
Usage of uninitialized variable not reported
WEB-6700 (Bug)
TODOs not recognized on multiple level language template
WEB-6911 (Bug)
Mysteriously missed Debug file in JavaScript Library
WEB-8170 (Bug)
Code completion issue with NodeJS and module.exports
WEB-6168 (Bug)
ExtJS: external documentation for ExtJS 4.1 doesn't work
WEB-849 (Bug)
"Comment with line comment" on empty line in <script> block generates HTML instead of JS comment
WEB-10532 (Bug)
IntelliJ IDEA 13 Freezes editing JavaScript
WEB-9817 (Bug)
Node.js: global functions defined as 'global['name']' not resolved
WEB-7553 (Bug)
Incorrect indentation with chained calls
IDEA-79522 (Bug)
need ability to set display names for xml attribute and xml tag language injections
IDEA-111535 (Bug)
Edit Language Fragment: Synchronization is broken after tab drag
IDEA-119619 (Bug)
Settings / Language Injections: project level XML tag injection loses Sub-Tags value on IDE restart
WEB-10309 (Bug)
Stepping through node debugging session fails with sourcemapped files if built files are excluded from workspace (repo steps and example project)
WEB-9517 (Bug)
Npm: Error loading package list
IDEA-118446 (Bug)
Installation and plugin update (patch) download ignores Settings / HTTP Proxy
WI-21673 (Cosmetics)
Typo in the FTP warning message
WI-21460 (Bug)
Any In place server mapping with '/' in Web Path is replaced with mapping with project root in local path
WI-21662 (Bug)
Drush output parsing broken on null options description
IDEA-118250 (Usability Problem)
IntelliJ thinks intentional new directory names are filenames and tries to default them to files
WEB-9845 (Bug)
REST-Tool: Save Request to .xml
IDEA-93034 (Usability Problem)
SQL: MySQL: erasing the first backtick could erase the pair
IDEA-50739 (Usability Problem)
SQL: Insert Values Inspection: do not warn (optionally?) if absent arguments can be inserted due to DEFAULT clauses
IDEA-119260 (Usability Problem)
PostgreSQL: ALTER ROLE/DATABASE SET search_path not parsed correctly
IDEA-116149 (Usability Problem)
PostgreSQL: Missing column alias when subquery uses CAST or ::
IDEA-46068 (Bug)
SQLite: REINDEX with collation name is yellow
IDEA-118076 (Bug)
rename alias in SQL console surrounds alias with quotes
IDEA-116905 (Bug)
PostgreSQL: window function "min" has errors
IDEA-117208 (Bug)
MySQL reformat fails to convert null literal to uppercase
IDEA-117850 (Bug)
Code Style > SQL > New Line before - Join Condition does not work when unchecked
IDEA-117092 (Bug)
Submit MySQL query gets stuck with special comment
IDEA-117606 (Bug)
PostgreSQL: references to user are not resolved
IDEA-116407 (Bug)
Oracle callable <statement> expected false positive
IDEA-118573 (Bug)
Oracle "DROP INDEX" marked as syntax error
IDEA-113174 (Bug)
Oracle SQL: support INSERT INTO in prepared statements
IDEA-119582 (Bug)
Oracle database "create type body" is parsed incorrectly under "Oracle SQL*Plus" dialect
IDEA-119258 (Bug)
PostgreSQL: HEADER keyword in COPY highlighted as error
IDEA-119653 (Bug)
Database plugin marks blob column definition as error on hsqldb
IDEA-117431 (Bug)
Identifier quotation will incorrectly quote MySQL variables
IDEA-104127 (Bug)
Good code is red: using parameters to a stored procedure as values on a limit clause
IDEA-119321 (Bug)
PostgreSQL: OFFSET is allowed before LIMIT
IDEA-117899 (Bug)
SQL: column scope is not determined correctly
IDEA-57415 (Bug)
SQL: HSQLDB: quoted names are resolved ignoring case
IDEA-117313 (Bug)
Oracle syntax problem
IDEA-119105 (Bug)
MySQL lowercase functions are not resolved
IDEA-119193 (Bug)
DB2 validator does not understand the "DEFINITION ONLY" clause
IDEA-119290 (Bug)
PostgreSQL 9.3: DROP/ALTER MATERIALIZED VIEW not supported
IDEA-117129 (Bug)
Bad indent of brace in MySQL JOIN
WEB-10058 (Bug)
Typescript reference
WEB-10082 (Bug)
TypeScript: Type problems with namespaces
WEB-10387 (Bug)
Mocha console log statements are not correctly aligned to their encasing tests
IDEA-119445 (Usability Problem)
Remove first slash in "copy reference"
IDEA-118616 (Bug)
Lens mode with tool windows on the right side
IDEA-117772 (Bug)
Deadlock IDEA 13
IDEA-118004 (Cosmetics)
Find's Regex Help Popup table header bad color with darcula
PY-11687 (Feature)
RFE: Pass Shell Variables to Vagrant
PY-9994 (Feature)
Multiple Vagrant configurations in Vagrantfile
PY-9869 (Feature)
Cannot execute "vagrant provision" from PyCharm
PY-9854 (Feature)
Cannot start Vagrant with --provider=vmware_fusion
PY-10756 (Usability Problem)
Vagrant icons don't show when actions are added to the toolbar
PY-11755 (Usability Problem)
Vagrant: loading settings takes significant amount of time if there are some plugins installed
PY-8995 (Usability Problem)
Vagrant: propose to add vagrant box when there is no one yet configured
PY-11493 (Cosmetics)
Vagrant: up icon is too small compared to the rest of the icons
PY-11699 (Bug)
no vagrant box in settings
PY-11750 (Bug)
Vagrant: Init: project selection popup is not closed after selection
PY-11691 (Exception)
Exception on every Vagrant settings open
PY-11751 (Exception)
Vagrant: Init: Access is allowed from event dispatch thread only. Throwable at com.intellij.openapi.application.impl.ApplicationImpl.a
IDEA-115594 (Usability Problem)
'Commit Changes' dialog joins two (or more) previous commit messages
IDEA-115901 (Usability Problem)
VCS-Log: Save view selection on refresh
IDEA-116242 (Usability Problem)
Allow multiple user selection in the user filter of new VCS Log
IDEA-116834 (Performance Problem)
Moving through the list of filtered commits is slow
IDEA-117680 (Bug)
Changes from 2 selected commits aren't merged, 2 files with the same name are shown in the right part of panel
IDEA-119247 (Bug)
Git log filtered results should be requested from Git pre-sorted by --date-order
IDEA-116718 (Bug)
Git Log: Moving selection skips some commits | http://confluence.jetbrains.com/display/phpstorm/phpstorm+7.1.1+release+notes | CC-MAIN-2017-43 | en | refinedweb |
The following steps will explain how to introduce resilience to your application using the SAP S/4HANA Cloud SDK. If you want to follow this tutorial, we highly recommend checking out the previous parts of this series. For a complete overview visit the SAP S/4HANA Cloud SDK Overview.
Goal of this Blog Post
This blog post covers the following steps:
- Explain what resilience is and why you should care about it
- Make the call to the OData Service resilient by using Hystrix-based commands
- Write Tests for the new Hystrix-based command
- Deploy the application on SAP Cloud Platform Cloud Foundry
Resilience

Consider an application that consumes 30 services, each with an availability of 99.99%. A 30-day month has 43200 minutes, so the expected monthly downtime caused by a single such service is:

43200min * (1 – 0.9999) = 4.32min
Now assume failures are cascading, so one service being unavailable means the whole application becomes unavailable. Given the equation used above, the situation now looks like this:
43200min * (1 – 0.9999^30) = 43200min * (1 – 0.997) = 129.6min.
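To make this arithmetic easy to reproduce, here is a small illustrative helper (not part of the SDK; the function name is my own) that computes the expected monthly downtime of an application whose availability depends on all of its n dependencies, each with availability a:

```javascript
// Expected downtime per 30-day month (43200 minutes) of an application
// that is only available while all `n` of its dependencies are available,
// each dependency having availability `a`.
function expectedDowntimeMinutes(a, n) {
    var minutesPerMonth = 30 * 24 * 60; // 43200
    return minutesPerMonth * (1 - Math.pow(a, n));
}

console.log(expectedDowntimeMinutes(0.9999, 1));  // ~ 4.32 minutes
console.log(expectedDowntimeMinutes(0.9999, 30)); // ~ 129.4 minutes
```

The jump from roughly 4 minutes to more than 2 hours of expected downtime is exactly why cascading failures must be contained.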
Hystrix
The SAP S/4HANA Cloud SDK builds on Hystrix, Netflix's latency and fault tolerance library, to implement resilience. Two of its key mechanisms are:
- Thread pool isolation: Hystrix allows performing remote service calls asynchronously and concurrently on dedicated threads. These threads are non-container-managed, so regardless of how many threads are used by your Hystrix commands, they do not interfere with your runtime container.
- Circuit breaker: Hystrix uses the circuit breaker pattern to determine whether a remote service is currently available. Breakers are closed by default. If a remote service call fails too many times, Hystrix will open/trip the breaker. This means that any further calls that would go to the same remote service are automatically stopped. Hystrix will periodically check if the service is available again, and close the breaker once it has recovered.
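To illustrate the circuit breaker idea in isolation, here is a deliberately simplified sketch. It is not Hystrix's actual implementation: Hystrix additionally tracks error percentages over rolling time windows and periodically half-opens the breaker to probe the service.

```javascript
// Minimal circuit breaker sketch: after `threshold` consecutive failures
// the breaker opens, and further calls fail fast to the fallback instead
// of hitting the remote service.
class CircuitBreaker {
    constructor(threshold) {
        this.threshold = threshold;
        this.failures = 0;
        this.state = "CLOSED";
    }

    call(remoteFn, fallback) {
        if (this.state === "OPEN") {
            return fallback(); // fail fast, do not touch the remote service
        }
        try {
            var result = remoteFn();
            this.failures = 0; // a success resets the failure counter
            return result;
        } catch (e) {
            this.failures += 1;
            if (this.failures >= this.threshold) {
                this.state = "OPEN"; // trip the breaker
            }
            return fallback();
        }
    }
}
```

The important property is the last branch: once the breaker is open, the failing dependency no longer consumes threads or time in the caller, which is what stops failures from cascading.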
Make your OData call resilient
Now that we have covered why resilience is important and how Hystrix can help us achieve resilience, it’s finally time to introduce it into our application. In the last tutorial we created a simple servlet that uses the SDK’s OData abstractions to retrieve costcenters from an ERP system. In order to make this call resilient, we have to wrap it in an ErpCommand. So first we will create the following class:
./application/src/main/java/com/sap/cloud/sdk/tutorial/GetCostCentersCommand.java
package com.sap.cloud.sdk.tutorial;

import org.slf4j.Logger;

import java.util.List;
import java.util.Collections;

import com.netflix.hystrix.HystrixThreadPoolProperties;
import com.sap.cloud.sdk.cloudplatform.logging.CloudLoggerFactory;
import com.sap.cloud.sdk.frameworks.hystrix.HystrixUtil;
import com.sap.cloud.sdk.odatav2.connectivity.ODataQueryBuilder;
import com.sap.cloud.sdk.s4hana.connectivity.ErpCommand;
import com.sap.cloud.sdk.s4hana.connectivity.ErpConfigContext;

public class GetCostCentersCommand extends ErpCommand<List<CostCenterDetails>>
{
    private static final Logger logger = CloudLoggerFactory.getLogger(GetCostCentersCommand.class);

    protected GetCostCentersCommand( final ErpConfigContext configContext )
    {
        super(GetCostCentersCommand.class, configContext);
    }

    @Override
    protected List<CostCenterDetails> run() throws Exception
    {
        final List<CostCenterDetails> costCenters =
            ODataQueryBuilder
                .withEntity("/sap/opu/odata/sap/FCO_PI_COST_CENTER", "CostCenterCollection")
                .select("CostCenterID", "Status", "CompanyCode", "Category", "CostCenterDescription")
                .build()
                .execute(getConfigContext())
                .asList(CostCenterDetails.class);

        return costCenters;
    }

    @Override
    protected List<CostCenterDetails> getFallback()
    {
        return Collections.emptyList();
    }
}
The GetCostCentersCommand class inherits from ErpCommand, which is the SDK's abstraction to provide easy-to-use Hystrix commands. To implement a valid ErpCommand we need to do two things: first, we need to provide a constructor. Here we simply add a constructor that takes an ErpConfigContext as parameter. Second, we need to override the run() method. As you might have noticed already, we can simply reuse the ODataQueryBuilder call from the previous tutorial here. Additionally, we override getFallback(), which Hystrix invokes whenever the command fails; for now it returns an empty list. We could also serve static data or check whether a cached result is available. If the defaults do not fit your needs, command properties such as the execution timeout or the thread pool size can be customized in the constructor:

protected GetCostCentersCommand( final ErpConfigContext configContext )
{
    super(
        HystrixUtil
            .getDefaultErpCommandSetter(
                GetCostCentersCommand.class,
                HystrixUtil.getDefaultErpCommandProperties()
                    .withExecutionTimeoutInMilliseconds(10000))
            .andThreadPoolPropertiesDefaults(
                HystrixThreadPoolProperties.Setter().withCoreSize(20)),
        configContext);
}
Now that we have a working command, we need to adapt our CostCenterServlet:
./application/src/main/java/com/sap/cloud/sdk/tutorial/CostCenterServlet.java

package com.sap.cloud.sdk.tutorial;

import com.google.gson.Gson;
import org.slf4j.Logger;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.List;

import com.sap.cloud.sdk.cloudplatform.logging.CloudLoggerFactory;
import com.sap.cloud.sdk.s4hana.connectivity.ErpConfigContext;

@WebServlet("/costcenters")
public class CostCenterServlet extends HttpServlet
{
    private static final long serialVersionUID = 1L;
    private static final Logger logger = CloudLoggerFactory.getLogger(CostCenterServlet.class);

    @Override
    protected void doGet( final HttpServletRequest request, final HttpServletResponse response )
        throws ServletException, IOException
    {
        final ErpConfigContext configContext = new ErpConfigContext();
        final List<CostCenterDetails> result = new GetCostCentersCommand(configContext).execute();

        response.setContentType("application/json");
        response.getWriter().write(new Gson().toJson(result));
    }
}
As in the last blog post, the first thing we do is initialize an ErpConfigContext. But thanks to our new GetCostCentersCommand, we can now simply create a new command, provide it with the ErpConfigContext, and execute it. As before, we get a list of cost centers as a result. But now we can be sure that our application will not fail if the OData service is temporarily unavailable.
Write Tests for the Hystrix command
There are two things we need to address in order to properly test our code: we need to provide our tests with an ERP endpoint and a Hystrix request context.
ERP destination
If you run your application on Cloud Foundry, the SDK can simply read the ERP destinations according to the configuration provided when deploying the application. During tests, however, you can supply the ERP systems via a systems.json (or .yml) file in the test resources directory (i.e., integration-tests/src/test/resources). MockUtil will read these files and provide your tests with the ERP destinations accordingly. Adapt the URL as before.
{ "erp": { "default": "ERP_TEST_SYSTEM", "systems": [ { "alias": "ERP_TEST_SYSTEM", "uri": "" } ] } }
In addition, you may provide a credentials.yml file to reuse your SAP S/4HANA login configuration as described in the Appendix of tutorial Step 4 with SAP S/4HANA Cloud SDK: Calling an OData Service.
Request context

The second prerequisite is a Hystrix request context, since commands must be executed within one.
MockUtil is our friend once again. Using MockUtil's requestContextExecutor() method, we can wrap the execution of the GetCostCentersCommand in a request context.
Now let’s have a look at the code, to be placed in a file integration-tests/src/test/java/com/sap/cloud/sdk/tutorial/GetCostCentersCommandTest.java:
package com.sap.cloud.sdk.tutorial;

import org.junit.BeforeClass;
import org.junit.Test;

import java.util.Collections;
import java.util.List;
import java.util.Locale;

import com.sap.cloud.sdk.cloudplatform.servlet.Executable;
import com.sap.cloud.sdk.s4hana.connectivity.ErpConfigContext;
import com.sap.cloud.sdk.s4hana.connectivity.ErpDestination;
import com.sap.cloud.sdk.s4hana.serialization.SapClient;
import com.sap.cloud.sdk.testutil.MockUtil;

import static org.assertj.core.api.Assertions.assertThat;

public class GetCostCentersCommandTest
{
    private static final MockUtil mockUtil = new MockUtil();

    @BeforeClass
    public static void beforeClass()
    {
        mockUtil.mockDefaults();
        mockUtil.mockErpDestination();
    }

    private List<CostCenterDetails> getCostCenters(final String destination, final SapClient sapClient)
    {
        final ErpConfigContext configContext = new ErpConfigContext(destination, sapClient, Locale.ENGLISH);
        return new GetCostCentersCommand(configContext).execute();
    }

    @Test
    public void testWithSuccess() throws Exception
    {
        mockUtil.requestContextExecutor().execute(new Executable()
        {
            @Override
            public void execute() throws Exception
            {
                assertThat(getCostCenters(ErpDestination.getDefaultName(), mockUtil.getErpSystem().getSapClient()))
                    .isNotEmpty();
            }
        });
    }

    @Test
    public void testWithFallback() throws Exception
    {
        mockUtil.requestContextExecutor().execute(new Executable()
        {
            @Override
            public void execute() throws Exception
            {
                assertThat(getCostCenters("NoErpSystem", mockUtil.getErpSystem().getSapClient()))
                    .isEqualTo(Collections.emptyList());
            }
        });
    }
}
We use JUnit’s @BeforeClass annotation to setup our mockUtils and to mock the ERP destinations. correctly provide the default ERP destination information using mockUtil. For the sake of simplicity we simply assert that the response is not empty.
For testWithFallback(), we intentionally provide a non-existing destination in order to make the command fail. Since we implemented a fallback for our command that returns an empty list, we assert that we actually receive an empty list as the response.
Now we are supplying the ERP system to use during testing as part of the systems.json (or .yml) file. We can use this mechanism for all our tests. Hence, you can adapt the beforeClass() method in the CostCenterServiceTest from step 4: replace the previous implementation with the same two statements as above in the GetCostCentersCommandTest:
@BeforeClass
public static void beforeClass() throws URISyntaxException
{
    mockUtil.mockDefaults();
    mockUtil.mockErpDestination();
}
Simply run mvn clean install as in the previous tutorials to test and build your application. Consider the following before deploying to Cloud Foundry.
Deploy the application on SAP Cloud Platform

For local testing (available since version 1.1.1), you need to similarly set the environment variable ALLOW_MOCKED_AUTH_HEADER=true on your local machine before starting the local server, in addition to supplying the destinations (as described in Step 4).
This wraps up the tutorial. Stay tuned for more tutorials on the SAP S/4HANA Cloud SDK on topics like caching and security!
Hi Ekaterina,
regarding the final part of your blog, I’ve tried with the latest 1.1.2 version of the archetype, but it seems that the Tomee plugin for local testing is not included. I had to manually add the plugin in the pom.xml file of the application.
However, even after this, the mvn tomee:run command seems to be working fine, but when I do I get an error: it says “This localhost page can’t be found”.
Do you have any tips?
Simmaco
Hi Simmaco,
I’ve just created a project as follows:
Then, I build and run the project with:
This is the plugin configuration that comes with the latest archetype:
Can you please share more details on your configuration?
Thanks!
Sander | https://blogs.sap.com/2017/06/23/step-5-resilience-with-hystrix/ | CC-MAIN-2017-43 | en | refinedweb |
Smoothx150 px) to the gallery folder.
CSS3 capable browsers will show a smoothly animated diagonal fade effect, while older browsers will fall back to a simpler but still perfectly usable non-animated version of the gallery.
The HTML
As usual, the first thing we do when starting work on a new project is to write the HTML.
index.html
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8"/>
    <title>Smooth Diagonal Fade Gallery with CSS3 Transitions</title>

    <!-- The Swipebox plugin -->
    <link href="assets/swipebox/swipebox.css" rel="stylesheet" />

    <!-- The main CSS file -->
    <link href="assets/css/style.css" rel="stylesheet" />

    <!--[if lt IE 9]>
        <script src=""></script>
    <![endif]-->
</head>
<body>

    <div id="loading"></div>
    <div id="gallery"></div>

    <!-- JavaScript Includes -->
    <script src=""></script>
    <script src="assets/swipebox/jquery.swipebox.min.js"></script>
    <script src="assets/js/jquery.loadImage.js"></script>
    <script src="assets/js/script.js"></script>

</body>
</html>
The gallery depends on the jQuery library, which I've included before the closing body tag. I have also added a great little lightbox plugin called Swipebox, but you can easily replace it with the lightbox of your choice. The two main divs are #loading and #gallery. The first holds a loading gif, and the second the gallery photos. The #gallery div is set to position:fixed so it takes the entire width and height of the page. The markup for the photos themselves is just as simplistic:
<a href="assets/photos/large/34.jpg" class="swipebox static" style="width:148px;height:129px;background-image:url(assets/photos/thumbs/34.jpg)"> </a>
The photos in the gallery are all 150x150 pixels, which means we will almost never achieve an exact fit for the entire page, unless we resize them a bit. This is exactly what has happened to the photo above, which is why it has a width and height value in its style attribute. You will see how we calculate this in the JS section.
Scanning for Photos with PHP
The photos are contained in two folders on the server - assets/photos/thumbs/ for the thumbnails, and assets/photos/large/ for the full sizes. With PHP, we will scan the folders and output a JSON with the file names. You could alternatively return the images from a database, but you will have to keep the same structure. Here is the script:
load.php
<?php

// Scan all the photos in the folder
$files = glob('assets/photos/large/*.jpg');

$data = array();

foreach($files as $f){
    $data[] = array(
        'thumb' => str_replace('large', 'thumbs', $f),
        'large' => $f
    );
}

// Duplicate the photos a few times, so that we have what to paginate in the demo.
// You most certainly wouldn't want to do this with your real photos.
// $data = array_merge($data, $data);
// $data = array_merge($data, $data);
// $data = array_merge($data, $data);

header('Content-type: application/json');
echo json_encode(array(
    'data' => $data,
));
Adding new photos to the gallery is as easy as copying the image and its thumbnail to the correct folder (both files should have the same name!). I have duplicated the photos a few times so we have a larger pool to show in the gallery, but you probably won't want to do this with your real photos.
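With the folder layout above, the script's JSON response takes the following shape (the file name shown here is just an example):

```json
{
    "data": [
        {
            "thumb": "assets/photos/thumbs/34.jpg",
            "large": "assets/photos/large/34.jpg"
        }
    ]
}
```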
Now that we have the JSON in place, let's write some JavaScript!
The JavaScript
Here is what we need to do:
- First we will issue an AJAX GET request to fetch all the photos on disk from the PHP script.
- Then we will calculate how many photos to show on the page and their sizes, depending on the dimensions of the window, so that they fit perfectly.
- We will preload all the images that will be shown on the current page with a preloader script that uses jQuery deferreds. In the meantime, we will show the #loading div.
- After everything is loaded, we will generate the markup for the photos and add them to the #gallery element. Then we will trigger the diagonal fade animation and initialize the Swipebox gallery.
- When the user clicks on an arrow, we will repeat steps 3 and 4 (with either a top-left or a bottom-right animation).
The code is too long for me to present in one go, so I will show it to you in parts. First, here is the overall structure that we will follow:
assets/js/script.js
$(function(){

    // Global variables that hold state
    var page = 0,
        per_page = 100,
        photo_default_size = 150,
        picture_width = photo_default_size,
        picture_height = photo_default_size,
        max_w_photos, max_h_photos,
        data = [];

    // Global variables that cache selectors
    var win = $(window),
        loading = $('#loading'),
        gallery = $('#gallery');

    gallery.on('data-ready window-resized page-turned', function(event, direction){
        // Here we will have the JavaScript that preloads the images
        // and adds them to the gallery
    });

    // Fetch all the available images with
    // a GET AJAX request on load
    $.get('load.php', function(response){

        // response.data holds the photos
        data = response.data;

        // Trigger our custom data-ready event
        gallery.trigger('data-ready');
    });

    gallery.on('loading',function(){
        // show the preloader
        loading.show();
    });

    gallery.on('loading-finished',function(){
        // hide the preloader
        loading.hide();
    });

    gallery.on('click', '.next', function(){
        page++;
        gallery.trigger('page-turned',['br']);
    });

    gallery.on('click', '.prev', function(){
        page--;
        gallery.trigger('page-turned',['tl']);
    });

    win.on('resize', function(e){
        // Here we will monitor the resizing of the window
        // and will recalculate how many pictures we can show
        // at once and what their sizes should be so they fit perfectly
    }).resize();

    /* Animation functions */

    function show_photos_static(){
        // This function will show the images without any animations
    }

    function show_photos_with_animation_tl(){
        // This one will animate the images from the top-left
    }

    function show_photos_with_animation_br(){
        // This one will animate the images from the bottom-right
    }

    /* Helper functions */

    function get_per_page(){
        // Here we will calculate how many pictures
        // should be shown on current page
    }

    function get_page_start(p){
        // This function will tell us which is the first
        // photo that we will have to show on the given page
    }

    function is_next_page(){
        // Should we show the next arrow?
    }

    function is_prev_page(){
        // Should we show the previous arrow?
    }

});
Some of the function definitions are left blank, but you can see them further down the page. The first group of variable definitions holds the state of the gallery - dimensions, array of pictures, current page, etc. - which allows for a cleaner separation between the logic and the data. We will use custom events for better code organization (by listening for and triggering arbitrarily named events). You can think of these event listeners as the methods of an object and the variables near the beginning as its properties.
After you've read through all the comments in the fragment above, proceed with the first event listener, which outputs the relevant slice of the images array depending on the current page:
gallery.on('data-ready window-resized page-turned', function(event, direction){

    var cache = [],
        deferreds = [];

    gallery.trigger('loading');

    // The photos that we should be showing on the new screen
    var set = data.slice(get_page_start(), get_page_start() + get_per_page());

    $.each(set, function(){

        // Create a deferred for each image, so
        // we know when they are all loaded
        deferreds.push($.loadImage(this.thumb));

        // build the cache
        cache.push('<a href="' + this.large + '" class="swipebox" ' +
            'style="width:' + picture_width + 'px;height:' + picture_height +
            'px;background-image:url(' + this.thumb + ')">' +
            '</a>');
    });

    if(is_prev_page()){
        cache.unshift('<a class="prev" style="width:' + picture_width +
            'px;height:' + picture_height + 'px;"></a>');
    }

    if(is_next_page()){
        cache.push('<a class="next" style="width:' + picture_width +
            'px;height:' + picture_height + 'px;"></a>');
    }

    if(!cache.length){
        // There aren't any images
        return false;
    }

    // Call the $.when() function using apply, so that
    // the deferreds array is passed as individual arguments.
    // $.when(arg1, arg2) is the same as $.when.apply($, [arg1, arg2])
    $.when.apply($, deferreds).always(function(){

        // All images have been loaded!

        if(event.type == 'window-resized'){

            // No need to animate the photos
            // if this is a resize event
            gallery.html(cache.join(''));
            show_photos_static();

            // Re-initialize the swipebox
            $('#gallery .swipebox').swipebox();
        }
        else{

            // Create a fade out effect
            gallery.fadeOut(function(){

                // Add the photos to the gallery
                gallery.html(cache.join(''));

                if(event.type == 'page-turned' && direction == 'br'){
                    show_photos_with_animation_br();
                }
                else{
                    show_photos_with_animation_tl();
                }

                // Re-initialize the swipebox
                $('#gallery .swipebox').swipebox();

                gallery.show();
            });
        }

        gallery.trigger('loading-finished');
    });
});
Although the images are added to the #gallery div in a single operation, they are set to
opacity:0 with css. This sets the stage for the animation functions. The first of them shows the photos without an animation, and the latter two animate them in a wave from the top-left or the bottom-right. The animation is entirely CSS based, and is triggered when we assign a class name to the images with jQuery.
function show_photos_static(){ // Show the images without any animations gallery.find('a').addClass('static'); } function show_photos_with_animation_tl(){ // Animate the images from the top-left var photos = gallery.find('a'); for(var i=0; i<max_w_photos + max_h_photos; i++){ var j = i; // Loop through all the lines for(var l = 0; l < max_h_photos; l++){ // If the photo is not of the current line, stop. if(j < l*max_w_photos) break; // Schedule a timeout. It is wrapped in an anonymous // function to preserve the value of the j variable (function(j){ setTimeout(function(){ photos.eq(j).addClass('show'); }, i*50); })(j); // Increment the counter so it points to the photo // to the left on the line below j += max_w_photos - 1; } } } function show_photos_with_animation_br(){ // Animate the images from the bottom-right var photos = gallery.find('a'); for(var i=0; i<max_w_photos + max_h_photos; i++){ var j = per_page - i; // Loop through all the lines for(var l = max_h_photos-1; l >= 0; l--){ // If the photo is not of the current line, stop. if(j > (l+1)*max_w_photos-1) break; // Schedule a timeout. It is wrapped in an anonymous // function to preserve the value of the j variable (function(j){ setTimeout(function(){ photos.eq(j).addClass('show'); }, i*50); })(j); // Decrement the counter so it points to the photo // to the right on the line above j -= max_w_photos - 1; } } }
Next is the function that listens for the window resize event. This can arise whenever the browser window is resized or when the device orientation is changed. In this function we will calculate how many photos we can fit on the screen, and what their exact sizes should be so they fit perfectly.
win.on('resize', function(e){ var width = win.width(), height = win.height(), gallery_width, gallery_height, difference; // How many photos can we fit on one line? max_w_photos = Math.ceil(width/photo_default_size); // Difference holds how much we should shrink each of the photos difference = (max_w_photos * photo_default_size - width) / max_w_photos; // Set the global width variable of the pictures. picture_width = Math.ceil(photo_default_size - difference); // Set the gallery width gallery_width = max_w_photos * picture_width; // Let's do the same with the height: max_h_photos = Math.ceil(height/photo_default_size); difference = (max_h_photos * photo_default_size - height) / max_h_photos; picture_height = Math.ceil(photo_default_size - difference); gallery_height = max_h_photos * picture_height; // How many photos to show per page? per_page = max_w_photos*max_h_photos; // Resize the gallery holder gallery.width(gallery_width).height(gallery_height); gallery.trigger('window-resized'); }).resize();
The last line causes the function to be triggered right after it is defined, which means that we have correct values from the start.
The following helper functions abstract away some of the most often used calculations:
function get_per_page(){ // How many pictures should be shown on current page // The first page has only one arrow, // so we decrease the per_page argument with 1 if(page == 0){ return per_page - 1; } // Is this the last page? if(get_page_start() + per_page - 1 > data.length - 1){ // It also has 1 arrow. return per_page - 1; } // The other pages have two arrows. return per_page - 2; } function get_page_start(p){ // Which position holds the first photo // that is to be shown on the give page if(p === undefined){ p = page; } if(p == 0){ return 0; } // (per_page - 2) because the arrows take up two places for photos // + 1 at the end because the first page has only a next arrow. return (per_page - 2)*p + 1; } function is_next_page(){ // Should we show the next arrow? return data.length > get_page_start(page + 1); } function is_prev_page(){ // Should we show the previous arrow? return page > 0; }
They may be only a couple of lines long and used only once or twice, but they do a great deal towards making our code more readable.
The CSS
And finally, here is the CSS code. The photos have zero opacity by default, and have a scale transformation of 0.8 applied to them. They also have the transition property set, which will cause every change of an attribute to be animated. The .show class, which is added by the animation functions, raises the opacity and the scale of the element, which is automatically animated by the browser.
assets/css/styles.css
#gallery{ position:fixed; top:0; left:0; width:100%; height:100%; } #gallery a{ opacity:0; float:left; background-size:cover; background-position: center center; -webkit-transform:scale(0.8); -moz-transform:scale(0.8); transform:scale(0.8); -webkit-transition:0.4s; -moz-transition:0.4s; transition:0.4s; } #gallery a.static:hover, #gallery a.show:hover{ opacity:0.9 !important; } #gallery a.static{ opacity:1; -webkit-transform:none; -moz-transform:none; transform:none; -webkit-transition:opacity 0.4s; -moz-transition:opacity 0.4s; transition:opacity 0.4s; } #gallery a.next, #gallery a.prev{ background-color:#333; cursor:pointer; } #gallery a.next{ background-image:url('../img/arrow_next.jpg'); } #gallery a.prev{ background-image:url('../img/arrow_prev.jpg'); } #gallery a.show{ opacity:1; -webkit-transform:scale(1); -moz-transform:scale(1); transform:scale(1); }
The .static class is set by the
show_photos_static() function and it disables all animations (with the exception of opacity, as we want the hover effect to still be smooth) and shows the photos immediately (otherwise on every resize you would see the diagonal fade). You can see the rest of this file in the tutorial files, which you can download from the button near the top of the page.
We're done!
I hope that you like this little experiment and find many uses for this smoothly animated gallery.
Presenting Bootstrap Studio
a revolutionary tool that developers and designers use to create
beautiful interfaces using the Bootstrap Framework.
WOW ! Very nice. Thanks for sharing
Amazing one guys, But I am waiting for something better when someone clicks on image.
I have error on my console XMLHttpRequest cannot load ~/Downloads/dfg/load.php. Origin null is not allowed by Access-Control-Allow-Origin.
You probably tried to load the gallery by clicking the index.html. This won't work, because browsers don't allow AJAX requests to local files for security reasons (even if they did, load.php will be returned as a text file and not executed).
To make it work, you will have to open it through a locally running web server (like apache as part of XAMPP/MAMP/WAMP etc) or to upload it to a web host.
This is realy great guys, thanks for sharing!
Amazing! Thanks!
Now that is something I will use over and over again! Great job man, really great!
Amazing stuff. Loved it
Hi, wow..nice gallery!! Thanks for the source and tutorial.
Hi Sir,
This is a great gallary, but I am a bit confused with the "next page arrow"
From your example, there are only 35 pic but you set per_page = 100 in the script. It should not show the next arrow right? But it just shows three pages from your example.
Can you elaborate how this works please?
Thanks!
The per_page is set to 100 by default, but this is re-calculated on load and on every window/resize. I could have written it as
per_page = nullwith the same effect.
Hi,
Great gallery!
Is it possible to change this gallery so that it shows perhaps 7-9 images per page in a constrained space (e.g. 450 x 450px instead of full background)?
Thanks!
Hi, can u please give me a code snippet to load images from database to the same structure that you have made?
Unfortunately I don't have one ready. You will have to read the database records and output the same json that I use. Maybe someone from the community will post an example eventually.
nice post~is it possible to use timthumb to generate thumbnails automatically? That would be more convenient for site management
Fantastic! However, how to automate the transition?
Here is one way to do it.
Great tutorial. Design is lovely, code is very clean, everything feels right. :)
On a side note, you probably should have used an unordered list instead of a bunch of lonely anchor tags. I think it would be semantically better.
Thank you! Yes, it would be semantically correct, but the markup is generated with JavaScript so there will be no benefits that I can think of, it will only make things a tiny bit more complicated on the CSS side.
Love the Swipebox.
This is the best gallery i've seen so far, is it possible to somehow incorporate the description from pictures contained in metadata? or done manually? thanks, and congrats for your work
Beautiful work!
Don't know why it won't work properly(e.g. photos won't show up or the fade effect only start from top left) if I change the name of the folder.
Simply Cool! great work. The only thing to be conscious of is that this technique isn't always that great, and seems to not work for some people, but it works with me in Chrome, Firefox and Safari.
This is beautiful!
I love your tutorials, brilliant stuff
Amazing. I'm going to build something like this. Thanks for the tutorial. | https://tutorialzine.com/2013/05/diagonal-fade-gallery | CC-MAIN-2017-47 | en | refinedweb |
Teleconference.2008.05.28/Minutes
These minutes have been approved by the Working Group and are now protected from editing. (See IRC log of approval discussion.)
Contents
- Present
- Boris Motik, Elisa Kendall, Ivan Herman, Jeremy Carroll, Rinke Hoekstra, Ian Horrocks, Alan Ruttenberg, Peter Patel-Schneider, Martin Dzbor, Evan Wallace, Michael Smith
- Regrets
- Jie Bao, Markus Krötzsch (conflicting meeting), Sandro Hawke (RIF f2f meeting)
- Chair
- Alan Ruttenberg
- Scribe
- Zhe Wu
Accept previous minutes
PROPOSED: accept previous previous minutes
Alan Ruttenberg: Jeff, need more work?
Jeff Pan: tried to incorporate Peter's comments
Alan Ruttenberg: consider it not ready
PROPOSED: Thank Jeremy Carroll for his exemplary service to the WG and wish him well in his new employment
Alan Ruttenberg: Jeremy's last meeting. we all thank him!
RESOLVED: Thanks Jeremy Carroll for his exemplary service to the WG and wish him well in his new employment
Action items status
Alan Ruttenberg: pending review actions
Action 143 Put editorial note in profiles document
Action 43 Develop scripts to extract test cases from wiki. closed.
Action 139 Sheperd/coordinate the patching process (per Issue 119)
Ian Horrocks: good progress made. don't mind leaving it open
Alan Ruttenberg: estimation?
Ian Horrocks: sometime before next F2F
Michael Schneider: The action itself can be closed. Issue 119 should be left open. expect to have the first draft somewhere in June so we have enough time before F2F. I am working on it.
Issues
Issue 21 and Issue 24 Imports and Versioning
Ian Horrocks: already have text based on Peter, Boris, AlanR's discussion
Ian Horrocks: alanr has some issues
Alan Ruttenberg: first one, not importing multi version of the same ontology. second, owl:incompatibleWith
Ian Horrocks: not clear to me that we can resolve it now
Michael Schneider: Issue 21 about import, it is not clear to me
Boris Motik: answer to alanr's comment. if two onotlogies are marked incompatible, it is better to say nothing when multi version imported. current spec says nothing when multi version imported. if you need validation, it is out of the scope.
Ian Horrocks: all versions are treated as advisory, rather than formal
Boris Motik: you get the union of multi versions. SPEC provides no mechanism for detecting this. you can implement on top of OWL 2.
Boris Motik: i implemented what I thought that we agreed.
Ian Horrocks: alanr, are you arguing about what you want, or the process
Alan Ruttenberg: at the workshop, we did not have a solution. But later peter sent a follow-up email
Ian Horrocks: alanr, do you like the SPEC to include precise statement on what will happen if two versions of the same ontology are imported?
Alan Ruttenberg: I like to say what peter said that ontology should not import multi versions
Boris Motik: Actually the SPEC is precise on that.
Jeremy Carroll: A bit disappointed that import TF is not decisive
Ian Horrocks: To be fair, everyone thought that we have agreed. and then implemented what we agreed.
Ian Horrocks: Jeremy said that SHOULD is the right thing to say
Peter Patel-Schneider: I agree with Jeremy
Peter Patel-Schneider: I am happy with the way it is. Put SHOULD in to make some folks in the WG happy. right now, SHOULD is not there
Boris Motik: sure. that is ok. if we can close the issue
Ian Horrocks: think it is useful. maybe we can converge and resolve it
Alan Ruttenberg: think it is just editorial. owl:incompatibleWith, shall we discuss it as a separate issue?
Ian Horrocks: it is semantic free
Alan Ruttenberg: it carries some weight on what people think their tools should do
Ian Horrocks: like to have some text clarifying "SHOULD." At least add a pointer.
Boris Motik: I changed the text. took out the offending paragraph. add "SHOULD NOT". . Hope it solves the problem
Jeremy Carroll: IanH raised a good point that SHOULD is advisory
Boris Motik: for SHOULD, MAY, ..., there is a disclaimer at the beginning
Alan Ruttenberg: don't want to change semantics as well
Boris Motik: prefer lower case and do a review. Later, change "should" systematically
Jeremy Carroll: you can always rephrase "should." It makes things simple for the readers
Ian Horrocks: we all in agreement now?
Alan Ruttenberg: Boris, are you going to put something similar in the document for incompatibleWith?
Ian Horrocks: are we voting on should => SHOULD, or incompatibleWith?
PROPOSED: spec should state that an ontology SHOULD NOT import two incompatible versions
PROPOSED: SPEC should state that an ontology SHOULD NOT?
Ian Horrocks: getting into too much details on wording. it would be better if you guys figure this out precisely off line
Ian Horrocks: we had enough discussion on this issue. come back next week
Issue 124
Issue 124 (newly open) The complement of a datarange is defined relative to the whole data domain
Alan Ruttenberg: consensus is this is how things are.
Michael Schneider: Boris's comments are valid. The only thing is we could have this thing in the primer. suspect people will ask how to do complement on just the data type.
PROPOSED: The complement of a datarange is defined relative to the whole data domain (close as resolved Issue 124)
Alan Ruttenberg: m_schnei can put a comment in the primer
Jeremy Carroll: for OWL2 FULL, complementOf should be on the data type.
Michael Schneider: in owl full, if you take complement of xsd:integer, then you get owl:Thing minus xsd:integer
Uli Sattler: this piece of advice perhaps is too detailed for primer. should go somewhere indeed
Alan Ruttenberg: time is past. let us continue on email
Issue easy keys
Alan Ruttenberg: just to check we are on the same page on easy keys. both easy keys/top bottom added to spec, with formal addition to language based on vote. can we do a straw poll
Peter Patel-Schneider: don't think your description match minutes
Peter Patel-Schneider: the straw poll wording in the minutes does not mention documentation change
Ian Horrocks: one question on top/bottom, do we agree on the name?
Alan Ruttenberg: not
Alan Ruttenberg: add them as top and bottom, . and an editorial note saying that names are not final
Boris Motik: implementing universal role is hard. I am not convinced it is "easy." I like to keep it separate from easy keys
Alan Ruttenberg: where do we stand on easy keys?
Peter Patel-Schneider: not aware of implementation of easy keys
Michael Schneider: missing major stakeholders, shall we defer the decision?
Alan Ruttenberg: my sense is that majority of this WG are stakeholders and they are for it
Boris Motik: thinking about implementing easy keys. not trivial, should not be impossible. We should have larger scale evaluation though.
Alan Ruttenberg: we should have general discussion on these next week
Issue 109
Issue 109: What is the namespace for elements and attributes in the XML serialization
Alan Ruttenberg: 1) namespace itself. 2) should we reuse the same namespace
Ivan Herman: namespace in terms of XML, and namespace used in RDF/OWL are very different. I am in favor of two different namespaces to avoid problems for OWL/XML
Ivan Herman: we decided to use owl namespace for the whole thing. so 1) is ruled out. don't care other three
Alan Ruttenberg: suggest 3. The year there give us possibility to evolve)
Uli Sattler: just curious to hear what problems will come up if we only have one namespace
Ivan Herman: there are lots of discussion in XML world of what exactly the semantics of namespace is. a word of caution is not to mix up things
Uli Sattler: then it seems like something we should not decide now. need more information before we can make a decision.
Ivan Herman: why it is a big problem to separate the two?
Ivan Herman: if we decide to have a different one. I don't care which
Issue 112
Alan Ruttenberg: Issue 112 What name to give to Universal Property. Consensus: not trying meaningful name
Issue 104
Issue 104 disallowed vocabulary OWL 1.1 DL does not have a disallowed vocabulary
Michael Schneider: in old OWL SPEC, have disallowed vocabulary. However, in the new RDF mapping document, we don't have something similar. e.g. having rdf:List is allowed in the new spec but not in the old spec
Boris Motik: don't think this belong to the mapping document. Check section 2.2.of FS
Ivan Herman: boris, fully agree. OWL/XML namespace should not have any new terms. it is irrelevant
Boris Motik: it does have elements from OWL/XMl schema. I will change it after tele conf
Michael Smith: on tests. make progress next week (before next F2F). I am willing to be aggregation point | http://www.w3.org/2007/OWL/wiki/Teleconference.2008.05.28/Minutes | CC-MAIN-2015-48 | en | refinedweb |
rpc_ns_group_delete- deletes a group attribute
#include <dce/rpc.h>
void rpc_ns_group_delete( unsigned32 group_name_syntax, unsigned_char_t *group_name, unsigned32 *status);
Input
- group_name_syntax
- An integer value that specifies the syntax of argument group_name. (See
Name Syntax Constantsfor the possible values of this argument.)
The value rpc_c_ns_syntax_default specifies the syntax specified by the RPC_DEFAULT_ENTRY_SYNTAX environment variable.
- group_name
- The name of the group to delete. The group name syntax is identified by the argument group_unsupported_name_syntax
Unsupported name syntax.
The rpc_ns_group_delete() routine deletes the group attribute from the specified entry in the name service database.
Neither the specified entry nor the entries represented by the group members are deleted.
Permissions Required
The application needs write permission to the target name service entry.
None.
rpc_ns_group_mbr_add()
rpc_ns_group_delete().
Please note that the html version of this specification may contain formatting aberrations. The definitive version is available as an electronic publication on CD-ROM from The Open Group. | http://pubs.opengroup.org/onlinepubs/9629399/rpc_ns_group_delete.htm | CC-MAIN-2015-48 | en | refinedweb |
The objects that represent our windows are of class ZHelloWorld_Window. This is a subclass of ZWindow and of ZPaneLocator:
class ZHelloWorld_Window : public ZWindow, public ZPaneLocator
{
public:
    ZHelloWorld_Window(ZApp* inApp);
    ~ZHelloWorld_Window();

    // From ZEventHr via ZWindow
    virtual void DoInstallMenus(ZMenuInstall* inMenuInstall);
    virtual void DoSetupMenus(ZMenuSetup* inMenuSetup);
    virtual bool DoMenuMessage(const ZMessage& inMenuMessage);

    // From ZPaneLocator
    virtual bool GetPaneLocation(ZSubPane* inPane, ZPoint& outLocation);

protected:
    ZWindowPane* fWindowPane;
    ZUICaptionPane* fHelloPane;
};
Looking in ZWindow.h, we see that ZWindow is a subclass of ZOSWindowOwner, ZMessageLooper, ZMessageReceiver and ZFakeWindow.
ZOSWindowOwner links ZWindow to the real windows supplied by the operating system's GUI layer. You will find the implementations of the different OS windows in each of the subdirectories of zoolib/platform: ZOSWindow_Mac, ZOSWindow_Windows and so on.
ZMessageLoopers may have messages posted to them and are responsible for dispatching them to the ZMessageReceivers, which receive and handle them. Windows handle messages not only to respond to menu commands, but also to GUI events like keypresses, mouse clicks, activations, notifications that the window needs to draw, resizing and so on. You can also define your own messages to allow different threads to communicate among each other or to themselves.
ZFakeWindow is a subclass of ZEventHr, which is defined to respond to most GUI events. ZHelloWorld_Window overrides several of ZEventHr's methods to provide menu handling, similar to the menu handling provided by the application object.
ZHelloWorld_Window is a subclass of ZPaneLocator. The ZPaneLocator class is a central concept in the management of ZooLib graphical user interfaces, and it is very powerful and flexible, but it seems to be difficult for most beginners to learn to work with. I had a hard time with it myself, but I found it very worthwhile to learn how to use it well. I will discuss it in some detail in this book, returning to it several times.
ZPaneLocators serve several functions, the first of them being the layout of widgets in windows. ZPaneLocators have a number of other duties that I will get to later.
In most GUI frameworks, the location and size of each widget are stored as member variables in the widget. This is even the case for non-object oriented toolkits, such as the Mac OS Control Manager, where a Mac OS Button stores its own location in a data structure.
This works well for the most part, but is difficult to work with when the layout of the window is complex and must be flexible. If we want the widgets to rearrange themselves as the window is resized, or to automatically adjust for the width of label text that may be translated into different languages, it is hard for the individual widgets to know how to adjust.
It is particularly inflexible if the windows are designed with a graphical layout tool that saves the widget coordinates in files, such as Mac or Windows resource files.
In ZooLib, individual GUI widgets are not responsible for knowing their own sizes or locations. Instead, they hold a pointer to their ZPaneLocator, and any inquiries about dimensions are passed on to the locator. Typically a ZPaneLocator manages a number of different widgets and can carry out the calculations needed to keep them arranged relative to each other.
ZHelloWorld_Window is a simple ZPaneLocator; it only serves to provide the locations of its subpanes:
bool ZHelloWorld_Window::GetPaneLocation(ZSubPane* inPane, ZPoint& outLocation)
{
    if (inPane == fHelloPane)
    {
        ZPoint theSize = inPane->GetSize();
        ZPoint superSize = inPane->GetSuperPane()->GetInternalSize();
        outLocation = (superSize - theSize) / 2;
        return true;
    }
    return ZPaneLocator::GetPaneLocation(inPane, outLocation);
}
GetPaneLocation is passed a pointer to the ZSubPane whose location is needed, and a reference to the ZPoint where the location is to be stored. The code here tests if the subpane is the one whose pointer is stored in the member variable fHelloPane, if it is, it calls the subpane's GetSize() method to find its size, and the pane's superpane's GetInternalSize() method to find the size of the pane it is inside of. Then it divides the vector difference of these by two to get the value to place in outLocation. This has the effect of centering fHelloPane in its superpane.
GetPaneLocation returns true if it handled the call by supplying the location, otherwise it passes on the call by returning the result of ZPaneLocator::GetPaneLocation. With this, it is possible to chain PaneLocators that handle different responsibilities.
Now what does ZSubPane::GetSize() do? We can have a look at the source code in ZPane.cpp:
ZPoint ZSubPane::GetSize()
{
    ZPoint theSize;
    if (fPaneLocator && fPaneLocator->GetPaneSize(this, theSize))
        return theSize;
    if (fSuperPane)
        return fSuperPane->GetInternalSize();
    return ZPoint::sZero;
}
If the pane has a non-nil ZPaneLocator pointer, it calls GetPaneSize to ask the pane locator for the size. If it has no locator, it asks for the internal size of its superpane, so a pane without a pane locator fills up its whole superpane. If it has no superpane either, it defaults to (0,0). Thus we see that by default, ZSubPanes do not know about their sizes on their own. The code for ZSubPane::GetLocation() is similar.
It is possible to override GetSize() and provide a size directly if you want to do so. That makes the most sense for widgets that will always be the same size. Alternatively, you can do this in a superpane that wants to set its size to just surround all its subpanes.
The constructor for the ZHelloWorld_Window calls its base class constructor, passing it the ZApp pointer (used as a ZWindowSupervisor pointer) and a pointer to a ZOSWindow it has just created. It also constructs its other base class, ZPaneLocator, by passing nil as the next locator in the chain, to indicate that there are no others. Then it creates the window's content:
ZHelloWorld_Window::ZHelloWorld_Window(ZApp* inApp)
:   ZWindow(inApp, sCreateOSWindow(inApp)),
    ZPaneLocator(nil)
{
    // ... the body of the constructor is examined statement by statement below ...
}
You will need to provide a function like sCreateOSWindow for each window variation you wish to create. If it is a member of your window's class, it should be declared static, as it is used in the constructor's initializer list, when the window object is not completely constructed yet. (In general you should never pass "this" as a parameter to functions called from initializer lists, not even implicitly by calling your own non-static member functions. It is permissible to pass "this" to base class member functions, as any base classes have already been constructed.)
sCreateOSWindow is responsible for creating the real window on the screen that is managed by the operating system:
static ZOSWindow* sCreateOSWindow(ZApp* inApp)
{
    ZOSWindow::CreationAttributes attr;
    attr.fFrame = ZRect(0, 0, 200, 80);
    attr.fLook = ZOSWindow::lookDocument;
    attr.fLayer = ZOSWindow::layerDocument;
    attr.fResizable = true;
    attr.fHasSizeBox = true;
    attr.fHasCloseBox = true;
    attr.fHasZoomBox = true;
    attr.fHasMenuBar = true;
    attr.fHasTitleIcon = false;
    return inApp->CreateOSWindow(attr);
}
First you initialize a ZOSWindow::CreationAttributes structure with your options for the size, appearance and behaviour of the window. Note the fLook and fLayer options; the look and feel of the window are specified separately. Using ZOSWindow::lookDocument and ZOSWindow::layerDocument creates an ordinary kind of window. You can also create windows with appropriate appearances for modal dialogs, movable modal dialogs, tool palettes and so on. From ZOSWindow.h:
enum Look
{
    lookDocument,
    lookPalette,
    lookModal,
    lookMovableModal,
    lookAlert,
    lookMovableAlert,
    lookThinBorder,
    lookMenu,
    lookHelp
};
ZooLib allows windows to be managed in different ways, to provide normal window behaviour, or modal dialogs (windows that must be dealt with by the user before work can continue), windows that float above the rest, and "sinkers" or windows that stay at the bottom of the heirarchy. The selections available are again found in ZOSWindow.h:
enum Layer
{
    layerDummy,
    layerSinker,
    layerDocument,
    layerFloater,
    layerDialog,
    layerMenu,
    layerHelp,
    layerBottomMost = layerDummy,
    layerTopMost = layerHelp
};
Now we examine the body of ZHelloWorld_Window's constructor, statement by statement.
First we set the window's title.
Then we set the window's background inks. These are the colors that will be drawn if nothing else is - that is, the window will be erased with these colors when an update starts, before the contents are drawn. There are two inks that are provided to SetBackInks; the first is used when the window is active, and the second is used when the window is in the background.
Here is an example of using a factory, in this case the ZUIAttributeFactory, to enable a standard appearance for the application. It calls GetInk_WindowBackground_Dialog to get the normal ink for dialog windows according to the platform standard. For the deactivated ink, we construct a fixed yellow color ink; thus the window will turn yellow when it is no longer in the front.
When a window is constructed, it has no panes in it. The panes are where the actual drawing takes place, and they are the ultimate recipients of UI events like mouse clicks and keystrokes. First we must construct a special pane that takes up the whole window, by allocating a ZWindowPane. We store a pointer to it in fWindowPane.
Finally we get to the real meat of our application, shouting "Hello World!":
fHelloPane = ZUIFactory::sGet()->Make_CaptionPane(fWindowPane, this, "Hello World!");
We obtain a pointer to the ZUIFactory through its static method sGet(). Then we call Make_CaptionPane to allocate a new ZUICaptionPane that says "Hello World!". The interface to Make_CaptionPane is:
virtual ZUICaptionPane* Make_CaptionPane(ZSuperPane* inSuperPane, ZPaneLocator* inLocator, const string& inLabel);
All ZSubPanes will need to have pointers to their ZSuperPane and ZPaneLocator provided. It is permissible to pass nil for each of these; a nil superpane indicates the subpane is not attached to any window yet, and a nil ZPaneLocator indicates the defaults are to be used for such things as pane size and location.
Once a ZUICaptionPane is placed in a window, it takes care of keeping itself updated. No more work is required to spread our greetings to the world. However, if you want to implement your own subclass of ZSubPane to provide custom drawing, you should do the actual rendering in an override of ZSubPane's DoDraw method.
You never explicitly delete a ZWindow pointer. Ordinarily ZWindows are deleted by ZooLib in response to the user clicking the window's close box. But you can provide a destructor for your window. ZHelloWorld_Window does not do anything in its destructor:
ZHelloWorld_Window::~ZHelloWorld_Window() {}
A lot happens behind the scenes in the base class destructors though. ZooLib will delete all the subpanes, the menu bar and menus, and dispose of the operating system window.
You can close a window yourself, and cause its eventual deletion, by calling ZWindow::CloseAndDispose().
On all the systems besides the Mac OS, normal behaviour is for the application to quit after the last window is closed. On the Mac, the application normally stays running with only the top menu bar remaining. ZooLib does not keep count of your windows for you, so it is possible to close all of the windows and have the application object still running. This is the case with ZHelloWorld as the source is provided, and it is a bug. If you close all the windows without selecting "Quit" from the File menu first, the process will be left running with no user interface. You will have to kill the process with the Windows task manager or the kill command on BeOS or Unix.
One simple way to deal with this is for your application object to keep a count of open windows. When a new window is created, it sends a message to the ZApp informing of its birth. It also sends a notification from its destructor. When the count of windows reaches zero, you call PostRequestQuitMessage from the ZApp object. ZooLib will take care of shutting down your application object, and then its Run method will return to ZMain, where you will then return and ZooLib will terminate the program.
The window provides a menu in addition to those provided by the application object:
void ZHelloWorld_Window::DoInstallMenus(ZMenuInstall* inMenuInstall)
{
    ZWindow::DoInstallMenus(inMenuInstall);
    ZRef<ZMenu> helloMenu = new ZMenu;
    inMenuInstall->Append("&Hello", helloMenu);
    helloMenu->Append(mcHello_Again, "&Hello World Again!", 'H');
    helloMenu->AppendSeparator();
    helloMenu->Append(mcHello_Pixmap, "My New Niece");
    helloMenu->Append(mcHello_TextResource, "Text Resource");
    helloMenu->Append(mcHello_TextHardCoded, "Text (Hard coded)");
}
ZHelloWorld_Window::DoInstallMenus creates a menu titled "Hello" that has several items in it: one for creating a new Hello World window, one for showing a picture of Andy's newborn niece Amy from a BMP graphic stored in a resource file, one for retrieving the message from a resource file, and one for inserting hardcoded text into the message.
Before ZooLib responds to a mouse click on the menu bar, it calls DoSetupMenus to allow your code to enable or disable menu items according to the current state of the application or document. It is important to provide the DoSetupMenus implementation because the menu items are disabled by default:
void ZHelloWorld_Window::DoSetupMenus(ZMenuSetup* inMenuSetup)
{
	ZWindow::DoSetupMenus(inMenuSetup);
	inMenuSetup->EnableItem(mcClose);
	inMenuSetup->EnableItem(mcHello_Again);
	inMenuSetup->EnableItem(mcHello_Pixmap);
	inMenuSetup->EnableItem(mcHello_TextResource);
	inMenuSetup->EnableItem(mcHello_TextHardCoded);
}
DoSetupMenus is passed a pointer to a ZMenuSetup object. Call its EnableItem member function with the menu command constant to enable an item.
I need to check with Andy about this; I see a comment in ZMenu.h that indicates that EnableItem is deprecated.
Menu messages may be handled by a window or by its ZWindowSupervisor, the application object in our case. This allows common functions to be handled in a central location by the application, but allows menu commands that are particular to a document to be handled by the window that holds the document.
bool ZHelloWorld_Window::DoMenuMessage(const ZMessage& inMenuMessage)
{
	switch (inMenuMessage.GetInt32("menuCommand"))
	{
		case mcClose:
		{
			this->CloseAndDispose();
			break;
		}
		case mcHello_Again:
		{
			ZWindow* theWindow = new ZHelloWorld_Window(ZApp::sGet());
			theWindow->Center();
			theWindow->BringFront();
			theWindow->GetLock().Release();
			return true;
		}
		case mcHello_Pixmap:
		{
			ZDCPixmap thePixmap = ZUIUtil::sLoadPixmapFromBMPResource(kRSRC_BMP_Amy);
			fHelloPane->SetCaption(new ZUICaption_Pix(thePixmap), true);
			break;
		}
		case mcHello_TextResource:
		{
			string theText = ZString::sFromStrResource(kRSRC_STR_HelloWorld);
			ZRef<ZUIFont> theUIFont = ZUIAttributeFactory::sGet()->GetFont_SystemLarge();
			ZRef<ZUICaption> theUICaption = new ZUICaption_Text(theText, theUIFont, 0);
			fHelloPane->SetCaption(theUICaption, true);
			break;
		}
		case mcHello_TextHardCoded:
		{
			string theText = ZString::sFromStrResource(kRSRC_STR_HelloWorld);
			ZDCFont theDCFont = ZDCFont::sApp9;
			theDCFont.SetStyle(theDCFont.GetStyle() | ZDCFont::underline);
			ZRef<ZUIFont> theUIFont = new ZUIFont_Fixed(theDCFont);
			ZRef<ZUICaption> theUICaption = new ZUICaption_Text("Hello World! (hard coded)", theUIFont, 0);
			fHelloPane->SetCaption(theUICaption, true);
			break;
		}
	}
	return ZWindow::DoMenuMessage(inMenuMessage);
}
Another important concept within ZooLib is the ZMessage. ZMessages allow formatted packets of data to be communicated between threads or within a thread. ZMessages store data of different types that are accessed by name and type. In the case of a menu command, the data stored is a 32-bit integer, and its name is "menuCommand". There will be much to say about ZMessages later on.
Here we switch according to the command after retrieving its value:
switch (inMenuMessage.GetInt32("menuCommand"))
The different "mc" constants are defined at the top of the file as integer values following mcUser:
#include "ZMenuDef.h"
//...
#define mcHello_Again         mcUser + 1
#define mcHello_Pixmap        mcUser + 2
#define mcHello_TextResource  mcUser + 3
#define mcHello_TextHardCoded mcUser + 4
If you look in ZMenuDef.h, you will see that it defines a number of standard UI commands, like mcAbout, which is the command to display an "About Box". It also defines mcUser. The command numbers less than mcUser are reserved for definition by ZooLib, although usually intended to be implemented by your own code. The values above mcUser are for your use as you please.
We'll go into the details of what each of the menu commands do later. But for now notice the last line of the function, which passes off any unknown menu commands to the window:
return ZWindow::DoMenuMessage(inMenuMessage);
The Question is:
How can I print to HP Laserjets via UCX?
What can you tell me about UCX$TELNETSYM* logicals.
I have looked, found several articles, and as a result upgraded to the
latest version of UCX.
And installed the ECO.
But I still get a leading blank page. One message says to utilize the
UCX$TELNETSYM* logicals.
What can you tell me about them? Any suggested values? What files should I
have these logical definitions.
Thanks in advance,
Lyndon
The Answer is :
The following discusses a variety of common questions concerning IP
printing using telnet and lpr/lpd, including commonly-seen problems
such as blank pages and the printer-generated banner pages, and the
configuration of various Compaq and third-party printers, DECserver
terminal server printing, and discussions of device control libraries,
and a variety of device-control issues. Numerous pointers to topics
associated with IP printing are included at the end of this topic.
--
For blank-pages using standard serial queues and device control libraries:
Include a reset module in the device control library that sends the
OpenVMS print symbiont the appropriate string.
$ LIBRARY/TEXT/EXTRACT=reset_module_name -
/OUTPUT=reset_module_name.TXT
SYS$LIBRARY:devctl-name.TLB
If you are using the default device control library SYSDEVCTL.TLB
and cannot locate it, see below for creation instructions.
Use any editor to add the blank page suppression control sequence to
the reset module text extracted from the library. The sequence is:
{ESC}]VMS;2{ESC}\
Where {ESC} indicates the ASCII escape character. (This sequence
is processed by and controls the activities of the OpenVMS print
symbiont, and the sequence -- when correctly formated -- will not
be sent to the printer for processing. This sequence is specific
to OpenVMS and the OpenVMS Print symbiont, and not the printer.)
Alternatively, depending on the particular printer, you may need to
use the following PCL reset module sequence to suppress the formfeed:
{ESC}P{ESC}E{ESC}\
Where {ESC} represents the Escape character, {ESC}E is a PCL command,
and {ESC}\ is the PCL terminator. (This sequence will be sent to the
printer, and will be interpreted by the printer.)
As shown above, device control modules requiring any embedded HP PCL
sequences should always be bracketed between the sequence {ESC}P and
the sequence {ESC}\.
Stop all queues using the device control library, and then insert the
(updated) module back into the device control library:
$ STOP/QUEUE/NEXT queue-name
$ LIBRARY/INSERT/TEXT -
SYS$LIBRARY:devctl_name.TLB -
reset_module_name.TXT
Restart each queue using this device control library.
Certain printers may require a different approach when suppressing
the formfeed (when accessing the printer via telnet symbiont), via
the definition of the appropriate logical name:
$ DEFINE/SYSTEM/EXECUTIVE UCX$TELNETSYM_SUPPRESS_FORMFEEDS 1
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$TELNETSYM_SUPPRESS_FORMFEEDS 1
If you wish individual control over the settings for specific telnet
queues, you can define the logical name appropriately, start the
queue(s) requiring the particular setting, and then redefine the
logical name and start the remainder of the telnet queues.
For additional information on device control libraries, see topic
(2771).
Also determine if the print queue should be set to /NO_INITIAL_FF:
INITIALIZE/QUEUE/NO_INITIAL_FF or SET QUEUE/NO_INITIAL_FF
For additional information and pointers for TELNETSYM printing and
blank pages, please see the TCP/IP Services management documentation
and please see topic (8175).
--
The HP PCL code for portrait mode follows:
{ESC}&l0O
This is the escape character, an ampersand, a lowercase letter L,
a zero, and an uppercase letter O.
--
To associate a device control library module with a particular defined
printing form, use a DCL command similar to the following:
$ DEFINE/FORM -
/PAGE_SETUP=page_setup_module_name -
/SETUP=device_setup_module_name -
form_name form_number
The /PAGE_SETUP qualifier causes module page_setup_module_name to
be sent to the printer before each page is printed, while the
/SETUP qualifier causes the device_setup_module_name to be sent
for each new print file sent to the printer.
Also of interest here can be the /STOCK qualifier (not shown),
as this can control the behaviour of the printer when specific
printing stock is (or is not) loaded into the printer. Using
the stock, you can prevent program listings from being printed
out on, say, the blank check or the mailing label stock that
happens to be loaded into the printer. The name of the stock
defaults to the name of the form. If you do not wish to
specify the stock, you will want to use /STOCK=DEFAULT when
defining the form.
To associate the form with a particular print queue for all jobs
printed to the queue, use the DCL command:
$ INITIALIZE/QUEUE/DEFAULT=(FORM=form_name) queue_name
Related commands include SET QUEUE, START/QUEUE, and PRINT/FORM.
The default device control library is named SYSDEVCTL.TLB. See
below for library creation instructions.
--
For blank-pages using TCP/IP Services for OpenVMS symbionts:
For information on the printer port, contact the printer vendor.
Various HP printers have traditionally used port 9100. (See
the section on printer ports below for other printers.) You can
confirm this with the vendor, or with commands such as:
$ telnet printername 9100
$ SET HOST/TELNET printername/port=9100
Issue the above command to connect to the printer's raw port (NOT
the telnet port!) via telnet, type some text, and then enter a
printer specific command (such as {CTRL/L} or {CTRL/D} on HP
printers) to flush the buffer. If the printer accepts this
sequence and prints something resembling what was entered, then
you know that is the right port. 9100 is typically a raw TCP port.
For example:
$ telnet prtnic /port=9100
%TELNET-I-TRYING, Trying ... ww.xx.yy.zz
%TELNET-I-SESSION, Session 01, host prtnic, port 9100
-TELNET-I-ESCAPE, Escape character is ^]
Now enter a {CTRL/T} character.
If the printer is set to Postscript and is functioning correctly,
the response should appear similar to the following:
%%[ status: idle ]%%
Port 9100 may or may not be the port used on any particular printer
NIC. (No, the OpenVMS Wizard does not have a list of the ports used
on arbitrary printers -- please check the printer documentation or
check with the printer vendor.)
DIGITAL TCP/IP Services for OpenVMS (UCX) V4.1 or V4.2 (or later,
as available) is recommended. Use of the appropriate ECO kits is
also recommended.
The typical command to create and start the symbiont queue under
UCX can look like this:
$ initialize/queue/start queue_name -
/on="UCX$QUEUE:insert_port_here" -
/process=ucx$telnetsym
--
IP Printer Ports:
For information on ports chosen for various printers, NICs, terminal
servers, or DECserver devices, please see the documentation that is
associated with the device. Also please see listings in topics (3960),
(4045), and (6975).
--
TCP/IP Services Logical names:
Related logical names -- covered in the TCP/IP Services documentation
in rather more detail -- can include (on releases prior to TCP/IP
Services V5.0) UCX$TELNETSYM_IDLE_TIMEOUT, UCX$TELNETSYM_RAW_TCP, and
UCX$TELNETSYM_SUPPRESS_FORMFEEDS. Alternatively, see the documentation
for TCPIP$TELNETSYM_IDLE_TIMEOUT, TCPIP$TELNETSYM_RAW_TCP, and
TCPIP$TELNETSYM_SUPPRESS_FORMFEEDS on TCP/IP Services V5.0 and later.
Messages such as "open_socket_ast invoked with bad IOSB 660: connect
to network object rejected." in the log file can be triggered by a
retry interval that is too short for the particular printer -- faster
printers can use shorter intervals, though faster polling can also
increase host overhead. (Values such as 15 to 30 seconds can be
appropriate for many printers.) To tell TCP/IP Services LPD to retry
the print job every minute:
$ DEFINE/SYSTEM/EXECUTIVE UCX$LPD_RETRY_INTERVAL "0 00:01:00.00"
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$LPD_RETRY_INTERVAL "0 00:01:00.00"
In particular, you may also need to define one (or more) of
the following logical names:
$ DEFINE/SYSTEM/EXECUTIVE UCX$TELNETSYM_RAW_TCP 1
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$TELNETSYM_RAW_TCP 1
$ DEFINE/SYSTEM/EXECUTIVE UCX$TELNETSYM_SUPPRESS_FORMFEEDS 1
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$TELNETSYM_SUPPRESS_FORMFEEDS 1
$ DEFINE/SYSTEM/EXECUTIVE UCX$TELNETSYM_IDLE_TIMEOUT "0 00:00:30.0"
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$TELNETSYM_IDLE_TIMEOUT "0 00:00:30.0"
Similar messages such as "open_socket_ast invoked with bad IOSB 556:
device timeout" can be triggered by device sharing, such as another
host system printing to the same printer. If the idle timeout setting
(or the equivalent mechanism on the other host) is set to a relatively
large value, the host with the long timeout can block this and other
hosts from accessing the printer. It is possible that the symbiont
within the printer itself has its own timeout setting or is otherwise
busy and is simply not responding sufficiently quickly. (Accordingly,
this IOSB 556 error can be benign, and you can silence the OPCOM
message using the NO_OPCOM logical name described below, or by
adjusting the idle timeouts involved.) It is also possible that the
IOSB 556 device timeout error indicates a network problem, as this
error can also result from IP routing problems between the host and
the printer. (Use ping, and also try to telnet into the printer.)
Also consider the settings of the NO_OPCOM and LOG_KEEP logical names.
NO_OPCOM is a binary value, with 1 to disable OPCOM messages and 0 to
enable reception:
$ DEFINE/SYSTEM/EXECUTIVE UCX$TELNETSYM_NO_OPCOM 1
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$TELNETSYM_NO_OPCOM 1
LOG_KEEP is set to the number of log files to retain:
$ DEFINE/SYSTEM/EXECUTIVE UCX$TELNETSYM_LOG_KEEP 9
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$TELNETSYM_LOG_KEEP 9
Check your TCP/IP Services documentation for further details.
--
HP JetDirect Printer Banner Page Settings:
For information on the configuration, commands, and operation of
HP (formerly known as Hewlett-Packard) printers, and for information
on the current printer firmware version and on any associated upgrade
procedures, please contact HP.
Various HP JetDirect printer settings can clearly become involved
in how the printer functions, and some of the printer settings are
visible via the HP JetAdmin tool (and some are reportedly not visible
via JetAdmin).
One of the most common items folks wish to control is the printer's
own printer-generated banner page. This printer-generated banner
displays the user, host, class and job, and generally looks similar
to the following:
User: user
Host: host
Class: Job: job
Using the instructions for accessing the printer via telnet that
are listed in an earlier section, enter the following administrative
commands directly to the HP printer:
> banner: 0
> quit
Other JetAdmin options are visible via the ? (help) command at the
printer's administrative (>) prompt.
Other potential options that can be involved include the processing
of linefeed (LF) and carriage return (CR) in the printer -- please
contact the printer vendor for assistance with this configuration,
and determining how to alter these settings.
--
Web-based Printer Management Interface:
Some printers may have an integrated web (httpd) server for printer
configuration and control; check the printer documentation for the
printer's capabilities. If the printer does contain a web server as
a management interface, direct your web browser at the printer's IP
address and management port. Check the printer documentation for the
IP port -- common httpd ports include 80 and 8080 -- and for details
of the management interface.
--
Typical Postscript Printing:
With Postscript printers on OpenVMS, the usual approach is the DCPS
package. DCPS has raw TCP/IP support in V1.4 and later -- the telnet
symbiont also uses the raw TCP/IP support. (Postscript printers use
bidirectional communications, and the DCPS-OPEN PAK enables support
for a variety of third-party Postscript printers.)
--
If you need and do not have a default device control library (which is
the case if a device control library has not already been created for
you on your system), you can create one with the command:
$ LIBRARY/CREATE/TEXT SYS$COMMON:[SYSLIB]SYSDEVCTL.TLB
--
With HP 4000 and HP 5000 series printers (and with printers in general),
ensure that the printers are running the latest firmware version. In
the case of the HP 4000 and HP 5000 series printers, these printers must
be running a minimum of "19980714 MB3.68". This firmware version number
is displayed as Firmware Datecode on the printer's configuration page.
(Also see topic (1797).)
If you are using an HP 4000 or HP 5000 or derivative printer -- or a
printer from another vendor -- please contact the vendor and request
the proper firmware version. The HP 4000 and HP 5000 series printer
firmware is shipped by HP on a SIMM, and this SIMM is installed directly
into the printer.
Depending on the operation of the symbiont and the speed at which the
printer is available, TCP/IP Services may need to back off and retry
the submission -- having a value that is too large can result in delays
before jobs are started, and having a value that is too low can result
in polling overhead. To adjust the interval, use the following:
$
$! Tell the LPD symbiont to retry the job every minute:
$! on V5.0 and later...
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$LPD_RETRY_INTERVAL "0 00:01:00.00"
$! on versions prior to V5.0...
$ DEFINE/SYSTEM/EXECUTIVE UCX$LPD_RETRY_INTERVAL "0 00:01:00.00"
--
Printer Character or Print Format Interpretation Problems:
If you are seeing printouts sent to a remote printer appearing with all
output sent to the printer printed on a single line on the paper, then
check the settings of the target LPD or (more commonly) Telnet server
in use -- if you are printing via a DECserver terminal server, then see
the SET PORT commands for the TELNET CLIENT and the TELNET SERVER.
(Topic (4811) here in the Ask The Wizard area has an introduction to
these particular DECserver settings.) How other servers on other boxes
might control these particular settings will vary.
Most printers have programmable settings for the interpretation of tab
characters, line feeds (see previous, and see topic (4811)), and other
such characters. Also, most printers will maintain these and other
settings for the active print job, the default settings when the printer
powers up, and the factory default settings for the printer. In other
words, if the printer is not printing your output correctly -- but the
data is getting to the printer -- then you will need to check the
printer-specific documentation for details, commands, and options that
will be available to you. And once you get it working, make sure you
save the settings.
If you wish to control the settings of the printer as part of the
processing of the OpenVMS print job, you will need a device control
library. (See the section on device control libraries for additional
details.)
--
HP PCL Landscape Printing:
For HP PCL landscape processing, as well as an example of a device
control library and a device control library module, see topic (5271).
--
The TCP/IP Services package typically expects to use the host name of
a printer. If this name is not known via DNS, you will want to declare
it via a command such as the TCP/IP Services management utility (TCPIP
on V5.0 and later, UCX on earlier releases) command:
SET HOST "printer.domain.etc"/alias="myhost"/address=w.x.y.z
--
As for some of the various discussions of the HP LaserJet 4000 series
printers, other HP printers, and various IP printing discussions, please
see the following topics here in the Ask The Wizard area:
(546), (919), (1020), (1429), (1781), (1797), (2221), (2267),
(2276), (2312), (2631), (2696), (2771), (3041), (3280), (3960),
(4045), (4811), (5271), (5431), (5737), (6975), (8175), etc.
This topic (1020) is the best starting point. | http://h71000.www7.hp.com/wizard/wiz_1020.html | CC-MAIN-2015-48 | en | refinedweb |
XYZ stock's price and dividend history are as follows:

Year    Beginning-of-year price    Dividend paid at year end
2010    $100                       $4
2011    $110                       $4
2012    $90                        $4
2013    $95                        $4

1. What are the arithmetic average rate of return and the geometric average rate of return? These answers should be in percentages and accurate to the hundredths.

2. Suppose that an investor buys three shares of XYZ at the beginning of 2010, buys another two shares at the beginning of 2011, sells one share at the beginning of 2012, and sells all four remaining shares at the beginning of 2013. What is the dollar-weighted rate of return (IRR)? Your answer should be in percentages and accurate to the hundredth.
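As a sketch of question 1 only (the class and method names are my own, not from the problem set): each year's holding-period return is the price change plus the dividend, divided by the beginning price, and the two averages follow from those yearly returns.

```java
public class ReturnCalc {
    // Holding-period return for one year: (endPrice - beginPrice + dividend) / beginPrice
    static double holdingPeriodReturn(double begin, double end, double dividend) {
        return (end - begin + dividend) / begin;
    }

    // Arithmetic average: simple mean of the yearly returns
    static double arithmeticAverage(double[] returns) {
        double sum = 0;
        for (double r : returns) sum += r;
        return sum / returns.length;
    }

    // Geometric average: compound the yearly returns, then annualize
    static double geometricAverage(double[] returns) {
        double product = 1;
        for (double r : returns) product *= 1 + r;
        return Math.pow(product, 1.0 / returns.length) - 1;
    }

    public static void main(String[] args) {
        double[] r = {
            holdingPeriodReturn(100, 110, 4),  // 2010: 14.00%
            holdingPeriodReturn(110, 90, 4),   // 2011: -14.55%
            holdingPeriodReturn(90, 95, 4),    // 2012: 10.00%
        };
        System.out.printf("Arithmetic: %.2f%%%n", 100 * arithmeticAverage(r)); // 3.15
        System.out.printf("Geometric:  %.2f%%%n", 100 * geometricAverage(r));  // 2.33
    }
}
```

The dollar-weighted return in question 2 is the IRR of the investor's dated cash flows (purchases as outflows, dividends and sales as inflows), which this sketch does not compute.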
Doubts regarding Hashtable - Java Beginners
information,
Thanks... Is it possible to create a hashtable like this?
java.util.Hashtable hashtable=new...(12,13,10,1));
since we get the key of hashtable from the database.
When I tried
Java hashtable
Java hashtable What is hash-collision in Hashtable and how it is handled in Java
Java collection -Hashtable
Java collection -Hashtable What is Hashtable in java collection?
Java collection -Hashtable
The hashtable is used to store value...
import java.util.*;
public class HashtableDemo { // class name and missing lines reconstructed; the fragment was truncated
    public static void main(String[] args) {
        Map map = new Hashtable();
        map.put("key", "value");
        System.out.println(map.get("key"));
    }
}
Lang and Util Base Libraries
Lang and Util Base Libraries
The Base libraries provide us the fundamental features and functionality of
the Java platform.
Lang and Util Packages
Lang and Util packages provide the fundamental classes and Objects of
primitive types
Java hashmap, hashtable
Java hashmap, hashtable When are you using hashmap and hashtable
util packages in java
util packages in java write a java program to display present date and after 25days what will be the date?
import java.util.*;
import java.text.*;
class FindDate{
public static void main(String[] args){
    SimpleDateFormat sdf = new SimpleDateFormat("dd/MM/yyyy");
    Calendar cal = Calendar.getInstance();
    System.out.println("Present date: " + sdf.format(cal.getTime()));
    cal.add(Calendar.DATE, 25); // add 25 days to the current date
    System.out.println("After 25 days: " + sdf.format(cal.getTime()));
  }
}
Java Util Package - Utility Package of Java
Java Util Package - Utility Package of Java
Java Utility package is one of the most commonly used packages in the java
program. The Utility Package of Java consist
Java Hashtable Iterator
be traversed by the Iterator.
Java Hashtable Iterator Example
import java.util.*;
public class hashtable {
public static void main(String[] args) {
Hashtable hastab = new Hashtable();
hastab.put("a", "andrews");
    Iterator it = hastab.keySet().iterator(); // traverse the keys with an Iterator
    while (it.hasNext()) {
      Object key = it.next();
      System.out.println(key + " = " + hastab.get(key));
    }
  }
}
hashtable java swing
hashtable java swing i m getting this warning
code is here
Hashtable nu=new Hashtable();
Hashtable ns=new Hashtable();
nu.put(new...
mber of the raw type Hashtable
plz help me
Java Collection : Hashtable
Java Collection : Hashtable
In this tutorial, we are going to discuss one concept (Hashtable) of the
Collection framework.
Hashtable :
Hashtable.... When you increase the entries in the Hashtable, the product of
the load factor and the current capacity determines when the table is rehashed.
Java Util Examples List
examples that demonstrate the syntax and example code of
java util package... Java Util Examples List - Util Tutorials
The util package of java provides many
How to find hashtable size in Java?
How to find hashtable size in Java? Hi,
What is the code for Hashtable in Java? How to find hashtable size in Java?
Give me the easy code.
Thanks
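A minimal sketch in answer to the question above (the table contents are illustrative): Hashtable's size() method returns the number of key-value pairs currently stored.

```java
import java.util.Hashtable;

public class HashtableSize {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<String, Integer>();
        table.put("one", 1);
        table.put("two", 2);
        table.put("three", 3);
        // size() counts the key-value mappings currently in the table
        System.out.println("Size: " + table.size()); // prints "Size: 3"
    }
}
```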
hashtable - Java Beginners
hashtable pls what is a hashtable in java and how can we use...
Hashtable hash = new Hashtable();
hash.put("amar","amar");
hash.put...
Thanks.
Amardeep
Hashtable java prog - Java Interview Questions
Hashtable java prog Create a Hashtable with some students hall... the results? please provide the java code detaily for this query? thanks... bal;
Hashtable table = new Hashtable();
table.put(new Integer(1111), "Selected");
Thank you for posting...Java By default, Hashtable is unordered. Then, how can you retrieve
Hashtable elements in the same order as they are put inside?
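The link in the original answer did not survive; one common approach — a sketch, not necessarily what the answer linked to — is to use java.util.LinkedHashMap, which iterates in insertion order, in place of Hashtable:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {
    public static void main(String[] args) {
        // LinkedHashMap iterates entries in the order they were put in
        Map<String, Integer> map = new LinkedHashMap<String, Integer>();
        map.put("first", 1);
        map.put("second", 2);
        map.put("third", 3);
        Iterator<Map.Entry<String, Integer>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Integer> e = it.next();
            System.out.println(e.getKey() + " = " + e.getValue());
        }
        // prints first, second, third -- in insertion order
    }
}
```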
Java hasNext()
This tutorial discusses how to use the hasNext() method... Iterator. We are going to use
hasNext()
method of interface Iterator
in Java... through the following java program. True is returned by this method if the iteration has more elements.
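A small sketch of the hasNext() loop described above (the list contents are made up):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class HasNextDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<String>();
        names.add("alice");
        names.add("bob");
        Iterator<String> it = names.iterator();
        // hasNext() returns true while elements remain, so the loop stops cleanly
        while (it.hasNext()) {
            System.out.println(it.next());
        }
    }
}
```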
The Hashtable Class
The Hashtable Class
In this section, you will learn about Hashtable and its implementation with
the help of example.
Hashtable is integrated... for a complete list of Hashtable's methods.
EXAMPLE
import java.util.*;
public class HashtableExample { // class name and body reconstructed; the original example was cut off
    public static void main(String[] args) {
        Hashtable table = new Hashtable();
        table.put("one", new Integer(1));
        System.out.println(table.get("one"));
    }
}
util
java - Java Beginners
java write a programme to implement queues using list interface Hi Friend,
Please visit the following link:
Thanks
Data Structures in Java
Data Structures in Java
In this Section, you will learn the data structure of java.util
package with example code.
Java util package (java.util) provides us...
Hashtable
Properties
After the release of Collections in Java 2 release
Associate a value with an object
with an object in Java util.
Here, you
will know how to associate the value... This is one of the several extensions
to the java programming language.
Download this example
java - Java Beginners
Java hashtable null values How to tell if there is a null value in a Java hashtable? And is it possible to accept both null keys and null values?
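For reference — this is a sketch, not the missing answer — java.util.Hashtable rejects null keys and null values with a NullPointerException, while HashMap permits one null key and any number of null values:

```java
import java.util.HashMap;
import java.util.Hashtable;

public class NullDemo {
    public static void main(String[] args) {
        HashMap<String, String> hashMap = new HashMap<String, String>();
        hashMap.put(null, "ok");    // HashMap permits one null key
        hashMap.put("key", null);   // ...and null values

        Hashtable<String, String> table = new Hashtable<String, String>();
        try {
            table.put("key", null); // Hashtable rejects null values
        } catch (NullPointerException e) {
            System.out.println("Hashtable does not accept nulls");
        }
    }
}
```

So to test whether a Hashtable "contains" a null, there is nothing to test: no entry can have a null key or value.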
java.util - Java Interview Questions
* WeakHashMap
Learn Java Utility Package at...
Description of the java.util Package: The Java util package contains the collections framework..., internationalization, and miscellaneous utility classes. The util package of java provides
java - Applet
://
Thanks...java what is the use of java.util Hi Friend,
The java
java persistence example
java persistence example java persistence example
J2ME HashTable Example
J2ME HashTable Example
To use the HashTable, the java.util.Hashtable class must be
imported into the application. Generally HashTables are used to map keys to
values
java
java why data structures?
The data structures provided by the Java utility package are very powerful and perform a wide range...:
Enumeration
BitSet
Vector
Stack
Dictionary
Hashtable
Properties
These classes
Collections in Java
Collections in Java are data structures primarily defined through a set of classes and interfaces and used by Java professionals. Some collections in Java that are defined in the Java collection framework are: Vectors, ArrayList, HashMap
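A brief sketch using two of the collection types named above (the data is invented):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectionsDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<String>(); // ordered, indexed
        list.add("red");
        list.add("green");

        Map<String, Integer> map = new HashMap<String, Integer>(); // key -> value
        map.put("red", 1);
        map.put("green", 2);

        for (String color : list) {
            System.out.println(color + " -> " + map.get(color));
        }
    }
}
```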
java - Java Interview Questions
information :
Thanks
Java Collection
Java Collection What are Vector, Hashtable, LinkedList and Enumeration
Java collection
Java collection What are differences between Enumeration, ArrayList, Hashtable and Collections? Give a simple example for inheritance in java.
java - Java Beginners
Define what is Vector with an example? Why is it used and where is it used?
Define HashTable? How can we enter the Keys and Values in a HashTable?
Would you give the example source code for it?
thanks
krishnarao
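A sketch of the Vector and Hashtable put/get usage being asked about (the names and values are invented):

```java
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Vector;

public class VectorHashtableDemo {
    public static void main(String[] args) {
        // Vector: a growable, synchronized array
        Vector<String> names = new Vector<String>();
        names.add("amar");
        names.add("bob");

        // Hashtable: put(key, value) stores an entry; get(key) retrieves it
        Hashtable<String, Integer> ages = new Hashtable<String, Integer>();
        ages.put("amar", 25);
        ages.put("bob", 30);

        Enumeration<String> e = names.elements();
        while (e.hasMoreElements()) {
            String name = e.nextElement();
            System.out.println(name + " is " + ages.get(name));
        }
    }
}
```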
VECTOR collection
Java collection What are differences between Enumeration, ArrayList, Hashtable and Collections and Collection
Java
Java I want to practise with a Java Recursive program
A recursive function is a function which calls itself in a java program. Here is an example of a recursive function that finds the factorial of a number
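As a sketch, a standard recursive factorial in Java:

```java
public class Factorial {
    // factorial(n) calls itself with n - 1 until it reaches the base case
    static long factorial(int n) {
        if (n <= 1) {
            return 1; // base case stops the recursion
        }
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}
```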
java
java what is an interface? what is an interface?
... An interface declares methods that a class can implement, but cannot be instantiated.
For more information, visit the following link:
Java Interface Example
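A minimal sketch of declaring and implementing an interface (the type names are illustrative):

```java
interface Shape {
    double area(); // interface methods are implicitly public and abstract
}

class Circle implements Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    public double area() {
        return Math.PI * radius * radius;
    }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        Shape s = new Circle(1.0); // program against the interface type
        System.out.println(s.area());
    }
}
```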
Java Syntax - Java Beginners
to :
Thanks...Java Syntax Hi!
I need a bit of help on this...
Can anyone tell
JAVA
JAVA plz send me code, How to find fare from one place to another place using Java,Jsp,Servlets?
for example: i need to calculate from bangalore to gulbarga..i need to calculate the bus fare,distance in kms
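A real fare application would read its routes from a database; as a sketch of just the lookup logic (the route distance and rate below are invented figures):

```java
import java.util.HashMap;
import java.util.Map;

public class FareLookup {
    // distance table in km; in a real JSP/servlet app this would come from a database
    private static final Map<String, Integer> DISTANCE_KM = new HashMap<String, Integer>();
    static {
        DISTANCE_KM.put("bangalore-gulbarga", 623); // illustrative figure only
    }

    static double fare(String from, String to, double ratePerKm) {
        Integer km = DISTANCE_KM.get(from + "-" + to);
        if (km == null) {
            throw new IllegalArgumentException("unknown route");
        }
        return km * ratePerKm;
    }

    public static void main(String[] args) {
        System.out.println("Fare: " + fare("bangalore", "gulbarga", 1.5));
    }
}
```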
Filter chains are declared using the <filter-chain> element of the Spring Security namespace
[10]. It maps a particular URL pattern to a list of filters built up from the
bean names specified in the
filters element, and combines them in
a bean of type
SecurityFilterChain. The
pattern
attribute takes an Ant-style path, and the most specific URIs should appear first
[11].
ConcurrentSessionFilter, because it doesn't use any
SecurityContextHolder functionality but needs to update
the
SessionRegistry to reflect ongoing requests
from the principal
Authentication processing mechanisms -
UsernamePasswordAuthenticationFilter,
CasAuthenticationFilter,
BasicAuthenticationFilter etc - so that the
SecurityContextHolder can be modified to contain a valid
Authentication request token
The
SecurityContextHolderAwareRequestFilter, if you are
using it to install a Spring Security aware
HttpServletRequestWrapper into your servlet container
The
JaasApiIntegrationFilter, if a
JaasAuthenticationToken is in the
SecurityContextHolder this will process the
FilterChain as the
Subject in the
JaasAuthenticationToken
RememberMeAuthenticationFilter, so that if no earlier
authentication processing mechanism updated the
SecurityContextHolder, and the request presents a cookie
that enables remember-me services to take place, a suitable remembered
Authentication object will be put there
AnonymousAuthenticationFilter, so that if no earlier
authentication processing mechanism updated the
SecurityContextHolder, an anonymous
Authentication object will be put there
ExceptionTranslationFilter, to catch any Spring
Security exceptions so that either an HTTP error response can be returned or an
appropriate
AuthenticationEntryPoint can be launched.
[10] Note that you'll need to include the security namespace in your application
context XML file in order to use this syntax. The older syntax which used a
filter-chain-map is still supported, but is deprecated in favour of
the constructor argument injection.
[11] Instead of a path pattern, the
request-matcher-ref attribute
can be used to specify a
RequestMatcher instance for more powerful
matching
[12] You have probably seen this when a browser doesn't support cookies and the
jsessionid parameter is appended to the URL after a
semi-colon. However the RFC allows the presence of these parameters in any path
segment of the URL
[13] The original values will be returned once the request leaves the
FilterChainProxy, so will still be available to the
application.
My windows server 2012 currently seems to have gone on holiday and is ignoring the SSL certificate I have asked it to use for Remote Desktop connections via
wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="cb727b4dca34651444afbd939555e6c65f8434c4"
where cb727b4dca34651444afbd939555e6c65f8434c4 is the thumbprint of an existing valid SSL certificate that was generated via Group Policy from a local CA.
The server continues to issue a self signed certificate every time I delete the previous self-signed one from the "Local Computer -> Remote Desktop" certificate store. The RDP connection info continues to say "The identity of the remote desktop is verified by Kerberos" instead of "verified by a certificate"
So yes, it's ignoring the group policy I have setup as well as directly via WMI. The output of
Get-WmiObject -class "Win32_TSGeneralSetting" -Namespace root\cimv2\terminalservices -Filter "TerminalName='RDP-tcp'"
is
__GENUS : 2
__CLASS : Win32_TSGeneralSetting
__SUPERCLASS : Win32_TerminalSetting
__DYNASTY : CIM_ManagedSystemElement
__RELPATH : Win32_TSGeneralSetting.TerminalName="RDP-Tcp"
__PROPERTY_COUNT : 20
__DERIVATION : {Win32_TerminalSetting, CIM_Setting, CIM_ManagedSystemElement}
__SERVER : OVERWATCHD
__NAMESPACE : root\cimv2\terminalservices
__PATH : \\OVERWATCHD\root\cimv2\terminalservices:Win32_TSGeneralSetting.TerminalName="RDP-Tcp"
Caption :
CertificateName : OVERWATCHD.labs.xxx.co.nz
Certificates : {0, 0, 0, 0...}
Comment :
Description :
InstallDate :
MinEncryptionLevel : 3
Name :
PolicySourceMinEncryptionLevel : 1
PolicySourceSecurityLayer : 1
PolicySourceUserAuthenticationRequired : 1
SecurityLayer : 2
SSLCertificateSHA1Hash : CB727B4DCA34651444AFBD939555E6C65F8434C4
SSLCertificateSHA1HashType : 2
Status :
TerminalName : RDP-Tcp
TerminalProtocol : Microsoft RDP 8.0
Transport : tcp
UserAuthenticationRequired : 0
WindowsAuthentication : 0
PSComputerName : OVERWATCHD
It was working a month ago. Any ideas how I can troubleshoot this? The Windows event logs don't seem to give much information. If only there was a debug/verbose flag I could set.
You shouldn't need to delete the self-signed certificate to get Windows to use your CA generated certificate. It's possible Windows needs that self-signed cert for other non-RDP related things as well.
What group policy are you using to generate the certificate? Is it the auto-enrollment policies? Have you verified that the certificate whose thumbprint you're referencing actually exists on the target system in the Local Computer's store?
If it was working a month ago and not now, might the certificate have been renewed? If so, it would have a new thumbprint value.
If the right certificate is there, you should also check that it has a valid private key. There would be a little key icon on top of the normal cert icon in the Certificates snap-in. It would also say that in the certificate details window similar to this screenshot:
| http://serverfault.com/questions/473484/windows-server-2012-remote-desktop-session-ssl-certificate | CC-MAIN-2015-48 | en | refinedweb |
I have been staring at this previous year's exam question for hours now and I just can't get it working correctly. If anyone can help it would be greatly appreciated as I have an exam on Friday!
Code java:
import java.util.*;

public class Huffman {

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Enter your sentence: ");
        String sentence = in.nextLine();

        String binaryString = ""; // this stores the string of binary code
        for (int i = 0; i < sentence.length(); i++) {
            int decimalValue = (int) sentence.charAt(i); // convert to decimal
            String binaryValue = Integer.toBinaryString(decimalValue); // convert to binary
            for (int j = 7; j > binaryValue.length(); j--) {
                binaryString += "0"; // this loop adds in those pesky leading zeroes
            }
            binaryString += binaryValue + " "; // add to the string of binary
        }
        System.out.println(binaryString); // print out the binary

        int[] array = new int[256]; // an array to store all the frequencies
        for (int i = 0; i < sentence.length(); i++) { // go through the sentence
            array[(int) sentence.charAt(i)]++; // increment the appropriate frequencies
        }

        PriorityQueue<Tree> PQ = new PriorityQueue<Tree>(); // make a priority queue to hold the forest of trees
        for (int i = 0; i < array.length; i++) { // go through frequency array
            if (array[i] > 0) { // print out non-zero frequencies - cast to a char
                System.out.println("'" + (char) i + "' appeared " + array[i]
                        + ((array[i] == 1) ? " time" : " times"));
                // FILL THIS IN:
                // MAKE THE FOREST OF TREES AND ADD THEM TO THE PQ
                // create a new Tree
                // set the cumulative frequency of that Tree
                // insert the letter as the root node
                // add the Tree into the PQ
            }
        }

        while (PQ.size() > 1) {
            // FILL THIS IN:
            // IMPLEMENT THE HUFFMAN ALGORITHM
            // when you're making the new combined tree, don't forget to assign a default
            // root node (or else you'll get a null pointer exception)
            // if you like, to check if everything is working so far, try printing out the
            // letter of the roots of the two trees you're combining
        }

        Tree HuffmanTree = PQ.poll(); // now there's only one tree left - get its codes
        // FILL THIS IN:
        // get all the codes for the letters and print them out
        // call the getCode() method on the HuffmanTree Tree object for each letter in the sentence
        // print out all the info
    }

    public class Tree implements Comparable<Tree> {
        public Node root; // first node of tree
        public int frequency = 0;

        public Tree() { // constructor
            root = null;
        }

        public int compareTo(Tree object) {
            if (frequency - object.frequency > 0) { // compare the cumulative frequencies of the tree
                return 1;
            } else if (frequency - object.frequency < 0) {
                return -1; // return 1 or -1 depending on whether these frequencies are bigger or smaller
            } else {
                return 0; // return 0 if they're the same
            }
        }

        String path = "error";

        public String getCode(char letter) {
            // FILL THIS IN:
            // How do you get the code for the letter? Maybe try a traversal of the tree
            // Track the path along the way and store the current path when you arrive at the right letter
            return path; // return the path that results
        }
    }

    class Node {
        public char letter;     // stores letter
        public Node leftChild;  // this node's left child
        public Node rightChild; // this node's right child
    } // end class Node
}
I have included the entire problem! Help with any part would be greatly appreciated! | http://www.javaprogrammingforums.com/%20algorithms-recursion/24084-huffman-algorithm-help-printingthethread.html | CC-MAIN-2015-48 | en | refinedweb |
DXGK_CHILD_STATUS structure
The DXGK_CHILD_STATUS structure contains members that indicate the status of a child device of the display adapter.
Syntax
typedef struct _DXGK_CHILD_STATUS {
  DXGK_CHILD_STATUS_TYPE Type;
  ULONG                  ChildUid;
  union {
    struct {
      BOOLEAN Connected;
    } HotPlug;
    struct {
      UCHAR Angle;
    } Rotation;
#if (DXGKDDI_INTERFACE_VERSION >= DXGKDDI_INTERFACE_VERSION_WDDM1_3)
    struct {
      BOOLEAN                         Connected;
      D3DKMDT_VIDEO_OUTPUT_TECHNOLOGY MiracastMonitorType;
    } Miracast;
#endif
  };
} DXGK_CHILD_STATUS, *PDXGK_CHILD_STATUS;
- Type
A member of the DXGK_CHILD_STATUS_TYPE enumeration that indicates the type of status being requested.
- ChildUid
An integer, created previously by the display miniport driver, that identifies the child device for which status is being requested.
- HotPlug
- Connected
If Type is equal to DXGK_CHILD_STATUS_TYPE.StatusConnection, indicates whether the child device has external hardware (for example, a monitor) connected to it. A value of TRUE indicates that hardware is connected; FALSE indicates that hardware is not connected.
- Rotation
- Angle
If Type is equal to DXGK_CHILD_STATUS_TYPE.StatusRotation, indicates the angle of rotation of the display connected to the child device.
- Miracast
Supported by WDDM 1.3 and later display miniport drivers running on Windows 8.1 and later.
- Connected
If Type is equal to DXGK_CHILD_STATUS_TYPE.StatusMiracast, indicates whether a Miracast connected session has started. A value of TRUE indicates that a new monitor has been connected to the Miracast sink, or that the Miracast session has started with a monitor connected. FALSE indicates that the monitor that was connected to the Miracast sink has been unplugged, or that the Miracast session has been stopped.
For more info, see Wireless displays (Miracast).
- MiracastMonitorType
If the Connected member of the Miracast embedded structure is TRUE, indicates the connector type of the connection between the Miracast sink and the monitor or TV.
Alternately, if Connected is TRUE and the Miracast sink is embedded in the monitor or TV, the display miniport driver should set this value to D3DKMDT_VOT_MIRACAST.
If the driver doesn't know the monitor connection state, it should set this value to the last monitor connection state from the D3DKMDT_VIDEO_OUTPUT_TECHNOLOGY enumeration that it reported to the operating system.
For more info, see Wireless displays (Miracast).
Requirements
See also
- DxgkDdiQueryChildRelations
- DxgkDdiQueryChildStatus
- DxgkCbIndicateChildStatus
- DXGK_CHILD_STATUS_TYPE
- D3DKMDT_VIDEO_OUTPUT_TECHNOLOGY
| https://msdn.microsoft.com/en-us/library/windows/hardware/ff561010 | CC-MAIN-2015-48 | en | refinedweb |
#include <gtk/gtk.h>

GtkCurve;
GtkWidget* gtk_curve_new (void);
void gtk_curve_reset (GtkCurve *curve);
void gtk_curve_set_gamma (GtkCurve *curve, gfloat gamma_);
void gtk_curve_set_range (GtkCurve *curve, gfloat min_x, gfloat max_x, gfloat min_y, gfloat max_y);
void gtk_curve_get_vector (GtkCurve *curve, int veclen, gfloat vector[]);
void gtk_curve_set_vector (GtkCurve *curve, int veclen, gfloat vector[]);
void gtk_curve_set_curve_type (GtkCurve *curve, GtkCurveType type);
GObject
  +----GInitiallyUnowned
         +----GtkObject
                +----GtkWidget
                       +----GtkDrawingArea
                              +----GtkCurve
GtkCurve implements AtkImplementorIface and GtkBuildable.
"curve-type" GtkCurveType : Read / Write "max-x" gfloat : Read / Write "max-y" gfloat : Read / Write "min-x" gfloat : Read / Write "min-y" gfloat : Read / Write
"curve-type-changed" : RunCurve widget allows the user to edit a curve covering a range of values. It is typically used to fine-tune color balances in graphics applications like the Gimp.
The GtkCurve.
typedef struct _GtkCurve GtkCurve;
The GtkCurve struct contains private data only, and should be accessed using the functions below.
void gtk_curve_reset (GtkCurve *curve);
Resets the curve to a straight line from the minimum x and y values to the maximum x and y values (i.e. from the bottom-left to the top-right corners). The curve type is not changed.
void gtk_curve_set_gamma (GtkCurve *curve, gfloat gamma_);
GTK_CURVE_TYPE_FREE.
FIXME: Needs a more precise definition of gamma.
void gtk_curve_set_range (GtkCurve *curve, gfloat min_x, gfloat max_x, gfloat min_y, gfloat max_y);
Sets the minimum and maximum x and y values of the curve.
The curve is also reset with a call to gtk_curve_reset().
void gtk_curve_get_vector (GtkCurve *curve, int veclen, gfloat vector[]);
Returns a vector of points representing the curve.
void gtk_curve_set_vector (GtkCurve *curve, int veclen, gfloat vector[]);
Sets the vector of points on the curve.
The curve type is set to GTK_CURVE_TYPE_FREE.
void gtk_curve_set_curve_type (GtkCurve *curve, GtkCurveType type);
Sets the type of the curve. The curve will remain unchanged except when changing from a free curve to a linear or spline curve, in which case the curve will be changed as little as possible.
"curve-type"property
"curve-type" GtkCurveType : Read / Write
Is this curve linear, spline interpolated, or free-form.
Default value: GTK_CURVE_TYPE_SPLINE
"curve-type-changed"signal
void user_function (GtkCurve *curve, gpointer user_data) : Run First
Emitted when the curve type has been changed.
The curve type can be changed explicitly with a call to gtk_curve_set_curve_type(). It is also changed as a side-effect of calling gtk_curve_reset() or gtk_curve_set_gamma().
Hi,
On Tue, May 15, 2012 at 7:58 PM, Julian Reschke <julian.reschke@gmx.de> wrote:
> This isn't sufficient; there are API signatures where even a "[1]" on the
> final path segment is forbidden, such as when creating new nodes.
>
> We can either hack these restrictions into NodeImpl, SessionImpl and
> friends, or extend the path mapper to optionally check.
Couldn't we just pass them as is down to oak-core, where such names
can be rejected as invalid by a commit hook? Or if an exception is
required before save(), I'd rather have such checks explicitly in
NodeImpl & friends instead of overloading the path mapper with extra
responsibilities.
IMO it should be possible for the path mapper to be a no-op whenever
no session-local namespace re-mappings are in place. In such cases as
long as a path doesn't contain any {expanded}names, the mapper
shouldn't need to do any other parsing and can return the exact same
path string instance it received.
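To make that concrete, here is a minimal standalone sketch of such a fast path; SketchPathMapper, the constructor shape, and the whole-element substitution in mapSlowly are illustrative stand-ins, not the actual Oak API:

```java
import java.util.Map;

// Illustrative sketch only: these names are stand-ins, not the real Oak classes.
public class SketchPathMapper {

    private final Map<String, String> sessionLocalMappings;

    public SketchPathMapper(Map<String, String> sessionLocalMappings) {
        this.sessionLocalMappings = sessionLocalMappings;
    }

    /**
     * Maps a JCR path to an Oak path. When no session-local namespace
     * re-mappings are in place and the path contains no {expanded}names,
     * the exact same string instance is returned without any parsing.
     */
    public String getOakPath(String jcrPath) {
        if (sessionLocalMappings.isEmpty() && jcrPath.indexOf('{') == -1) {
            return jcrPath;            // no-op fast path: same instance back
        }
        return mapSlowly(jcrPath);     // full element-by-element mapping
    }

    private String mapSlowly(String jcrPath) {
        // Placeholder for the real element-by-element name mapping; here we
        // just substitute whole path elements that have a session-local mapping.
        String[] elements = jcrPath.split("/", -1);
        for (int i = 0; i < elements.length; i++) {
            elements[i] = sessionLocalMappings.getOrDefault(elements[i], elements[i]);
        }
        return String.join("/", elements);
    }
}
```

The point of the fast path is that for the common case (no re-mappings, no expanded names) mapping costs one map lookup and one character scan, and callers can even rely on reference equality.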
A separate path resolver (see my other post about the mapper/resolver
distinction) should take care of evaluating the mapped path string in
a specific context. Such resolution would for example involve finding
the parent node of the relative path provided to an addNode() call.
In cases like addNode(), neither the path mapper nor the resolver
should have to worry about rules like whether an index is allowed in
the last path element. Instead the relevant code should look something
like this:
// Map and validate the last name segment of the given path
String jcrName = PathUtils.getName(relPath);
String oakName = nameMapper.getOakName(jcrName);
if (oakName == null || isInvalidNodeName(oakName)) {
throw new RepositoryException("Invalid node name: " + relPath);
}
// Map and resolve the relative path leading to the new node
String jcrPath = PathUtils.getParentPath(relPath);
String oakPath = sessionDelegate.getOakPathOrThrowNotFound(jcrPath);
NodeDelegate parent = dlg.getChild(oakPath);
if (parent == null) {
throw new PathNotFoundException(relPath);
}
BR,
Jukka Zitting | http://mail-archives.apache.org/mod_mbox/jackrabbit-oak-dev/201205.mbox/%3CCAOFYJNaM+tEw+Uqrk7btOxd=JmmdbPbROqTHq=gygzwjUrBvYQ@mail.gmail.com%3E | CC-MAIN-2014-10 | en | refinedweb |
I’ve made the acquaintance of a group of data analysts here in the triangle and have agreed to arrange an outing to the Durham Bulls minor league baseball team. Because it’s for stat nerds and because I was curious, I went looking for some baseball data to analyze. I found loads of it here, but soon got distracted by the presence of NFL statistics. The season is already well underway, but I thought it might be fun to try and build a predictive model for the sport.
The first step is to get some data. Here, I use an R function to pull HTML tables from the site.
GetGamesHistory = function(FirstYear = 1985, LastYear = 2011) {
  games.URL.stem = ""
  for (year in FirstYear:LastYear) {
    URL = paste(games.URL.stem, year, "/games.htm", sep = "")
    games = readHTMLTable(URL)
    dfThisSeason = games[[1]]

    # Clean up the df
    dfThisSeason = subset(dfThisSeason, Week != "Week")
    dfThisSeason = subset(dfThisSeason, Week != "")
    dfThisSeason$Date = as.character(dfThisSeason$Date)
    dfThisSeason$GameDate = mdy(paste(dfThisSeason$Date, year))
    year(dfThisSeason$GameDate) = with(dfThisSeason,
        ifelse(month(GameDate) <= 6, year(GameDate) + 1, year(GameDate)))

    if (year == FirstYear) {
      dfAllSeasons = dfThisSeason
    } else {
      dfAllSeasons = rbind(dfAllSeasons, dfThisSeason)
    }
  }

  dfAllSeasons = dfAllSeasons[, c(14, 1, 5, 7, 8, 9)]
  colnames(dfAllSeasons) = c("GameDate", "Week", "Winner", "Loser",
                             "WinnerPoints", "LoserPoints")
  dfAllSeasons$Winner = as.character(dfAllSeasons$Winner)
  dfAllSeasons$Loser = as.character(dfAllSeasons$Loser)
  dfAllSeasons$WinnerPoints = as.integer(as.character(dfAllSeasons$WinnerPoints))
  dfAllSeasons$LoserPoints = as.integer(as.character(dfAllSeasons$LoserPoints))
  dfAllSeasons$ScoreDifference = dfAllSeasons$WinnerPoints - dfAllSeasons$LoserPoints
  dfAllSeasons = subset(dfAllSeasons, !is.na(ScoreDifference))
  return(dfAllSeasons)
}
Created by Pretty R at inside-R.org
So I wrote this code about a week ago and already I can see that I don’t like it. For one, I try to avoid using loops in R unless absolutely necessary. Often, I’ll start out with one just to get going, but usually I find that they can be replaced with one of the apply functions or something similarly succinct. Two, I need to better understand the behavior of the readHTML function. I remember having gone a couple rounds with the points data, which is read in as a factor. This leads to the extremely ugly bit of code where I convert it to a character and then to an integer. If anyone has a better way, I’m all ears. Three, I need to revisit the basic idea of extracting columns by name. Extraction by number is dangerous and confusing. Finally, I’d like to revise the data cleansing so that it lists the game with home, visitor and winner listed. That would make it easier to test whether or not a home field advantage exists.
All that understood, the code works and gives me piles of data. How I look at it will be the subject of the next... | http://www.r-bloggers.com/pro-football-data/ | CC-MAIN-2014-10 | en | refinedweb |
The Collections Framework was added to Java with the J2SE 1.2 release. It is provided in the 'java.util' package.
The collections framework contains the following three parts:
Interfaces : Interfaces allow collections to be manipulated independently of the details of their representation.
Implementations i.e. Classes: These are the concrete implementations of the collection interfaces. Also, they are reusable data structures.
Algorithms: These are the methods that perform useful computations, such as searching and sorting, on objects that implement collection interfaces. The same method can be used on many different implementations of the appropriate collection interface.
The main benefits of the Collection Framework are:
1. The implementations for the fundamental collections (dynamic arrays, linked lists, trees, and hash tables) are highly efficient.
2. It allows different types of collections to work in a similar manner and with a high degree of interoperability.
3. It allows the integration of standard arrays into the Collection Framework.
The Collection interface is the base on which the Collection Framework is built. The methods declared by this interface can be used by any collection, and many of them may throw an UnsupportedOperationException. These methods are summarized below:
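As a quick illustration of that point about UnsupportedOperationException, using only the standard library: the fixed-size list returned by Arrays.asList rejects the optional add operation.

```java
import java.util.Arrays;
import java.util.List;

public class OptionalOperationDemo {
    public static void main(String[] args) {
        // Arrays.asList returns a fixed-size list backed by the array,
        // so the optional add() operation is not supported.
        List<String> fixed = Arrays.asList("ANKIT", "DIVYA");
        try {
            fixed.add("MANISHA");
        } catch (UnsupportedOperationException e) {
            System.out.println("add() is an optional operation here: " + e);
        }
    }
}
```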
In the example below, the "add" method of the Collection interface is used through various implementing classes:
import java.util.*;

public class CollectionInterfaceDemo {
    public static void main(String[] args) {
        List AL = new ArrayList();
        AL.add("ANKIT");
        AL.add("DIVYA");
        AL.add("MANISHA");
        System.out.println("Elements of ArrayList");
        System.out.print("\t" + AL);

        List LL = new LinkedList();
        LL.add("ANKIT");
        LL.add("DIVYA");
        LL.add("MANISHA");
        System.out.println();
        System.out.println("Elements of LinkedList ");
        System.out.print("\t" + LL);

        Set HS = new HashSet();
        HS.add("ANKIT");
        HS.add("DIVYA");
        HS.add("MANISHA");
        System.out.println();
        System.out.println("Elements of set");
        System.out.print("\t" + HS);

        Map HM = new HashMap();
        HM.put("ANKIT", "8");
        HM.put("MAHIMA", "24");
        HM.put("DIVYA", "31");
        HM.put("MANISHA", "12");
        HM.put("VINITA", "14");
        System.out.println();
        System.out.println("Elements of Map");
        System.out.print("\t" + HM);
    }
}
Output :
| http://www.roseindia.net/javatutorials/visiting_collections_elements.shtml | CC-MAIN-2014-10 | en | refinedweb |
Combining JAAS and SOAP in a log on system
Iain Emsley
Ranch Hand | Joined: Oct 11, 2007 | Posts: 60
posted Mar 28, 2008 08:28
I've found myself stuck in a horrible mire with trying to set up a Web Service.
I've been asked to implement a Java calendaring system onto website which is largely backended in Perl. The easiest way into getting the passwords is also via Perl. I have set up a SOAP server which is passing across the username and password and Java client which receives it. I've been experimenting with JAAS which would appear to be the most flexible way of getting the desired result, i.e. our users can seemlessly move across the services on the site (effectively cookie controlled single sign on).
I've placed the client code inside of the WebCallbackHandler (though I perhaps ought to split it out and refer to it) but is there a better way of achieving what I want?
import java.util.Iterator;
import java.io.*;
import javax.security.auth.callback.*;
import javax.servlet.ServletRequest;
import javax.xml.soap.*;
import javax.xml.parsers.*;
import org.w3c.dom.*;
import org.w3c.dom.Node;

public class WebCallbackHandler implements CallbackHandler {

    private String userName;

    public WebCallbackHandler(ServletRequest request) {
        userName = request.getParameter("userName");
    }

    public void handle(Callback[] callbacks)
            throws java.io.IOException, UnsupportedCallbackException {
        // Add user name and password from callbacks
        String arg1 = "";
        String operation = "findingit";
        String urn = "emailfind";
        String destination = "";

        try {
            // new connection
            SOAPConnectionFactory factory = SOAPConnectionFactory.newInstance();
            SOAPConnection soapconn = factory.createConnection();

            // Next, create the actual message
            MessageFactory messageFactory = MessageFactory.newInstance();
            SOAPMessage message = messageFactory.createMessage();
            SOAPPart soapPart = message.getSOAPPart();
            SOAPEnvelope envelope = soapPart.getEnvelope();

            // This method demonstrates how to set HTTP and SOAP headers.
            // setOptionalHeaders(message, envelope);

            // Create and populate the body
            SOAPBody body = envelope.getBody();

            // Create the main element and namespace
            SOAPElement bodyElement = body.addChildElement(
                    envelope.createName(operation, "ns1", "urn:" + urn));

            // Add parameters
            bodyElement.addChildElement("cookie").addTextNode(arg1);

            // Save the message
            message.saveChanges();

            // receive the message
            SOAPMessage reply = soapconn.call(message, destination);

            // Retrieve the result
            soapPart = reply.getSOAPPart();
            envelope = soapPart.getEnvelope();
            body = envelope.getBody();
            Iterator iter = body.getChildElements();
            Node resultOuter = ((Node) iter.next()).getFirstChild();
            Node result = resultOuter.getFirstChild();
            String userName = result.toString();

            // Close the connection
            soapconn.close();
        } // try
        catch (Exception e) {
            System.out.println(e.getMessage());
        }
    } // end handle
}
Have I gone hopelessly wrong in my quest to log our users on? I'd be grateful for some pointers as to how to get the best result for my users.
Thanks.
Peer Reynders
Bartender | Joined: Aug 19, 2005 | Posts: 2906
posted Mar 28, 2008 10:46
I'm not quite sure what kind of authentication/authorization/security scheme you are trying to implement - however WS-Security was designed for SOAP (implemented by Rampart for Axis2). See Web Services Authentication with Axis 2.
"Don't succumb to the false authority of a tool or model. There is no substitute for thinking."
Andy Hunt,
Pragmatic Thinking & Learning: Refactor Your Wetware
p.41
Iain Emsley
Ranch Hand | Joined: Oct 11, 2007 | Posts: 60
posted Apr 04, 2008 09:10
I'm trying to extend our single sign on system across to Java but for reasons I haven't yet worked out, we cannot parse the cookie data via Java but need Perl to do it (which it already does for the rest of the site). Hence the need to speak to the SOAP server. What I want to do is to use the servlet/JSP to get the password so that JAAS or JDBC can get the data for authentication, hence why I'm trying to get it to activate our built client.
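For what it's worth, the usual JAAS shape is for the handler to answer NameCallback and PasswordCallback posed by the LoginModule, rather than to fetch the credentials itself. A minimal sketch (with constructor-supplied values standing in for whatever the SOAP/Perl lookup would return) might look like this:

```java
import java.io.IOException;
import javax.security.auth.callback.*;

// Minimal JAAS handler sketch: the LoginModule asks questions via callbacks,
// and the handler fills in the answers. The constructor-supplied values below
// stand in for whatever the SOAP/Perl credential lookup would return.
public class SimpleCallbackHandler implements CallbackHandler {

    private final String userName;
    private final char[] password;

    public SimpleCallbackHandler(String userName, char[] password) {
        this.userName = userName;
        this.password = password;
    }

    public void handle(Callback[] callbacks)
            throws IOException, UnsupportedCallbackException {
        for (Callback cb : callbacks) {
            if (cb instanceof NameCallback) {
                ((NameCallback) cb).setName(userName);
            } else if (cb instanceof PasswordCallback) {
                ((PasswordCallback) cb).setPassword(password);
            } else {
                throw new UnsupportedCallbackException(cb, "Unrecognized callback");
            }
        }
    }
}
```

Keeping the SOAP client in a separate class that this handler (or the LoginModule) calls would also make the networking code easier to test on its own.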
| http://www.coderanch.com/t/224888/Web-Services/java/Combining-JAAS-SOAP-log-system | CC-MAIN-2014-10 | en | refinedweb |
“comfort in the OT” Code Answer
comfort in the OT
whatever by
Fantastic Finch
on Nov 17 2021
Bible verses on comfort:
Psalm 23:4. Even though I walk through the darkest valley, I will fear no evil, for you are with me; your rod and your staff, they comfort me.
Romans 8:38-39. No, in all these things we are more than conquerors through him who loved us
Get width of screen
html5 video fit width
Align p, div, container in center in html
width : double.infinite
vertical align div
media query min and max width
how to make a div into center
how to center a text element
how to make bar width smaller in any chart
bootstrap height and width sizes
class="d-block w-100"
default value width
display: block;
add text to onchilddraw
tailwind text justified
basic pentesting 1 walkthrough
take mobile and name in tawk widget
Get React Native View width and height
oneplus 6t screen resolution
two p in one line left and right in material ui
make canva responsive
Align elements in a row and horizontally centered in their column
jumping to anchor in page heading problem
000webhost hide branding
def Center point of acircle
html incliude all css from folder
to make div take remaining space
windows froms textalign to middle of parent
how to put two items befside each other using flexbox
beamer insert section slide
find element with capybara overflow hidden
selenium grid docker with 8 container
vertical aling center flexbox
increase width of mat-sidenav
vertically center a modal
prene user from resizing Jframe
how to make a window not resizeable in JFrame
java jframe disable resize
padding margin and border
uitableview separator width
positioning text in a grid div
tex overscribe right errow
vertical align css
reset browser font-size defaults in css
chrome browser extension popup max height
height form the top to the element
como centralizar uma div
center
css resizing image with window
media screen hp
height auto width 100% will ignore width and height
displat grid overlapping columns
media screen smartphone
virtualbox increase disk size
facebook banner size desktop and mobile
margin
how to make a div element further from the top
change default pageable size
how to make div top right
deobfuscate css
how to auto screen size with virtualbox
centering isotope elements
file input size feature
Make grid paper
height vs depth
foot to inch
Only fullscreen opaque activities can request orientation
Create Boxes Around Text
how to format textbox thousand separator
grid template areas
stack divs without absolute positioning
page so small on mobile
how to stack divs html
css mobile font size too small
meta line of code html to format to mobile devices
how to make responsive svg
v-data-table-header-mobile
overflow
overflow , one div
dimensions of an iphone x
responsive image using media query
data center
change br line height
display:flex
what is inline flex
inline-block
llatex element of
max height
flex grow
display grid
flexbox
limit offset
aws free hosting
Leverage the font-display CSS feature to ensure text is user-visible while webfonts are loading
hr tag
flexible content
media query methods
mg hector full form
Largest Contentful Paint
have two views evenly linearlayout
Document doesn't use legible font sizes
how do you make an elemet style
outline letters in sass
emacs auto indent not working
ms word not including the entire caption in table of figures
libreoffice writer hide corners
numeric overflow
how to copy html element with css from a website
mjml center image on mobile
<hr>
bmw x3 2020 dimensions cm
draw a line with pseudo elements
set count of section
owl-spacing (M,L,XL)
ag-Grid: tried to call sizeColumnsToFit() but the grid is coming back with zero width, maybe the grid is not visible yet on the screen?
inclure html ds html avec <object> -- et style css
html - Comment centrer horizontalement un élément
edittext set max length programmatically
exact media screen
timber first letter uppercase
how to measure hdd size
what is repainting of the screen
labelimage side pannel dis appered
javafx listview can't change size after vbox
what is text-justify in css
grid isn't working on resize
summernote disable resize
what is default block size in windows ?
style browse
find element by text in between span
middle finger
how to jump on a block using code
is mobile first still relevant
w2s diss
center side to side p5js
astra pro default width 1200
what's mean of #DIV/0!
CSS hide first li separator on each line - responsive horizontal css menu
how to make a div take the width of the preceding div
Target small devices
primeng autocomplete width
nursing pads
bold on hover but not add padding or distance
to cut a box in cs
Iframe creates extra space below
if width > length the biggest dimension = width if height > width thenbiggest_dimension = height end_if elsebiggest_dimension = length if height > length thenbiggest_dimension = height end_if end_if
<applet code="NEWCPUSim.class" width=600 height=329> </applet>
@media not (all and (device-width: 2560px) ) { .table-div{ width: 104%; } }
how to increase the size of thetable cell in famdon
ListItemText style
ScreenToWorldPoint
mp select bed size
font sixe sss
can you use flex inside of grid
get app bar size
how to align swing elements
quasar text in center
convert elment in div to textarea with newline
eq height
usually placed in footer
whitespace take 2 lines
how to calculate the area of an elipse
box-sizing border-box not working
get size widget renderbox
top 5 majors
change caret visual studio
moto service center in gaya
an outer value of 'this' is shadowed by this container
pcl set point size
resize attribute css
Image size of 79817x90725 pixels is too large. It must be less than 2^16 in each direction.
parent to child communication
Height para texformfield
how to center botton
div[role="textbox"] { position: absolute; width: 1px; height: 1px; padding: 0; margin: -1px; overflow: hidden; clip: rect(0, 0, 0, 0); white-space: nowrap; border: 0; }
how to rotate screen on lenovo s145
ada data alignment
wpf textblock content alignment
text box expands as you type
infinite grid with tunnels where cost is K
css
cold fusion get first element of query row
webflow duplicate element
fluid format number
resolution for 22 inch
facebook pixel add below body
apply style to only children not grandchildren
use display inline and display flex at the same time
capacitor spalsh screen
textarea split on newline
lp show default media
can't adjust thead element's height
what is sG in fluid dynamics
How to set TextView textStyle such as bold, italic
flex
centerWithBootstrap1000times watched
css - ocultar texto
tk::PlaceWindow . center
underscore in footnote
ul ohne punkten
How do you translate from iterator-style positions to size_type-style positions?
resizer in vertical
cliptopadding
image view auto size gridlayout
cursor is wide visual studio
Rating with placeholder
Relatively position an element without it taking up space in document flow
Query the Western Longitude (LONG_W) for t
how to display text on ti-82
face centered cunic cell
flexbox directions
how to apply grid size change transition
horizontal list css
max file size
alignmentSieve ATACseq
borderpane width and height
mediapipe intellisense vs
full hd
Fullscreen method
Direct children
div after rotation some part not showing
table padding
vertical line
position of webelement
photoshp show 100% zoom
textfeild height adjust with lable height material ui
style rule that displays the footer only when both margins are clear of floating objects
how reduce the height of footer in astra
boreder collie height average
How can I change the font size of a label of a lightning field
.heading:not(:last-child) not working
how to test mediatr
foot
istio weight routing
if a directive called 10000 times screen getting struck
logic in css
wrap with widget
change span lines into X html, css, javaskript
span next to each other
allocate space in DRAM in spatial
relation between mediaquery breakpoint vmin vmax vh ciewport
nsetools fno lot size of index
align two buttons in same line
lvresize 100 free
the center tag actually works
criteo banner sizes
how to make some content of invoice seen in print media but not in screen
Row size too large (> 8126)
containerizing IIS
how to display the long message on a standard ISPF screen
Fullscreen method: Property 2
international Content
overlay.style.display:'grid';
does margin collapse work in the x axis
pdf make want to break page after four tables dynamically
modern orchestra size
using grid and flex together
uk grid system
align items in li
col-md-3 lines are not proper
count number of lines in csv without opening it
css in django
col-md gap remove
What is the importance of Earth’s Grid?
indent block section netbeans
div position by default
how does jazz use the pentatonic scale
advance logic in css
ol style none
how to remove border from text box in word
edge list tables
html colgroup
justify second line of ul
gtk-rs grid layout container
css hide all elements after nth
The Flex Time features accessed bt the track header of the main window by doing:
spacy all tag list
Fullscreen method: Property 1
comfort in the OT
nth term of gp
dynamic inject css property
differencve between flex and inline flex
why bodyparser is underline
at line 5, character 3The box-sizing property isn't supported in IE6 and IE7.
send response from iframe to parent
adjust padding view
rdkit mol to grid common core aligned
TH
my body shows on small screens
multi-styles
make a table cells overflow hidden?
how to show span above
How to use dimens.xml
if in csh
print a box like the ones below
How to limit homes in essentialsX
size of sparta
centralize button in container fluid or container
divi pricing table how to get them all the same size
Which position will keep the element in the same place of the screen regardless of scrolling?
why website is not opening 100% width on mobile
newspaper article segment into different parapgraphs stackoverflow
center content in div
col flex antd
jspdf text position
acrilic css 1
how to increase the size of result window in processng
textview does not display whole text
cs
responsive layout for phones
large blobs
textarea with border qml
vertical bar
ubuntu 20.04 can't use resize window option
flexin
rdkit mol to grid common core aligned
how to change the text style to capitalise in msword
blank space on the right side html
fig.show closes immediately
object property with space
container vs container fluid
align self center container vue
best freelacing fields
mac join csvs
measure view height android
popular class name layouts
rig
stacking div over another div
south park the fractured but whole
what does it mean block content in pug layout
how to center in htom
how to set media path and media root
html two elements on same line
positioning pymsgbox
windows forms set tablelayoutpanel cell size
20x30 inches in pixels
duurzaamheid
contenti in evidenza widows
Aligning different items in the navbar separately
What do you understand by the Stateful and Stateless widgets?
check browser support for css value
make div' width expand with content
how to find percentage of the height of website
flexbox gap polyfill
no media query ancestor
freecad pad offset
float vs inline
centralize div col
select all paragraph that contains image in css
reshape IML matrix
what happens when max children is reached
execcommand adds divs
div overlaps when using 100vh
ms word change length of tab
MEN'S LIGHTWEIGHT RECYCLED POLY-FILL PACKABLE JACKET
how to change the font of panel in delphi in code
size product
is display-block a class
squarespace mobile/desktop
increase width in template in fandom
auto add ... to textview when text to long
wofür steht alt auf der tastatur
geographical term for the shape of the land surface
img and paragrap in flexbox
how to move a block of code to the left
how to change font size in osu forum
how to move a row in word
target id in media query
styling links within a class except the last one
text size
itemize text indent
media queries
margin between elements
Screen position out of view frustum
Add the padding values, so that it has 10 pixels to the top, 15 pixels to the bottom, 5 pixels to the right, 10 pixels to the left:
avarge dick size
no bounding box yolo tesnorflow detection
Setting the same style for all controls.
reshape SAS matrix
image cut by div border
vertical alignment xaml
cv-card-title center text
padding
how to auto fit image in div
userform startup position multiple monitors
wrap categoryAxis label
ahk borderless fullscreen
query selector with semi colon
Grouping components in figma
increase svg thickness
scratch width and height
v-for max
counter-style css
how to add left right center to ckeditor 4
how to vertically align input fields
how to center text in osu forum
centralizar componente na div
split screen
font sizze xss
innodb page size change
how zoom png size inside of button
media query for mobile in react file
test for over/underfitting
Create a measure for custom text if sales is blank
ABC TO FLEX BATTLE IM POOR
how to add a phone number to my footer in divi
If you give a negative value to the "top" property, in which direction will the box be moved?
font resizewebsite
what is a Stevenson Screen and why do we use it.
owl-spacing--m
setting z index on before after pseudo classes
stanza four lines afrikaans
is it available move selected text right and left side
how to colpete duantless
calculate remaining height of body
maximum height formula straight up
how to make tge flex in your unorded list row
can I use headings in my review?
NTP Query
imagettftext word wrap
hide grid lines without removing scale in chartjs
inline block text align top
gridtemp
A RenderFlex overflowed by 230 pixels on the right
bootstrap, apply margin only on small screen
thymeleaf list size
index x out of bounds for length x
mat-table limit height
flex make width dynamic
mat grid tile align left and center
how to center div
how selenium grid looks like
xaml center label
position
unity app size
div tag in thml
how to adjust images inside card
how to customize highcharts
standard media query screen sizes
fiverr gig banner size
how to move the paragraphs in word
tyler one height
xml change font size of button text
space hostel
webix.event(window, "resize", function(e){ $$("layout").resize(); });
average grade 11 text collman lieu index
froala editor height auto
dospaly a div element on an external monitor
camel style programmin
position element with distance from center
how to align placeholder text swftui
offset_id
resize() { this.elementRef.nativeElement.style.height = '4px'; this.elementRef.nativeElement.style.height = this.elementRef.nativeElement.scrollHeight + 'p
latex text size
logan paul net worth
jake paul
markdown table
adblock
gold color code
github token
bootstrap navbar fixed top
color code for yellow
bootstrap center align columns
check postgres version
GridView in flutter
url shortener free
[core/no-app] No Firebase App '[DEFAULT]' has been created - call Firebase.initializeApp() flutter
uuid - npm
vim download
latex bullet points
list latex
electromagnetic spectrum
purple hex code
find my phone
tinder
.htaccess file
wordpress default htaccess
wordpress ht access file
arch
pm2 start npm start
pm2 start with name
flutter create new project
pm2 starts npm run start
find out process using port windows
myspace
flutter web
how to enable flutter web
flutter enable web command
angular style if
ngstyle
null safety error
Cannot run with sound null safety, because the following dependencies don't support
semantic ui cdn
flutter downgrade version
firebase update cloud method
firebase deploy only function
firebase cloud method update
firebase update cloud function
nginx 403 forbidden
2001 a space odyssey
watermark remover
pi symbol
overflow bootstrap
bootstrap overflow hidden
overflow hidden in bs4
windows 10 search bar not working
windows start menu is not working
google calculator'
npm concurrently
yarn concurrently
on back press in fragment
can you work overtime
code : uctrix
esxcel valori non poresenti in un elenco
india's first satellite naem
comentar codigo en visual studio code
first repeating character in a string
how to see how is connected to my wifi with nmap
Reverse Array Usign Stack
generate serial uuid with intelij
time complexity of algorithm calculator
reverse number in c
Your Flutter application is created using an older version of the Android embedding. It's being deprecated in favor of Android embedding v2. Follow the steps at
ableton live 10 lite seial code keygen
Can't create handler inside thread Thread[DefaultDispatcher-worker-1,5,main] that has not called Looper.prepare()
how to make a kills global leaderboard roblox
arduino string to char
installation of genymotion on ubuntu
donkey
The following untracked working tree files would be overwritten by merge: .idea/vcs.xml
graph representation
how to make circle image with cardview in android
grepper download
drupal 8 change theme default
definition of generic
Roblox Home
what is the <parent> tag maven
wp get author description
alt code for arrow pointing right
cheat seet traversy
hand calluses
name
pronunciation of incognito
reset select2 multi select
regex pattern that checks url
add i18n to vue 3
"913913fc-7d02-41b9-901e-3d4b451981d3"
textfield autocomplete off
how to share multiple images in kotlin
latex page layout
5alidshammout
d.drawLine(420, 530, 340, 470);
mongodb nested lookup
kotlin sleep
January 1 New Year's Day
optional arg discord py
numpy Original error was: DLL load failed: The specified module could not be found.
2415648743
yup validate date empty
proton not working black screen
error 503
Tcl get part of list
videoStream qt
stop tracking files git
k8s pod name env variable
188-131
how to get good at programming
is overriding only works with inherited methods?
sublime text 3 kod hizalama
what does server running on a computer means
ENOSPC
spark conf
os x yükleme virtualbox
cisco packet tracer ospf multi area
onfocus
ionic ios REST API CORS issue
@component @controller @repository @service
Can't resolve 'react-icons/fa'
replace in all files vscode
Could not find component agRichSelectCellEditor
how to do something old with among us
onclick stoppropagation
detect current route flutter
from google_images_download import google_images_download
set default value now() date
Driveblox unlimited
shit color hex
array reduce associative array php
nginx reload config
firebase auth dependency
dächen \ auf linux in terminal
run jest on single file
circumference equation
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted
contemporaneo
Mysql driver for python flask app
mac tftp server directory
sublime text 3 find in files exclude folder
32 bit integer min value
Why is there red squigely line under future in flutter
modify axis ggplot2
Display PPT in HTML
see nohup output of a running process
shopify remove header and footer
matrices multiplication in matlab
slurm job specify output directory
git remote add wrong thing how to remove
discord token grabber
javascript loop aray
ValueError: PyCapsule_GetPointer called with incorrect name
wisard of oz
set android sdk directory from android studio
Unit nginx.service is masked
which is better milk or orange juice
king gizzard and the lizard wizard
how to send data through discord webhook using batch script
mongoid collection return only specific fields
vscode 80 character line
delete a packete force apt
ValueError: tuple.index(x): x not in tuple
CORS Socket io v2
windows cannot find npm-build.html
is flutter free
convert array to Illuminate\Http\Request
Implementation restriction: ContentDocumentLink requires a filter by a single Id on ContentDocumentId or LinkedEntityId using the equals operator or multiple Id's using the IN operator.
builddata quantmod
AWS CLI get Availability Zones
flutter web get url
run jar file with different jre
lvresize 100 free
yoyoyoyooyoyoyoy
who plays tari in victorious
view quicktime height
how to remove a screen session
vue tel input
suicide
http
samba the specified network password is incorrect
of EAX, EBX, ECX, EDX or EIP registers for the following questions
kaspersky
Elasticsearch max virtual memory too low
libboost_thread.so.1.72.0: cannot open shared object file: No such file or directory
alexflipnote
angel fish size
bootstrap font asesome cdn
breeze starter kit
tmux scroll
SDK location not found. Define location with an ANDROID_SDK_ROOT environment variable or by setting the sdk.dir path in your project's local properties file
no partition predicate found for alias
lodash uniqby
firefox disable background update
qt autocomplete
what is multithreading
esx shownotification fivem
wc show number of lines in a folder
npm generate component component name command skip-import
sync google drive with dropbox
9.11 x 10^-31
how to exit root in linux
my localhost working slow
error: the sandbox is not in sync with the podfile.lock. run
Password command failed: 535-5.7.8 Username and Password not accepted.
dr evil
indian navy day
subquery in sql
autoreconf: automake failed with exit status: 1 gpaste error
what is indian contituition
tradurre pagina google
WPF Confirmation MessageBox
science.com
how to make a page that auto redirect after a few seconds
vs code contains emphasized items
ps4 emulator
figure bachground matlab color
Which of the following is the correct way to create Calendar objec
2 the Dart compiler exited unexpectedly. Exited (1)
datarow to datatable
echo preserve \n
digit sum codechef
Target class [UserController] does not exist.
ip address regex validate
percentage similirty two words vba
how to add the new resource version fivem
flutter uses unchecked or unsafe operations
else clause in xslt
how to mark part of your sentence as spoiler in discord
gradle application system.in
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 923: invalid continuation byte
delete shards in elasticsearch currl
cumulative some by date for each user
docker ps filter
vscode kite extention uninstall
shortcut for writing if condition
JUICE WRLD
command line rename folder mac
avast key generator
header access-control particular domain
why is there a massive health bar in the sky
what is the circulatory system cycle
copy clipboard with span
stroke thickness processing
select all paragraph that contains image in css
skyblock mod
Contact form 7 nach formular absenden redirect auf bestätigungsseite
Cannot read properties of undefined (reading 'navigate') agular
WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement
how to change state in a bottom sheet flutter
mongoose cursor eachasync
pow not working c
how to remove base_64 encoding of image in tinymce editor
. | https://www.codegrepper.com/code-examples/whatever/comfort+in+the+OT | CC-MAIN-2022-05 | en | refinedweb |
Scalable Vector Graphics (SVG) is one of the most widely used file image formats in applications. Because it offers several advantages over bitmap files, especially when it comes to retaining image quality while scaling, it’s difficult to start building a Flutter application without first considering using SVG.
In this article, you’ll learn how and when to use SVG files in a Flutter application.
Using SVG in Flutter
Skia, a 2D graphics library and core component of Flutter, can only serialize images into SVG files. Thus, decoding or rendering SVG images with Skia is not possible. Fortunately, the community comes to the rescue with a fantastic Dart-native package called flutter_svg that renders and decodes SVG quickly.
What is flutter_svg?
The flutter_svg package has implemented a picture cache that stores a ui.Picture class. This is the SkPicture wrapper of the Skia graphics engine, which records specific SVG rendering commands in binary mode.

The ui.Picture class does not consume much memory and is cached to avoid repeatedly parsing XML files. Let's take a look at the SvgPicture.asset constructor:
SvgPicture.asset(
  String assetName, {
  Key? key,
  this.matchTextDirection = false,
  AssetBundle? bundle,
  String? package,
  this.width,
  this.height,
  this.fit = BoxFit.contain,
  this.alignment = Alignment.center,
  this.allowDrawingOutsideViewBox = false,
  this.placeholderBuilder,
  Color? color,
  BlendMode colorBlendMode = BlendMode.srcIn,
  this.semanticsLabel,
  this.excludeFromSemantics = false,
  this.clipBehavior = Clip.hardEdge,
  this.cacheColorFilter = false,
}) : pictureProvider = ExactAssetPicture(
        allowDrawingOutsideViewBox == true
            ? svgStringDecoderOutsideViewBox
            : svgStringDecoder,
        assetName,
        bundle: bundle,
        package: package,
        colorFilter: svg.cacheColorFilterOverride ?? cacheColorFilter
            ? _getColorFilter(color, colorBlendMode)
            : null,
      ),
      colorFilter = _getColorFilter(color, colorBlendMode),
      super(key: key);
By looking at the implementation, you'll notice that the stream notifications from pictureProvider update the picture of SvgPicture.
void _resolveImage() {
  final PictureStream newStream = widget.pictureProvider
      .resolve(createLocalPictureConfiguration(context));
  assert(newStream != null); // ignore: unnecessary_null_comparison
  _updateSourceStream(newStream);
}
In this code block, the stream from the pictureProvider is populated with ui.Picture by a completer of the picture cache.
PictureStream resolve(PictureConfiguration picture, {PictureErrorListener? onError}) {
  // ignore: unnecessary_null_comparison
  assert(picture != null);
  final PictureStream stream = PictureStream();
  T? obtainedKey;
  obtainKey(picture).then<void>(
    (T key) {
      obtainedKey = key;
      stream.setCompleter(
        cache.putIfAbsent(
          key!,
          () => load(key, onError: onError),
        ),
      );
    },
  ).catchError((Object exception, StackTrace stack) async {
    if (onError != null) {
      onError(exception, stack);
      return;
    }
    FlutterError.reportError(FlutterErrorDetails(
        exception: exception,
        stack: stack,
        library: 'SVG',
        context: ErrorDescription('while resolving a picture'),
        silent: true, // could be a network error or whatnot
        informationCollector: () sync* {
          yield DiagnosticsProperty<PictureProvider>('Picture provider', this);
          yield DiagnosticsProperty<T>('Picture key', obtainedKey,
              defaultValue: null);
        }));
  });
  return stream;
}
Adding the flutter_svg plugin
To add this package to your Flutter dependencies, you can run:
flutter pub add flutter_svg
Alternatively, you can add flutter_svg to your pubspec.yaml file:
dependencies:
  flutter_svg: ^0.22.0
Make sure that you run flutter pub get either in your terminal or using your editor. Once installation is complete, import the package in your Dart code where you want to use this package:
import 'package:flutter_svg/flutter_svg.dart';
Using flutter_svg in your Flutter app
There are several ways to use this package, but we’ll cover the most common use cases.
One option is to load an SVG from an internal SVG file, which is typically stored in the assets folder:
// example
final String assetName = 'assets/image.svg';
final Widget svg = SvgPicture.asset(
  assetName,
);
You can also load an SVG file from a URL, like so:
// example
final Widget networkSvg = SvgPicture.network(
  '',
);
Finally, you can load an SVG from a string of SVG code:
// example
SvgPicture.string(
  '''<svg viewBox="...">...</svg>'''
);
Extending SVG functionalities in Flutter
Once you have your SVG file loaded, the first step is to change the color or tint of the image:
// example
final String assetName = 'assets/up_arrow.svg';
final Widget svgIcon = SvgPicture.asset(
  assetName,
  color: Colors.red,
);
Using a semantics label helps to describe the purpose of the image and enhances accessibility. To achieve this, you can add the semanticsLabel parameter. The semantic label will not be shown in the UI.

// example
final String assetName = 'assets/up_arrow.svg';
final Widget svgIcon = SvgPicture.asset(
  assetName,
  color: Colors.red,
  semanticsLabel: 'A red up arrow',
);
The flutter_svg package renders an empty box, LimitedBox, as the default placeholder if there are no height or width assignments on the SvgPicture. However, if a height or width is specified on the SvgPicture, a SizedBox will be rendered to ensure a better layout experience.
The placeholder can be replaced, though, which is great for improving the user experience, especially when loading assets via a network request where there may be a delay.
// example
final Widget networkSvg = SvgPicture.network(
  '',
  semanticsLabel: 'A shark?!',
  placeholderBuilder: (BuildContext context) => Container(
      padding: const EdgeInsets.all(30.0),
      child: const CircularProgressIndicator()),
);
In this example, I’ve chosen
CircularProgressIndicator to display a progress indicator while the image is loading. You may add any other widget that you wish. For example, you might use a custom loading widget to replace
CircularProgressIndicator.
Checking SVG compatibility with flutter_svg
You should know that the flutter_svg library does not support all SVG features. The package does, however, provide a helper method to ensure that you don’t render a broken image due to a lack of supported features.
// example
final SvgParser parser = SvgParser();
try {
  parser.parse(svgString, warningsAsErrors: true);
  print('SVG is supported');
} catch (e) {
  print('SVG contains unsupported features');
}
Please note that the library currently only detects unsupported elements like the <style> tag, but does not recognize unsupported attributes.
Recommended Adobe Illustrator SVG configuration
To make the most of flutter_svg with Adobe Illustrator, you need to follow specific recommendations:
- Styling: choose presentation attributes instead of inline CSS because CSS is not fully supported
- Images: choose to embed, not link, to another file to get a single SVG with no dependency on other files
- Objects IDs: choose layer names in order to add a name to every layer for SVG tags, or use minimal layer names — the choice is yours!
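The styling recommendation comes down to the difference below. Both files draw the same shape, but only the second form avoids CSS; the file contents here are made up purely for illustration:

```xml
<!-- Inline CSS styling: only partially supported by flutter_svg -->
<svg viewBox="0 0 10 10">
  <style>.dot { fill: red; }</style>
  <circle class="dot" cx="5" cy="5" r="4"/>
</svg>

<!-- Presentation attributes: the safer choice for flutter_svg -->
<svg viewBox="0 0 10 10">
  <circle fill="red" cx="5" cy="5" r="4"/>
</svg>
```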
Rendering SVG files in another canvas
There may be times where you’d like to render your SVG into another canvas.
SVGPicture helps you easily achieve this:
// example
final String rawSvg = '''<svg viewBox="...">...</svg>''';
final DrawableRoot svgRoot = await svg.fromSvgString(rawSvg, rawSvg);
final Picture picture = svgRoot.toPicture();
svgRoot.scaleCanvasToViewBox(canvas);
svgRoot.clipCanvasToViewBox(canvas);
svgRoot.draw(canvas, size);
Conclusion
Using SVG files can be a great addition to your Flutter app, but SVG isn’t always the right answer to all of your image problems. It’s crucial to observe your use cases and measure your app and SVG performance continuously, as you might need to replace SVG with another standard image format, such as PNG or JPEG.
Although Flutter doesn’t support SVG natively, the Dart-native flutter_svg package provides excellent, performant support for SVG files. The package is relatively simple to use, too.
Keep in mind that the package version is still below 1.0.0, which means breaking API changes are still possible. However, the author has done a great job of keeping the API as stable as possible. When using flutter_svg, make sure you always check the latest version of the package on pub.dev to stay up to date. Thanks for reading!
OK, with some delicate soldering I can reclaim the two LED pins -- just remove the LEDs and solder wires in that place. That gives me 20 pins to work with. My keyboard has 88 keys. That means that if I make a matrix 8×11, I can support them all with 19 pins, and even have one pin left for a LED or something. Yay. But 8×11 is not exactly how the keyboard looks physically -- it's more like 5.5×16 (some columns have 5 rows, some have 6). So, to get 8×11, I will have to transpose it and merge every two neighboring columns together. That's doable, it just means I will have a fun time converting the layouts.
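The pin arithmetic above can be double-checked with a throwaway sketch (illustrative Python, not part of the firmware):

```python
# An R x C scan matrix needs R + C pins and can read R * C keys.
def pins_needed(rows, cols):
    return rows + cols

def keys_supported(rows, cols):
    return rows * cols

# 8 x 11 covers all 88 keys using 19 of the 20 available pins
assert keys_supported(8, 11) == 88
assert pins_needed(8, 11) == 19

# a 6 x 16 layout, closer to the physical arrangement, would need 22 pins
assert pins_needed(6, 16) == 22
```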
Now, let's look for some ready-to-use firmware, so that I don't have to do all this coding myself (not that it's very complicated, but I'm lazy). For that chip, this seems to be pretty popular:
First, I burned one of the example keyboards to the board with avrdude:
avrdude -p atmega32u4 -P /dev/ttyACM0 -c avr109 -U flash:w:gh60.hex
(you have to get it into the boot mode first by pressing reset right when it boots).
Then I connected some of the switches to some of the column/row pins, and pressed them -- and voila, it typed some letters! So the firmware works great.
Next, I will have to modify it to support my particular keyboard layout, with this almost square matrix. Looking at the matrix.c file in the examples, you can see code like:
```c
/* Row pin configuration
 * row: 0  1  2  3  4
 * pin: D0 D1 D2 D3 D5
 */
static void unselect_rows(void)
{
    // Hi-Z(DDR:0, PORT:0) to unselect
    DDRD  &= ~0b00101111;
    PORTD &= ~0b00101111;
}
```
Hmmm.... Does that mean I need to have all row pins on the same port? I need 8 rows, so that would be doable... Let's see... Nope. Whoever designed the Pro Micro, he or she left out a single pin from each port, so that no port has a complete set of pins broken out. Splendid. Let's look at the other examples...
OK, I can pretty much write anything I want in those functions, all I need is to initialize the pins I want and read them all into a single number with something like:
```c
static uint8_t read_rows(void)
{
    return (PIND & (1<<0) ? (1<<0) : 0) |
           (PIND & (1<<1) ? (1<<1) : 0) |
           (PIND & (1<<2) ? (1<<2) : 0) |
           (PIND & (1<<3) ? (1<<3) : 0) |
           (PIND & (1<<5) ? (1<<4) : 0) |
           (PINB & (1<<7) ? (1<<5) : 0);
}
```

Not very pretty, I bet I could write it nicer, but that should work.
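That bit-remapping is easy to get wrong, so here is a rough Python model of the function above for sanity-checking the masks (illustrative only, not firmware code):

```python
# Model of read_rows(): PIND bits 0-3 map straight to row bits 0-3,
# PIND bit 5 becomes row bit 4, and PINB bit 7 becomes row bit 5.
def read_rows(pind, pinb):
    rows = pind & 0b1111               # PIND bits 0-3
    rows |= ((pind >> 5) & 1) << 4     # PIND bit 5 -> row bit 4
    rows |= ((pinb >> 7) & 1) << 5     # PINB bit 7 -> row bit 5
    return rows

assert read_rows(0b00101111, 0b10000000) == 0b111111  # all six row bits set
assert read_rows(0b00000101, 0b00000000) == 0b000101
```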
OK, writing it all and writing the layout definition is going to take some time, but at least I know how to proceed. See you at the other end.
Dictionaries and Datagrams
January 24, 2001
This week XML-DEV examined two aspects relating to the textual encoding of XML: verbosity and multilingual elements.
Text Compression
It seems that many developers, used to dealing with binary data formats, are still uncomfortable about embracing text formats like XML. Yet the received wisdom is that switching to a binary format does not offer many advantages. Surprised by this counter-intuitive viewpoint, Mark Papiani caused XML-DEV to debate the merits of binary encodings.
This prompted David Megginson to provide some anecdotal evidence supporting the use of text formats.
Eugene Kuznetsov observed that this is hardly surprising given that parse trees are generally larger than unparsed document instances.
Caroline Clewlow noted that the size increase is relative to the type of content being parsed. Adding quantitative data into the debate, David Megginson gave a worked example of binary versus text encoding. His long message is worth digesting in its entirety (Peter Murray-Rust would describe it as an "XML Jewel"), and its conclusions are worth repeating here.
Acknowledging the verbosity of XML encoding, Ken North observed that there were many other significant issues to consider in distributed applications.
Noting the interoperability benefits of XML, Danny Ayers pointed out that it may also help decouple system components.
In my opinion the interoperability afforded by XML far [outweighs].
While it seems that there are few performance gains to be had from a binary XML format, there are grounds for compressing XML during data exchange. Mark Beddow highlighted redundancy as one exploitable property of XML:
The apparent textual redundancy of xml tagging means tokenising compressors can really get to work...
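Beddow's point is easy to demonstrate with a general-purpose compressor. The sketch below uses Python's zlib on a deliberately repetitive document; it illustrates the redundancy argument, not any of the XML-specific schemes discussed here:

```python
import zlib

# heavily repeated tag names, as in typical data-oriented XML
xml = "<items>" + "<item><name>widget</name></item>" * 100 + "</items>"
raw = xml.encode("utf-8")
packed = zlib.compress(raw)

# the repeated markup compresses to a small fraction of its raw size
assert len(packed) < len(raw) / 10
assert zlib.decompress(packed) == raw
print(len(raw), "->", len(packed))
```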
The interested reader may care to look at a previous XML Deviant article, "Good Things Come In Small Packages," which reviewed an earlier discussion of compression and binary coding techniques. In addition to this, Stefano Mazzocchi's binary serialisation of SAX and Rick Jelliffe's STAX (Short TAgged XML) are worth examining. STAX is a lightweight compression technique, which Jelliffe believes could form one end of a spectrum of possible approaches.
I think it would be good to have (something like) this kind of ultra-low-end compression available (i.e. as a matter of compression negotiation), because I think many servers are [too].
Translation Dictionaries
One property of XML rarely commented upon is the language in which its schemas are expressed, though the Deviant reported on an earlier discussion relating to internationalisation of DTDs and Schemas ("Speaking Your Language"). The debate on XML verbosity prompted Don Park to raise the issue of long, verbose XML element names, noting that a standard like XML-DSIG is precluded from use in mobile, bandwidth-limited applications. This prompted Park to consider the use of abbreviated tag names, outlining some potential topics for discussion.
- should schemas be expanded or an alternate version be used?
- should a new namespace be defined or old namespace be reused?
- what role does RDDL play?
- should there be a dynamic abbreviation mechanism? (no, imho)
- how should abbreviated version of existing standards be created?
- should there be standard rules for abbreviating tag names?
Park later provided an example abbreviated version of the XML-DSIG DTD. This exchange moved Simon St. Laurent to share some thoughts on the use of 'dictionary resources', which have been suggested by the recent RDDL activity.
Eric van der Vlist was able to provide an example of what such a dictionary resource, or translation table, might look like.
The most promising aspect to this discussion is that while it covers some old ground (for XML-DEV at least), it comes at a time when the list is already collaboratively producing RDDL, arguably an important piece in the puzzle. While we may be covering ground already a year old, the potential for progress may be greater now, and there is more concrete experience available to draw upon.
QueryPageSettingsEventHandler Delegate
Definition
Represents the method that handles the QueryPageSettings event of a PrintDocument.
public delegate void QueryPageSettingsEventHandler(System::Object ^ sender, QueryPageSettingsEventArgs ^ e);
public delegate void QueryPageSettingsEventHandler(object sender, QueryPageSettingsEventArgs e);
type QueryPageSettingsEventHandler = delegate of obj * QueryPageSettingsEventArgs -> unit
Public Delegate Sub QueryPageSettingsEventHandler(sender As Object, e As QueryPageSettingsEventArgs)
Parameters

sender
The source of the event.

e
A QueryPageSettingsEventArgs that contains the event data.
Remarks
For more information on printing, see the System.Drawing.Printing namespace overview.
Nested pie charts
The following examples show two ways to build a nested pie chart in Matplotlib. Such charts are often referred to as donut charts.
```python
import matplotlib.pyplot as plt
import numpy as np
```
The most straightforward way to build a pie chart is to use the pie method.
In this case, pie takes values corresponding to counts in a group. We'll first generate some fake data, corresponding to three groups. In the inner circle, we'll treat each number as belonging to its own group. In the outer circle, we'll plot them as members of their original 3 groups.
The effect of the donut shape is achieved by setting a width to the pie's wedges through the wedgeprops argument.
```python
fig, ax = plt.subplots()

size = 0.3
vals = np.array([[60., 32.], [37., 40.], [29., 10.]])

cmap = plt.get_cmap("tab20c")
outer_colors = cmap(np.arange(3)*4)
inner_colors = cmap([1, 2, 5, 6, 9, 10])

ax.pie(vals.sum(axis=1), radius=1, colors=outer_colors,
       wedgeprops=dict(width=size, edgecolor='w'))

ax.pie(vals.flatten(), radius=1-size, colors=inner_colors,
       wedgeprops=dict(width=size, edgecolor='w'))

ax.set(aspect="equal", title='Pie plot with `ax.pie`')
plt.show()
```
However, you can accomplish the same output by using a bar plot on axes with a polar coordinate system. This may give more flexibility on the exact design of the plot.
In this case, we need to map x-values of the bar chart onto radians of a circle. The cumulative sum of the values is used as the edges of the bars.
```python
fig, ax = plt.subplots(subplot_kw=dict(projection="polar"))

size = 0.3
vals = np.array([[60., 32.], [37., 40.], [29., 10.]])

# normalize vals to 2 pi
valsnorm = vals/np.sum(vals)*2*np.pi
# obtain the ordinates of the bar edges
valsleft = np.cumsum(np.append(0, valsnorm.flatten()[:-1])).reshape(vals.shape)

cmap = plt.get_cmap("tab20c")
outer_colors = cmap(np.arange(3)*4)
inner_colors = cmap([1, 2, 5, 6, 9, 10])

ax.bar(x=valsleft[:, 0],
       width=valsnorm.sum(axis=1),
       bottom=1-size,
       height=size,
       color=outer_colors,
       edgecolor='w',
       linewidth=1,
       align="edge")

ax.bar(x=valsleft.flatten(),
       width=valsnorm.flatten(),
       bottom=1-2*size,
       height=size,
       color=inner_colors,
       edgecolor='w',
       linewidth=1,
       align="edge")

ax.set(title="Pie plot with `ax.bar` and polar coordinates")
ax.set_axis_off()
plt.show()
```
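The normalization and edge computation can be verified in isolation, without any plotting; this pure-Python sketch mirrors the two commented lines above:

```python
import math
from itertools import accumulate

vals = [60.0, 32.0, 37.0, 40.0, 29.0, 10.0]  # the flattened values

# normalize so the widths span the whole circle (2*pi radians)
widths = [v / sum(vals) * 2 * math.pi for v in vals]

# each bar's left edge is the cumulative sum of the widths before it
lefts = [0.0] + list(accumulate(widths))[:-1]

# the final edge plus the final width closes the circle exactly
assert math.isclose(lefts[-1] + widths[-1], 2 * math.pi)
```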
References
The use of the following functions, methods, classes and modules is shown in this example:
Hi everyone! Today I'm excited to announce easymoney: opensource library for operating with monetary values in JavaScript and Typescript.
We have published the first stable release, v1.0.0. In this post, we try to explain some of the motivation and briefly describe what is ready today and what to expect from our future plans and roadmap.
About the library
easymoney is a library for operating with monetary values in JavaScript and Typescript. It's an implementation of a pattern Martin Fowler's Money Type from "Patterns of Enterprise Application Architecture".
It's an old and widely used pattern that is implemented in many other languages e.g.:
Highlights
First-class Typescript support
Support all standard math operations
Custom currencies support
Support big number values
Support crypto
Support formatting
Principles that we put in our library
As small as possible bundle size
Nearly 100 percent coverage, proving reliability
Clear communications and transparency within the community
In-depth
First-class Typescript support
The library is written in Typescript. We really like Typescript and try to achieve flexibility between reliability and simplicity in its usage.
Support all standard math operations
Addition
```js
import { createMoney } from '@easymoney/money';

const money1 = createMoney({ amount: 100, currency: 'USD' });
const money2 = createMoney({ amount: 106, currency: 'USD' });

const money3 = money1.add(money2).getAmount();
// => 206
```
Multiplication
```js
import { createMoney } from '@easymoney/money';

const money1 = createMoney({ amount: 100, currency: 'USD' });
const money2 = createMoney({ amount: 2, currency: 'USD' });

const money3 = money1.multiply(money2).getAmount();
// => 200
```
Supports all standard math operations. Subtraction, multiplication, division, and so on.
Custom currencies support
```js
import { createCurrencyList } from "@easymoney/currencies";

const currencies = [
  { minorUnit: 2, code: "XBT" },
  { minorUnit: 5, code: "DXBT" }
];

const list = createCurrencyList(currencies);

list.contains("USD"); // => false
list.contains("XBT"); // => true
```
Depending on the application, the user may want the list of currencies used by the application to contain other fields, such as the view field to display the symbol (e.g., ₿), or may want to operate with other currencies that are not on the ISO list at all (e.g., various cryptocurrencies). For this reason, we thought it's important to give users the flexibility to customize, if they wish, how they will present currencies in their application.
Support big number values
We also support numbers bigger than Number.MAX_SAFE_INTEGER
Support custom and crypto currencies
```js
import { createMoneyCryptoFormatter } from "@easymoney/crypto-formatter";
import { createMoney } from "@easymoney/money";

const BTC = { code: "BTC", minorUnit: 8 };

const money = createMoney({ amount: 6, currency: BTC });

money.getCurrency();
// => { code: "BTC", minorUnit: 8 }

const formattedValue = createMoneyCryptoFormatter().format(money);
// => 0.00000006BTC
```
We understand that the use of cryptocurrencies is on the rise, so we consider it necessary to give our users who work in this area a convenient API for their daily tasks. Now, out of the box we support only LTC, ETH, BTC, but we can expand this list in future releases.
Support formatting
Formatting ISO currencies with Intl.NumberFormat
```js
import { createMoneyIntlFormatter } from "@easymoney/formatter";
import { createMoney } from '@easymoney/money';

const money = createMoney({ amount: 5, currency: "USD" });
const money1 = createMoney({ amount: 50, currency: "USD" });

const formatted = createMoneyIntlFormatter().format(money);
// => "$0.05"

const formatted1 = createMoneyIntlFormatter()
  .format(money1, "en-US", { minimumFractionDigits: 1, maximumFractionDigits: 1 });
// => "$0.5"
```
Formatting cryptocurrencies
```js
import { createMoneyCryptoFormatter } from "@easymoney/crypto-formatter";
import { createMoney } from '@easymoney/money';
import { cryptoCurrenciesMap } from "@easymoney/currencies";

const money = createMoney({ amount: 5, currency: "LTC" });

const formatted = createMoneyCryptoFormatter().format(money);
// => "0.00000005LTC"

const money1 = createMoney({ amount: 5, currency: cryptoCurrenciesMap.ETH });

const formatted1 = createMoneyCryptoFormatter().format(money1);
// => "0.000000000000000005ETH"

const money2 = { amount: 5, currency: "ETH" };

const formattedValue = createMoneyCryptoFormatter().format(createMoney(money2), { currencyPosition: -1 });
// => ETH0.000000000000000005
```
Modular API
Our library is divided into different packages. For example:
@easymoney/crypto-formatter –– crypto currency formatting;
@easymoney/formatter –– formatting ISO currencies using Intl.NumberFormat;
@easymoney/money –– work with monetary values with numbers that fit into Number.MAX_SAFE_INTEGER;
@easymoney/bignumber.js –– works with monetary values of any range, integrated with the library bignumber.js;
@easymoney/currencies –– ISO and crypto currency lists (e.g. createCurrencyList, cryptoCurrenciesMap) and helpers for defining custom ones.
We tried to build the architecture so that the functionality of the library is split by domain as much as possible. This allows maximum flexibility in the construction of third-party modules (more on this below), and it also keeps the final bundle size as small as possible, so you only download the part of the functionality you require.
Reliability
We believe that good software is reliable software. Therefore, in addition to types, the guarantee of reliability should be backed by testing. With this in mind, we pay a great deal of attention to testing. Most of the code is covered by unit tests, but we also use property-based testing with the fast-check tool to catch possible untested branches that are not always visible with conventional unit tests. We think that modern JavaScript has all the tools needed to ensure the reliability of the software being developed.
Also, we use Codecov to make sure the coverage doesn't decrease between releases.
Transparency
It's an open source product, so the community must come first. Keeping this in mind, we want this product to be transparent with the community, so everyone can get quick feedback or find what they need. That's why we are going to pay a lot of attention to the documentation and to the quickest possible resolution of user problems.
For this we have taken the following steps:
- We have an open roadmap, where you can track the progress of future features that you can suggest in issues.
- We try to explain all motivations in getting-started guides, and a future series of articles will describe all possible functionality and problems. The first one is already written.
- We will keep detailed releases so you can always track changes in the library.
- We are going to use templates for issues so that you can find similar problems in the issue tracker without much effort.
We realize that there is still a lot of work ahead of us on documentation and on adding the necessary functionality for users. We are now actively working on documentation and adding guides on how to use our library. In parallel, we will try to implement new features as far as possible. If you want to help, you can always reach out on my Twitter; I will try to find some work for you, and thank you very much for your help.
Thank you
Thanks for reading the post and for your time. Big thanks to the people who helped me finish this project, especially Jan Janucewicz, who helped with integrating bignumber.js and put great effort into tests and documentation.
If you find bugs, please report them on our Github issues. Alternatively, you can always ask me on Twitter.
Feel free to ask questions, to express any opinion, and discuss this from your point of view. Make code, not war. ❤️
Modin (Pandas on Ray)
Modin, previously Pandas on Ray, is a dataframe manipulation library that allows users to speed up their pandas workloads by acting as a drop-in replacement. Modin also provides support for other APIs (e.g. spreadsheet) and libraries, like xgboost.
```python
import modin.pandas as pd
import ray

ray.init()

df = pd.read_parquet("s3://my-bucket/big.parquet")
```
You can use Modin on Ray with your laptop or cluster. In this document, we show instructions for how to set up a Modin compatible Ray cluster and connect Modin to Ray.
Note
In previous versions of Modin, you had to initialize Ray before importing Modin. As of Modin 0.9.0, This is no longer the case.
Using Modin with Ray’s autoscaler
In order to use Modin with Ray’s autoscaler, you need to ensure that the correct dependencies are installed at startup. Modin’s repository has an example yaml file and set of tutorial notebooks to ensure that the Ray cluster has the correct dependencies. Once the cluster is up, connect Modin by simply importing.
```python
import modin.pandas as pd
import ray

ray.init(address="auto")

df = pd.read_parquet("s3://my-bucket/big.parquet")
```
As long as Ray is initialized before any dataframes are created, Modin will be able to connect to and use the Ray cluster.
Modin with the Ray Client
When using Modin with the Ray Client, it is important to ensure that the cluster has all dependencies installed.
```python
import modin.pandas as pd
import ray
import ray.util

ray.init("ray://<head_node_host>:10001")

df = pd.read_parquet("s3://my-bucket/big.parquet")
```
Modin will automatically use the Ray Client for computation when the file is read.
How Modin uses Ray
Modin has a layered architecture, and the core abstraction for data manipulation is the Modin Dataframe, which implements a novel algebra that enables Modin to handle all of pandas (see Modin’s documentation for more on the architecture). Modin’s internal dataframe object has a scheduling layer that is able to partition and operate on data with Ray.
Dataframe operations
The Modin Dataframe uses Ray tasks to perform data manipulations. Ray Tasks have a number of benefits over the actor model for data manipulation:
Multiple tasks may be manipulating the same objects simultaneously
Objects in Ray’s object store are immutable, making provenance and lineage easier to track
As new workers come online the shuffling of data will happen as tasks are scheduled on the new node
Identical partitions need not be replicated, especially beneficial for operations that selectively mutate the data (e.g. fillna).
Finer grained parallelism with finer grained placement control
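The "split into partitions, map a task over each, recombine" idea behind these benefits can be imitated very roughly with stdlib concurrency. This is a conceptual sketch only; Modin's real engine schedules Ray tasks over its Dataframe partitions rather than threads over lists:

```python
from concurrent.futures import ThreadPoolExecutor

def split(rows, n_parts):
    # chop a list into roughly equal contiguous partitions
    k = max(1, len(rows) // n_parts)
    return [rows[i:i + k] for i in range(0, len(rows), k)]

def apply_per_partition(rows, fn, n_parts=4):
    # apply fn to every element, one worker per partition
    parts = split(rows, n_parts)
    with ThreadPoolExecutor() as pool:
        mapped = pool.map(lambda part: [fn(x) for x in part], parts)
    return [x for part in mapped for x in part]

doubled = apply_per_partition(list(range(10)), lambda x: x * 2)
assert doubled == [x * 2 for x in range(10)]
```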
Machine Learning
Modin uses Ray Actors for the machine learning support it currently provides. Modin’s implementation of XGBoost is able to spin up one actor for each node and aggregate all of the partitions on that node to the XGBoost Actor. Modin is able to specify precisely the node IP for each actor on creation, giving fine-grained control over placement - a must for distributed training performance.
The Official Code42 Python API Client
Project description
py42, the official Code42 Python SDK
py42 is a Python wrapper around the Code42 REST APIs that also provides several other useful utility methods. It is designed to be used for developing your own tools for working with Code42 data while avoiding the overhead of session / authentication management.
Requirements
- Python 3.6.0+
- Code42 Server 6.8.x+ or cloud environment (e.g. console.us.code42.com or crashplan.com)
Installation
Run the setup.py script to install the py42 package and its dependencies on your system. You will likely need administrative privileges for this.
$ python setup.py install
Hello, py42
Here's a simple example to verify the installation and your server/account.
Launch the Python interpreter
$ python
Import a couple essentials
>>> import py42.sdk
>>> import py42.util as util
Initialize the client.
>>> sdk = py42.sdk.from_local_account("", "john.doe", "password")
or alternatively
>>> sdk = py42.sdk.from_jwt_provider("", jwt_provider_function)
Get and print your user information.
>>> response = sdk.users.get_current()
>>> util.print_response(response)
You should see something like the following:
{ "username": "john.doe", "orgName": "ACME Organization", "userId": 123456, "emailPromo": true, "licenses": [], "modificationDate": "2018-08-29T15:32:56.995-05:00", "blocked": false, "usernameIsAnEmail": true, "userUid": "1234567890abcdef", "userExtRef": null, "email": "john.doe@acme.com", "status": "Active", "localAuthenticationOnly": false, "orgUid": "123456789123456789", "passwordReset": true, "active": true, "creationDate": "2012-01-16T11:25:43.545-06:00", "orgType": "BUSINESS", "firstName": "John", "lastName": "Doe", "notes": null, "orgId": 123456, "quotaInBytes": -1, "invited": false }
Configuration
There are a few default settings that affect the behavior of the client.
To override these settings, import py42.settings and override values as necessary before creating the client.
For example, to disable certificate validation in a dev environment:
```python
import py42.sdk
import py42.settings as settings
import logging

settings.verify_ssl_certs = False

# customize logging
custom_logger = logging.getLogger("my_app")
handler = logging.FileHandler("my_app.log")
custom_logger.addHandler(handler)

settings.debug.logger = custom_logger
settings.debug.level = logging.DEBUG

sdk = py42.sdk.from_local_account("", "my_username", "my_password")
```
Usage
The SDK object provides access to APIs across the Code42 environment, including storage nodes.
```python
import py42.sdk

sdk = py42.sdk.from_local_account("", "my_username", "my_password")

# clients are organized by feature groups and accessible under the sdk object

# get information about the current user.
current_user = sdk.users.get_current()

# page through all devices available to this user.
for device_page in sdk.devices.get_all():
    for device in device_page["computers"]:
        print(device)

# page through all orgs available to this user.
for org_page in sdk.orgs.get_all():
    for org in org_page["orgs"]:
        print(org)

# save a copy of a file from an archive this user has access to
# into the current working directory.
stream_response = sdk.archive.stream_from_backup("/full/path/to/file.txt", "1234567890")
with open("/path/to/my/file", 'wb') as f:
    for chunk in stream_response.iter_content(chunk_size=128):
        if chunk:
            f.write(chunk)

# search file events
from py42.sdk.queries.fileevents.file_event_query import FileEventQuery
from py42.sdk.queries.fileevents.filters import *

query = FileEventQuery.all(MD5.eq("e804d1eb229298b04522c5504b8131f0"))
file_events = sdk.securitydata.search_file_events(query)
```
Additional Resources
For complete documentation on the Code42 web API that backs this SDK, here are some helpful resources:
TZIP-12 Token Metadata
The @taquito/tzip12 package allows retrieving metadata associated with tokens of an FA2 contract. You can find more information about the TZIP-12 standard here.
How to use the tzip12 package
The package can act as an extension to the well-known Taquito contract abstraction.
- We first need to create an instance of Tzip12Module and add it as an extension to our TezosToolkit. The constructor of the Tzip12Module takes an optional MetadataProvider as a parameter.

```js
import { TezosToolkit } from '@taquito/taquito';
import { Tzip12Module } from '@taquito/tzip12';

const Tezos = new TezosToolkit('rpcUrl');
Tezos.addExtension(new Tzip12Module());
```
Note that the Tzip16Module and Tzip12Module use the same MetadataProvider. If you have already set Tezos.addExtension(new Tzip16Module());, you can omit this step.
- Use the tzip12 function to extend a contract abstraction

```js
const contract = await Tezos.contract.at("contractAddress", tzip12)
```
The compose function
The contract abstraction can also be extended to a Tzip12ContractAbstraction and a Tzip16ContractAbstraction (at the same time) by using the compose function. Thus, all methods of the ContractAbstraction, Tzip12ContractAbstraction and Tzip16ContractAbstraction classes will be available on the contract abstraction instance.
```js
import { compose } from '@taquito/taquito';

const contract = await Tezos.contract.at('contractAddress', compose(tzip16, tzip12));

await contract.storage(); // ContractAbstraction method
await contract.tzip12().getTokenMetadata(1); // Tzip12ContractAbstraction method
await contract.tzip16().getMetadata(); // Tzip16ContractAbstraction method
```
Get the token metadata
There are two scenarios to obtain the metadata of a token:
- They can be obtained from executing an off-chain view named token_metadata present in the contract metadata
- or from a big map named token_metadata in the contract storage.
The getTokenMetadata method of the Tzip12ContractAbstraction class will find the token metadata with precedence for the off-chain view, if there is one, as specified in the standard.
The getTokenMetadata method returns an object matching this interface:
```ts
interface TokenMetadata {
    token_id: number,
    decimals: number,
    name?: string,
    symbol?: string,
}
```
note
If additional metadata values are provided for a token_id, they will also be returned.
Here is a flowchart that summarizes the logic performed internally when calling the getTokenMetadata method:
*Note: If there is a URI in the token_info map and other keys/values in the map, all properties will be returned (properties fetched from the URI and properties found in the map). If the same key is found at the URI location and in the token_info map and their values differ, precedence is accorded to the value from the URI.
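The precedence rule in that note can be sketched as a plain dictionary merge (an illustration of the rule only; the real logic lives inside @taquito/tzip12):

```python
# token_info entries fetched from the URI win over entries stored
# directly in the big map when the same key appears in both.
def merge_token_info(from_big_map, from_uri):
    merged = dict(from_big_map)
    merged.update(from_uri)  # URI values take precedence on conflicts
    return merged

info = merge_token_info({"name": "on-chain", "decimals": "6"}, {"name": "from-uri"})
assert info == {"name": "from-uri", "decimals": "6"}
```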
Example where the token metadata are obtained from an off-chain view token_metadata
The same result can also be obtained by calling the off-chain view token_metadata using the taquito-tzip16 package:

Note that an off-chain view all-tokens should also be present in the contract metadata, allowing the user to know with which token ID the token_metadata can be called.
Example where the token metadata are found in the big map %token_metadata
note
To be TZIP-012 compliant, the type of the big map %token_metadata in the storage of the contract should match the following type:
- Michelson
- JSON Michelson
```
(big_map %token_metadata nat
  (pair (nat %token_id)
        (map %token_info string bytes)))
```
```js
{
  prim: 'big_map',
  args: [
    { prim: 'nat' },
    {
      prim: 'pair',
      args: [
        { prim: 'nat', annots: ['%token_id'] },
        { prim: 'map', args: [{ prim: 'string' }, { prim: 'bytes' }], annots: ['%token_info'] }
      ]
    }
  ],
  annots: ['%token_metadata']
}
```
Otherwise, the token metadata won't be found by the getTokenMetadata method, and a TokenMetadataNotFound error will be thrown.
For more information on the contracts used in the examples:
integration-tests/tzip12-token-metadata.spec.ts
Remove a link to a file
```c
#include <unistd.h>

int unlink( const char * path );
```
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The unlink() function removes a link to a file:
```c
#include <unistd.h>
#include <stdlib.h>

int main( void )
{
    if( unlink( "vm.tmp" ) ) {
        puts( "Error removing vm.tmp!" );
        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}
```
POSIX 1003.1
chdir(), chmod(), close(), errno, getcwd(), link(), mkdir(), open(), pathmgr_symlink(), pathmgr_unlink(), remove(), rename(), rmdir(), stat(), symlink()
Looking for the top Go frameworks for the web? You came to the right place.
Go is a multiparadigm, statically-typed, and compiled programming language designed by Google. It is similar to C, and if you’re a fan of C, this can be an easy language to pick up. Many developers have embraced this language because of its garbage collection, memory safety, and structural typing system.
According to the 2020 Stack Overflow developer survey, Go is now considered the fifth most “loved” language on the site and the third most “wanted” language among developers who do not yet know it.
Go is mostly used for web applications, which is why we will look at the top five Go web frameworks and their features to see which is best for your own development.
In this post, we’ll review the reasons to use Go, the pros and cons of using Go frameworks, and five current top Go frameworks, including:
And now, let’s get into it.
Why use Go?
Before reviewing five top Go frameworks, what is Go truly used for? Aside from building general web applications, the language’s scope encompasses a wide range of use cases:
- Command line application
- Cloud-native development
- Creating utilities and stand-alone libraries
- Developing databases, such as CockroachDB
- Game development
- Development operations
Go web frameworks were created to ease Go web development, letting developers skip setup boilerplate and focus on a project's functionality.
Using Go without a framework is possible, but it is much more tedious, and developers must constantly rewrite code. This is where web frameworks come in.
With frameworks, for instance, instead of writing a wrapper around a database connection in every new project, developers can just pick a favorite framework and focus more on the business logic.
Pros of using Go web frameworks
Before we look into five top Go web frameworks, let’s review a few reasons why Go is popular.
Static typing
Static typing provides better performance at runtime because statically typed programs can be heavily optimized at compile time, which is why the language is often used to build high-performance applications.
Static typing also finds hidden problems like type errors. For example, if I need to create an integer variable, the compiler now notes it is an integer and only accepts an integer. This makes it easier to manage code for larger projects.
Available packages
A lot of developers have created production-ready packages on top of Go standard packages. These packages often become the standard libraries for specific features. For example, Gorilla Mux was created for routing by the community because the initial Go router is quite limited.
Many Go packages are available on GitHub, including drivers and clients for MongoDB, Redis, and MySQL.
Fast development
Development with these frameworks is fast and simple. Packages are already available and can be imported easily, eliminating the need to write redundant code, which is a win for developers.
Built-in concurrency
Go’s Goroutines, which provide simple concurrency, provide language-level support for concurrency, lightweight threads, strict rules for avoiding mutation to disallow race conditions, and overall simplicity.
Cons of using Go frameworks
The only true con to be aware of when using Go frameworks is error handling. Handling errors in Go is still cumbersome and noisy, because every call returns an error alongside its result, and the strict typing makes the resulting code verbose to write.
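A minimal illustration of that verbosity (the function name here is made up for the example): every fallible call returns a value and an error, and the caller must check and propagate the error by hand.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseAndDouble shows the explicit error plumbing the text describes:
// the (value, error) pair must be checked at every call site.
func parseAndDouble(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", s, err)
	}
	return n * 2, nil
}

func main() {
	if v, err := parseAndDouble("21"); err == nil {
		fmt.Println(v) // 42
	}
	if _, err := parseAndDouble("oops"); err != nil {
		fmt.Println("got error:", err)
	}
}
```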
5 top Go frameworks
Gin
Gin is an HTTP web framework written in Go that is immensely popular with over 50k stars on Github at the time of posting.
Currently, it is most popular for building microservices because it allows a simple way to build a request-handling pipeline where you can plug in middlewares.
It also boasts of a Martini-like API and, according to Gin’s GitHub page, is 40x faster because of its httprouter. Below are some of its amazing features.
Gin general features
Error management
Gin offers convenient error management. This means when encountering any errors during an HTTP request, Gin documents the errors as they occur:
c.AbortWithStatusJSON(400, gin.H{
    "error": "Blah blahhh",
})

// continue
c.JSON(200, gin.H{
    "msg": "ok",
})
Creating middleware
It’s also incredibly easy to create middleware, which can be plugged into the request pipeline by creating a router with
r := gin.New() and adding a logger middleware with
r.Use(gin.Logger()).
You can also use a recovery middleware with
r.Use(gin.Recovery()).
Gin’s performance
Gin’s performance is thanks to its route grouping and small memory. Gin’s grouping ability for routes lets routes in Gin nest infinitely without it affecting performance.
Its fast performance is also thanks to its small memory, which Gin uses or references while running. The more memory usage the server consumes, the slower it gets. Since Gin has a low memory footprint, it provides faster performance.
JSON validation
Finally, Gin provides support for JSON validation. A request carrying JSON can have its required values validated, such as input data from the client. These values must be validated before being saved in memory, so validating them helps developers avoid saving inaccurate values.
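In Gin itself this is done by tagging struct fields with `binding:"required"` and calling c.ShouldBindJSON inside a handler. The underlying idea can be sketched with only the standard library (the struct and function names here are illustrative, not Gin's API):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// LoginForm mirrors the kind of struct you would bind a request body
// to in a Gin handler.
type LoginForm struct {
	User     string `json:"user"`
	Password string `json:"password"`
}

// parseLogin decodes the body and rejects it when a required value is
// missing, before anything is saved.
func parseLogin(body []byte) (LoginForm, error) {
	var f LoginForm
	if err := json.Unmarshal(body, &f); err != nil {
		return f, err
	}
	if f.User == "" || f.Password == "" {
		return f, errors.New("user and password are required")
	}
	return f, nil
}

func main() {
	_, err := parseLogin([]byte(`{"user":"sam"}`))
	fmt.Println(err) // the password field is missing, so this is a validation error
}
```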
Gin is a simple, easy-to-use framework. If you are just starting out with Golang, Gin is often recommended as the ideal first framework because it is minimal and straightforward to use.
For a jumpstart tutorial, you can check this link.
Beego
Beego is another Go web framework that is mostly used to build enterprise web applications with rapid development.
Beego has four main parts that make it a viable Go framework:
- Base modules, which contain log, config, and governor
- A webserver
- Tasks, which work similarly to cron jobs
- A client, which houses the ORM, httplib, and cache modules
Beego general features
Supports enterprise applications
Because Beego focuses on enterprise applications, which tend to be very large with a lot of code powering a lot of features, a modular structure arranges modules for specific use cases, optimizing performance.
So, the framework provides a great modular structure for things like a configuration module, logging module, and caching module.
It also uses a regular MVC architecture to handle specific development aspects in an app, which is also beneficial for enterprise applications.
Supports namespace routing
Beego also supports namespace routing, which defines where the Controller is located for a Route. Below is an example:
func init() {
    ns := beego.NewNamespace("/v1",
        beego.NSRouter("/auth", &controllers.AuthController{}),
        beego.NSRouter("/scheduler/task", &controllers.TaskController{}),
    )
    beego.AddNamespace(ns)
}
Beego’s automated API documentation through Swagger provides developers the automation they need to create API documentation without wasting time manually creating it.
Route annotation lets developers define any component as the route target for a given URL. This means routes do not need to be registered in the route file again; the controller only needs to be registered with Include.
With the following route annotation, Beego parses and turns them into routes automatically:
// Weather API
type WeatherController struct {
    web.Controller
}

func (c *WeatherController) URLMapping() {
    c.Mapping("StaticBlock", c.StaticBlock)
    c.Mapping("AllBlock", c.AllBlock)
}

// @router /staticblock/:key [get]
func (this *WeatherController) StaticBlock() {
}

// @router /all/:key [get]
func (this *WeatherController) AllBlock() {
}
Then, register the Controller:
web.Include(&WeatherController{})
Iris
Iris is an Express.js-equivalent web framework that is easier to use for people coming from the Node.js community.
It comes with Sessions, API versioning, WebSocket, dependency injection, WebAssembly, the typical MVC architecture, and more, making it very flexible with third-party libraries.
With over 20k stars on GitHub, Iris is most loved because of its simplicity and the ability to extend the framework with personal libraries quickly.
Iris general features
As discussed, one of Iris's main features is that it is fully compatible and flexible with external libraries, letting users pick and choose what they want to use with the framework.
With a built-in logger for printing and logging server requests, users don’t need to use something external, cutting down the complexity of using Iris.
Like Beego, it provides MVC support for larger applications and its automated API versioning makes adding new integrations convenient by placing them in newer versions of the route.
Iris’s smart and fast compression provides faster performance, and testing is a breeze with the Ngrok integration, which lets developers share a local webserver with other developers for testing.
One great thing about this specific framework is that the author replies to issues on GitHub quickly, making it helpful when running into bugs.
Echo
Echo is another promising framework created by Labstack with 20k stars on GitHub. It is also regarded as a micro framework, which is more of a standard library and a router, and has fully-baked documentation for developers to use and understand.
This framework is great for people who want to learn how to create APIs from scratch, thanks to its extensive documentation.
Echo general features
Echo lets developers define their own middleware and has built-in middleware to use as well. This gives developers the ability to create custom middleware for specific functionality, while the built-in middleware speeds up development.
Echo also supports HTTP/2 for faster performance and an overall better user experience.
Its API also supports a variety of HTTP responses like JSON, XML, stream, blob, file, attachment, inline, and customized central HTTP error handling.
Finally, it supports a variety of templating engines, providing the flexibility and convenience developers need when choosing an engine.
Fiber
Fiber is another Express.js-like web framework written in Go that boasts low memory usage and rich routing. Built on top of Fasthttp, the fastest HTTP engine for Go, Fiber is one of the fastest Go frameworks.
Created with a focus on minimalism and the Unix philosophy of simple, modular software, Fiber is meant to let new Go developers begin creating web applications quickly.
Fiber general features
Fiber boasts a built-in rate limiter that helps reduce traffic to a particular endpoint. This is helpful, for example, when a user tries to sign in to an account repeatedly, which might indicate malicious activity.
Static files, like style sheets, scripts, and images, can be handled and served from the server; they are easily cached, consume less memory, and their content remains static across every request.
And its support for WebSocket bidirectional TCP connections is useful for creating real-time communications, like a chat system.
Like the other Go frameworks we’ve mentioned in this post, it has versatile middleware support, supports a variety of template engines, has low memory usage and footprint, and provides great documentation that is easy and clear for new users.
Conclusion
In this article, we looked at five top Go web frameworks. This list does not mean these are the best or indicates which you should choose. Your favorite might not be on the list but that does not stop it from being a better framework for you depending on your use case.
While some of these frameworks are inspired by others, and some are built to cover areas the others didn't, most of them share similar features. The best framework depends on your use case, so pick the framework that works best for your Go project. | https://blog.logrocket.com/5-top-go-web-frameworks/ | CC-MAIN-2022-05 | en | refinedweb |
Get the name of a slave pseudo-terminal device
#include <stdlib.h>

char *ptsname( int fildes );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The ptsname() function gets the name of the slave pseudo-terminal device associated with a master pseudo-terminal device.
The ptsname_r() function is a QNX Neutrino function that's a reentrant version of ptsname().
A pointer to a string containing the pathname of the corresponding slave device, or NULL if an error occurred (e.g. fildes is an invalid file descriptor, or the slave device name doesn't exist in the filesystem). | https://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/ptsname.html | CC-MAIN-2022-05 | en | refinedweb |
Provide advisory information about the expected use of memory
#include <sys/mman.h>

int posix_madvise( void *addr, size_t len, int advice );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The posix_madvise() function advises the memory manager how the application expects to use the data in the memory starting at address addr, and continuing for len bytes. The memory manager may use this information to optimize handling of the data. | https://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/posix_madvise.html | CC-MAIN-2022-05 | en | refinedweb |
5.2. Configuring monitoring to use OpenShift Container Storage
OpenShift Container Storage provides a monitoring stack that comprises of Prometheus and Alert Manager.
Follow the instructions in this section to configure OpenShift Container Storage as storage for the monitoring stack.
Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring.
Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details.
Prerequisites
- You have administrative access to OpenShift Web Console.
- OpenShift Container Storage Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators → Installed Operators to view installed operators.
- Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration → Cluster Settings → Cluster Operators to view cluster operators.
- A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage → Storage Classes to view available storage classes.
Procedure
- In the OpenShift Web Console, go to Workloads → Config Maps.
- Set the Project dropdown to openshift-monitoring.
- Click Create Config Map.
Define a new cluster-monitoring-config Config Map using the following example.
Replace the content in angle brackets (<, >) with your own values, for example, retention: 24h or storage: 40Gi.
Replace the storageClassName with the storage class that uses the provisioner openshift-storage.rbd.csi.ceph.com. In the example given below, the name of the storage class is ocs-storagecluster-ceph-rbd.
Example cluster-monitoring-config Config Map
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: <time to retain monitoring files, e.g. 24h>
      volumeClaimTemplate:
        metadata:
          name: ocs-prometheus-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: <size of claim, e.g. 40Gi>
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: ocs-alertmanager-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: <size of claim, e.g. 40Gi>
- Click Create to save and create the Config Map.
Verification steps
Verify that the Persistent Volume Claims are bound to the pods.
- Go to Storage → Persistent Volume Claims.
- Set the Project dropdown to openshift-monitoring.
Verify that 5 Persistent Volume Claims are visible with a state of Bound, attached to three alertmanager-main-* pods and two prometheus-k8s-* pods.
Monitoring storage created and bound
Verify that the new alertmanager-main-* pods appear with a state of Running.
- Go to Workloads → Pods.
- Click the new alertmanager-main-* pods to view the pod details.
- Scroll down to Volumes and verify that the volume has a Type, ocs-alertmanager-claim, that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0.
Persistent Volume Claims attached to alertmanager-main-* pod
Verify that the new prometheus-k8s-* pods appear with a state of Running.
- Click the new prometheus-k8s-* pods to view the pod details.
- Scroll down to Volumes and verify that the volume has a Type, ocs-prometheus-claim, that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0.
Persistent Volume Claims attached to prometheus-k8s-* pod
#include <CGAL/Tree_traits.h>
tree_point_traits is a template class that provides an interface to data items.
- A comparison relation, which must define a strict ordering of the objects of type Key. If defined, less<Key> is sufficient.
- The container Data, which defines the data type. It may consist of several data slots; one of these data slots has to be of type Key.
- The container Window, which defines the type of the query rectangle. It may consist of several data slots; two of these data slots have to be of type Key.
Version-Release number of selected component:
java-1.8.0-openjdk-devel-1.8.0.111-5.b16.fc25
Additional info:
reporter: libreport-2.8.0
backtrace_rating: 3
cmdline: /usr/lib/jvm/java-1.8.0/bin/java -Xmx1536m -Dfile.encoding=UTF-8 -Duser.country=DE -Duser.language=de -Duser.variant -cp /home/drindt/.gradle/wrapper/dists/gradle-2.14.1-all/8bnwg5hd3w55iofp58khbp6yv/gradle-2.14.1/lib/gradle-launcher-2.14.1.jar org.gradle.launcher.daemon.bootstrap.GradleDaemon 2.14.1
crash_function: crash_handler
executable: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/bin/java
global_pid: 6394
kernel: 4.9.5-200.fc25.x86_64
pkg_fingerprint: 4089 D8F2 FDB1 9C98
pkg_vendor: Fedora Project
runlevel: N 5
type: CCpp
uid: 1000
Truncated backtrace:
Thread no. 0 (10 frames)
#4 crash_handler at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/os/linux/vm/vmError_linux.cpp:106
#6 Klass::decode_klass_not_null at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/share/vm/oops/klass.inline.hpp:65
#7 oopDesc::klass at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/share/vm/oops/oop.inline.hpp:77
#8 oopDesc::size at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/share/vm/oops/oop.inline.hpp:613
#9 ObjectStartArray::object_start at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/share/vm/gc_implementation/parallelScavenge/objectStartArray.hpp:150
#10 ParallelScavengeHeap::block_start at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/share/vm/gc_implementation/parallelScavenge/parallelScavengeHeap.cpp:557
#11 os::print_location at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/share/vm/runtime/os.cpp:939
#12 os::print_register_info at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/os_cpu/linux_x86/vm/os_linux_x86.cpp:858
#13 VMError::report at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:542
#16 signalHandler at /usr/src/debug/java-1.8.0-openjdk-1.8.0.111-5.b16.fc25.x86_64/openjdk/hotspot/src/os/linux/vm/os_linux.cpp:4233
Created attachment 1244655 [details]
File: backtrace
Created attachment 1244656 [details]
File: cgroup
Created attachment 1244657 [details]
File: core_backtrace
Created attachment 1244658 [details]
File: dso_list
Created attachment 1244659 [details]
File: environ
Created attachment 1244660 [details]
File: limits
Created attachment 1244661 [details]
File: maps
Created attachment 1244662 [details]
File: mountinfo
Created attachment 1244663 [details]
File: namespaces
Created attachment 1244665 [details]
File: open_fds
Created attachment 1244666 [details]
File: proc_pid_status
Created attachment 1244667 . | https://bugzilla.redhat.com/show_bug.cgi?id=1416698 | CC-MAIN-2022-05 | en | refinedweb |
Answered by:
Calling WCF with BasicHttp using JQuery
Question
Hi,
I am trying to call a WCF service method exposed with basicHttp binding using Jquery. I am getting "Bad Request" error. I can sucessfully call the method if I expose it as REST service(webHttp binding).Could anyone help me
The code is as following:
var theRequest = " \
<s:Envelope xmlns:s=\"\"> \
<s:Header> \
<Action s:mustUnderstand=\"1\" xmlns=\"\"></Action> \
</s:Header> \
<s:Body> \
<GetData xmlns=\"\"> \
<value>8</value> \
</s:Body> \
</s:Envelope>";
$(document).ready(function () {
    $("#btnWCFBasicHttp").click(function () {
        $.ajax({
            type: "POST",
            url: "",
            data: theRequest,
            timeout: 10000,
            contentType: "text/xml",
            dataType: "xml",
            beforeSend: function (xhr) {
                xhr.setRequestHeader("SOAPAction", "");
            },
            async: false,
            success: function (data) {
                alert(data);
            },
            error: function (xhr, status, error) {
                alert(error);
            }
        });
    });
});
Also, how can I call a WCF service method using jQuery which has been exposed using "wsHttp" binding? Is it possible?
Thanks in advance.
ronit_rc Friday, December 30, 2011 11:25 AM
Answers
- Take a look at the code for this post at. The web.config has a <binding> element for wsHttpBinding which disables security.
Carlos Figueira Tuesday, January 3, 2012 7:38 PM
All replies
Yes, you can use jQuery to do that, but you'll need to format the request exactly as a WCF client would do. One easy way to verify that is to create one such a client (e.g. using svcutil to create a proxy), send a request using that proxy to the server and capture that using a tool such as fiddler. Then you'd capture what the jQuery page is sending, then compare with the one sent by the WCF client.
Below is an example of a basicHttpBinding service which can be called by jQuery.
Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb.svc:
<%@ ServiceHost Language="C#" Debug="true" Service="Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb.Service1" CodeBehind="Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb.svc.cs" %>
Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb.svc.cs:
namespace Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb
{
    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        int Add(int x, int y);
    }

    public class Service1 : IService1
    {
        public int Add(int x, int y)
        {
            return x + y;
        }
    }
}
Web.config:
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
  <services>
    <service name="Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb.Service1">
      <endpoint address="" binding="basicHttpBinding" contract="Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb.IService1"/>
    </service>
  </services>
</system.serviceModel>
JavaScript code:
function Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb() {
    var url = "";
    var request = "<s:Envelope xmlns:s=\"\">" +
        "<s:Body>" +
        "<Add xmlns=\"\">" +
        "<x>5</x><y>7</y>" +
        "</Add>" +
        "</s:Body>" +
        "</s:Envelope>";
    $.ajax({
        type: "POST",
        url: url,
        data: request,
        contentType: "text/xml",
        beforeSend: function (xhr) {
            xhr.setRequestHeader("SOAPAction", "");
        },
        success: function (data, textStatus, jqXHR) {
            $("#result").text(jqXHR.responseText);
        },
        error: function (jqXHR, textStatus, errorThrown) {
            $("#result").text("error");
        }
    });
}
Carlos Figueira
Friday, December 30, 2011 3:44 PM
- Proposed as answer by CarlosFigueiraMicrosoft employee Monday, January 2, 2012 3:10 PM
Hi Carlos,
Thanks for the answer!
Actually, I took the SOAP request from the WCF Test Client when calling the method. Could you please look into my code? I cannot understand what I am doing wrong.
Also, can I call a WCF service using jQuery which has been exposed with "wsHttp" binding?
Regards
ronit_rc Saturday, December 31, 2011 1:55 PM
The code seems fine, but without having the complete picture (svc, config, service code) it's hard to tell. Try comparing what jQuery is sending with what the WCF Test Client sends (HTTP headers, body) and see if there is anything different.
And yes, it's possible to call a wsHttpBinding-endpoint as well. As far as the server is concerned, it's just a HTTP request coming over the wire, it doesn't matter whether it's coming from a "real" WCF client, jQuery, or someone with a plain socket connection. It's going to be harder to create the request, though, as wsHttp requires a lot more data to be sent (in terms of SOAP headers). And if the binding uses security, it's going to be even harder.
Also, try enabling tracing at the server side; it should tell why the server is rejecting your request and give more information about what can be changed.
Carlos Figueira
Saturday, December 31, 2011 3:57 PM
- Edited by CarlosFigueiraMicrosoft employee Saturday, December 31, 2011 4:03 PM
Hi Carlos,
Thanks for your response.
Finally I noticed the mistake. I missed the </GetData> element in the SOAP request. Now my jQuery code is gracefully calling the WCF service exposed with "basicHttp" binding.
Now, I am trying to call a WCF service exposed with "wsHttp" binding. I am getting a (415 - Unsupported Media Type) error. Could you please help me with resolving this?
The SOAP request for the service is as follows:
var whRequest = "<s:Envelope xmlns:s=\"\" xmlns:a=\"\">" +
    "<s:Header>" +
    "</s:Header>" +
    "<s:Body>" +
    "<GetData xmlns=\"\">" +
    "<value>9</value>" +
    "</GetData>" +
    "</s:Body>" +
    "</s:Envelope>";
$(document).ready(function () {
$("#btnWCFWSHttp").click(function () {
$.ajax({
type: "POST",
url: "",
data: whRequest,
timeout: 10000,
contentType: "text/xml",
dataType: "xml",
beforeSend: function (xhr) {
xhr.setRequestHeader("SOAPAction", "");
},
async: false,
success: function (data) {
$(data).find("GetDataResponse").each(function () {
alert($(this).find("GetDataResult").text());
});
},
error: function (xhr, status, error) {
alert(error);
}
});
});
});
Regards
ronit_rc
Monday, January 2, 2012 9:37 AM
The SOAP request for a wsHttpBinding endpoint is different than the one for a basicHttpBinding - you'll need to again capture the request sent from a "real" WCF client (such as the WCF Test Client), then create a request which is similar to that one. The code below works for my service, but you'll need to see what needs to be sent to yours.
A few things I noticed are incorrect in your client code is that the content-type for SOAP 1.2 (which is what wsHttpBinding uses) is application/soap+xml, not text/xml (that's the actual cause of your 415 error); the SOAP header needs a "To" header with addressing information for the endpoint. Maybe there are more, you can check by comparing the two requests.
function Post_ca5941b4_bfcf_4546_a418_cc91a321c2fb_WS() { var" + "<s:Header>" + "<a:Action s:mustUnderstand=\"1\"></a:Action>" + "<a:MessageID>urn:uuid:9cc71731-811b-4baa-9db3-e6faa9a1c347</a:MessageID>" + "<a:ReplyTo><a:Address></a:Address></a:ReplyTo>" + "<a:To s:mustUnderstand=\"1\"></a:To>" + "</s:Header>" + "<s:Body>" + "<Add xmlns=\"\">" + "<x>44</x>" + "<y>55</y>" + "</Add>" + "</s:Body>" + "</s:Envelope>"; $.ajax({ type: "POST", url: url, data: request, contentType: "application/soap+xml", success: function (data, textStatus, jqXHR) { $("#result").text(jqXHR.responseText); }, error: function (jqXHR, textStatus, errorThrown) { $("#result").text("error"); } }); }
Carlos Figueira
Monday, January 2, 2012 3:10 PM
- Proposed as answer by CarlosFigueiraMicrosoft employee Monday, January 2, 2012 3:10 PM
Hi Calos,
Thanks again for your help.
I have rectified the content type to "application/soap+xml" and added the "To" header with addressing information for the endpoint. Now there is no (415 - Unsupported Media Type) error. But I am getting an "undefined" error with the following responseText:
"<s:Envelope xmlns:<s:Header><a:Action s:</a:Action><a:RelatesTo>urn:uuid:7fdde7b6-64c8-4402-9af1-cc848f15888f</a:RelatesTo></s:Header><s:Body><s:Fault><s:Code><s:Value>s:Sender</s:Value><s:Subcode><s:Value xmlns:a:BadContextToken</s:Value></s:Subcode></s:Code><s:Reason><s:Text xml:lang="en-GB".</s:Text></s:Reason></s:Fault></s:Body></s:Envelope>"
There should not be any problem with the action, since using the same action I can call the service method exposed by "basicHttp" binding.
Modified SOAP Request
var whRequest = "<s:Envelope xmlns:s=\"\" xmlns:a=\"\">" +
    "<s:Header>" +
    "<a:To s:mustUnderstand=\"1\"></a:To>" +
    "</s:Header>" +
    "<s:Body>" +
    "<GetData xmlns=\"\">" +
    "<value>9</value>" +
    "</GetData>" +
    "</s:Body>" +
    "</s:Envelope>";
You are helping me a lot. Could you please help me in resolving the problem?
Regards
ronit_rc Tuesday, January 3, 2012 5:27 AM
As I mentioned, you need to compare what is sent to your service by the WCF Test Client. One thing which may be happening is that if the binding uses security (which is the default for wsHttpBinding), then this request will be rejected; you need to either disable security or actually handle the handshake in the jQuery code itself, which will be really hard.
Carlos Figueira Tuesday, January 3, 2012 5:30 AM
Hi Carlos,
Thanks for the answer.
I have compared the requests once again but didn't find any difference.
Earlier you provided an example where you successfully called the "wsHttp"-bound WCF service in jQuery. Did you disable security at the WCF service end? Also, is there any example where WCF security has been handled in jQuery?
Thanks in advance.
Regards
ronit_rc Tuesday, January 3, 2012 7:17 AM
ronit_rc, there has to be a difference between the request you send with jQuery and the request you send with the WCF Test Client. Make sure you're talking to the same endpoint with the test client (same HTTP headers, address). As far as the WCF service is concerned, the requests are simply bytes coming over a TCP port, and it doesn't make any distinction between a "normal" WCF client or any other client (such as jQuery).
In my example, I did disable security at the WCF service, if you want to use that request I sent you'll need to do that as well. I don't think there is any example where WS-Security is implemented using jQuery, because it uses some encryption algorithms which AFAIK aren't available in javascript.
Carlos Figueira Tuesday, January 3, 2012 5:05 PM
Hi Carlos,
I think I should give it a try by disabling security in the WCF service. How can I disable security at the WCF service?
ronit_rc Tuesday, January 3, 2012 6:42 PM
- Take a look at the code for this post at. The web.config has a <binding> element for wsHttpBinding which disables security.
Carlos Figueira Tuesday, January 3, 2012 7:38 PM
Hi Carlos,
Finally I succeeded in calling the "wsHttp"-bound WCF service method after disabling security at the service end. So, as I understand it, I have to compromise on security if I want to call a "wsHttp"-bound WCF service method in jQuery.
One thing I have noticed: a "wsHttp" response cannot be parsed with "jquery-1.4.1.min.js". It parses a blank responseXML and results in a "parsererror", although there is valid "responseText". You need to use "jquery-1.5.1.min.js" or later in order to parse a "wsHttp" response.
Thanks a lot for the help and getting the things done.
Regards
ronit_rc Wednesday, January 4, 2012 12:20 PM
C# Compiler Error Message
CS0027 Keyword ‘this’ is not available in the current context
Reason for the Error
You will receive this error if you use the this keyword outside of a property, instance method, or constructor, for example in a field initializer. To see it, try compiling the below code snippet.
using System;

public class DeveloperPublish
{
    public int result = this.Function1();

    public int Function1()
    {
        return 1;
    }

    public static void Main()
    {
        Console.WriteLine("Main");
    }
}
You will receive the error "CS0027 Keyword 'this' is not available in the current context" because this.Function1() is used in the wrong place: a field initializer, where the this reference is not yet available.
Solution
To resolve this error, you'll need to modify your code so that the use of the this keyword is moved inside a property, method, or constructor. | https://developerpublish.com/c-error-cs0027-keyword-this-is-not-available-in-the-current-context/ | CC-MAIN-2022-05 | en | refinedweb |
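For the CS0027 example above, one minimal fix (assuming the intent is to keep result initialized from Function1) is to move the initialization into a constructor, where this is available:

```csharp
using System;

public class DeveloperPublish
{
    public int result;

    public DeveloperPublish()
    {
        // 'this' is valid inside a constructor
        result = this.Function1();
    }

    public int Function1()
    {
        return 1;
    }

    public static void Main()
    {
        Console.WriteLine("Main");
    }
}
```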
Ticket #2024 (closed task: fixed)
Using CherryPy Filters to handle the same parameter on each controller method
Description
I was not able to find a proper section in the documentation pages, so I decided to open a ticket :/ You could consider this a recipe/howto. I hope it will make it into the official/rough docs. It took me quite some time to find the appropriate pointers/hints on how to work with filters....
Problem
You want to add the same GET/POST parameter to each controller method, but using **kwargs is cumbersome. Imagine you want to be able to change the "skin" variable on any controller.
Solution
Using CherryPy filters you can intercept the request before it reaches the controller method and add/remove parameters from the request object. You could then, for example, put this parameter into the session.
So:
- Write a CherryPy filter
- Append the filter to a controller. Note that the filter is also active for all controllers inside that controller!
The Filter class
from cherrypy.filters.basefilter import BaseFilter
from cherrypy import request, session

class SkinFilter(BaseFilter):
    """
    If the request parameters contain a "skin" variable, pull it
    out of the request and put it into the session
    """
    __detected_skin = None

    def before_request_body(self):
        try:
            if "skin" in request.params:
                skin = request.params.pop("skin")
                if skin is not None:
                    self.__detected_skin = skin
        except Exception, e:
            log.warning(
                "Error in %s:\n%s" % (self.__class__.__name__, str(e))
            )

    def before_main(self):
        if self.__detected_skin is not None:
            session['skin'] = self.__detected_skin
Adding it to the controller
...
class Root(controllers.RootController):
    # attach cherrypy filters
    _cp_filters = [SkinFilter()]
...
The filter could also be attached to non-root controllers so only parts of you application handles those requests.
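To see the mechanics without a running CherryPy instance, here is a small stand-alone sketch (my addition, not from the ticket) of the parameter-popping logic, using plain dicts in place of the real request and session objects:

```python
def apply_skin_filter(params, session):
    # Mirrors SkinFilter: pull "skin" out of the request parameters
    # (before_request_body) and stash it in the session (before_main).
    skin = params.pop("skin", None)
    if skin is not None:
        session["skin"] = skin
    return params, session

params, session = apply_skin_filter({"skin": "dark", "page": "1"}, {})
print(params, session)  # {'page': '1'} {'skin': 'dark'}
```

The controller never sees the "skin" parameter afterwards; it only reads session['skin'].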
Change History
comment:3 Changed 10 years ago by Chris Arndt
- Status changed from new to closed
- Resolution set to fixed
Added to the doc wiki as
Oh. I forgot: the reason why I pop the parameter from the request is to prevent CherryPy from complaining if it's set to strictly check for parameters. | http://trac.turbogears.org/ticket/2024 | CC-MAIN-2019-22 | en | refinedweb |
Topic Link: HDU 5496, Beauty of Sequence
Problem description: A sequence is beautiful, and the beauty of an integer sequence is defined as follows: remove all but the first element from every consecutive group of equivalent elements of the sequence (i.e., the unique function in C++ STL); the summation of the remaining integers is the beauty of the sequence.
Test Instructions Analysis:
The problem asks for the sum of the beauty over all subsequences, where within a subsequence consecutive equal numbers are counted only once (each run of equal values counts as a single number).
Solution
When a problem can't be solved head-on, it helps to approach it from a different angle.
The first instinct is to enumerate subsequences and sum their beauties directly, but the number of subsequences is astronomical and no direct pattern emerged. So think about it from the other side: instead of summing over subsequences, compute each element's contribution to the final answer. A quick attempt shows this is feasible.
The other difficulty is that adjacent equal values within a subsequence are counted only once. Handle this by letting only the first element of such a run contribute: when considering an element's contribution, count only the subsequences that contain it and have no earlier element equal to it. Counting these is easiest from the opposite direction: take all subsequences containing the element and subtract the ones that do not meet the condition (see the code comments).
AC Code:
#include <iostream>
#include <cstdio>
#include <map>
using namespace std;
typedef long long LL;

const int MAXN = 1e5 + 10;
const int MOD = 1e9 + 7;

int a[MAXN];
int n;
int bin[MAXN];
map<int, int> mymap;

void table() {
    bin[0] = 1;
    for (int i = 1; i < MAXN; i++) bin[i] = bin[i - 1] * 2 % MOD;
}

void init() {
    mymap.clear();
}

int main() {
    int tc;
    table();
    scanf("%d", &tc);
    while (tc--) {
        init();
        scanf("%d", &n);
        LL ans = 0;
        for (int i = 1; i <= n; i++) {
            scanf("%d", a + i);
            // mymap[a[i]] is the number of subsequences of the first i-1
            // elements that end with the value a[i].
            // bin[n-1] counts all subsequences containing position i, while
            // bin[n-i] * mymap[a[i]] counts the ones that do not qualify.
            LL tmp = ((bin[n - 1] - (LL)bin[n - i] * mymap[a[i]]) % MOD + MOD) % MOD;
            ans = (ans + a[i] * tmp) % MOD;
            mymap[a[i]] += bin[i - 1];
            mymap[a[i]] %= MOD;
        }
        printf("%lld\n", ans);
    }
    return 0;
}
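As a sanity check (my addition, not part of the original post), the contribution formula can be verified against brute-force enumeration of all subsequences on a small array; the 0-based indexing below shifts the exponents slightly compared to the 1-based C++ code:

```python
from itertools import combinations

def beauty(seq):
    # Collapse consecutive equal elements (like C++ std::unique), then sum.
    total, prev = 0, object()
    for x in seq:
        if x != prev:
            total += x
        prev = x
    return total

def brute_force(a):
    n = len(a)
    return sum(beauty([a[i] for i in idx])
               for r in range(1, n + 1)
               for idx in combinations(range(n), r))

def contribution_sum(a):
    n = len(a)
    ans, cnt = 0, {}
    for i, x in enumerate(a):
        # Subsequences containing i with an equal element somewhere before i:
        bad = (2 ** (n - 1 - i)) * cnt.get(x, 0)
        ans += x * (2 ** (n - 1) - bad)
        cnt[x] = cnt.get(x, 0) + 2 ** i  # prefix subsequences ending in x
    return ans

print(brute_force([1, 2, 2, 3]), contribution_sum([1, 2, 2, 3]))  # 56 56
```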
HDU 5496 Beauty of Sequence | https://topic.alibabacloud.com/a/hdu-5496-beauty-of-sequence_8_8_31275750.html | CC-MAIN-2019-22 | en | refinedweb |
Questions tagged "admin" and "ojdbc query" on the TIBCO Community (vote and answer counts as listed):

- how to get row count from Tibco Direct SQL or JDBC Query activity (1 vote, 1 answer; David Abragimov answered on 2:15pm Apr 18, 2018; tags: sql, ojdbc query)
- Tibco BW/Admin (0 votes, 1 answer; ugalekishor99_gmail answered on 9:43am Oct 31, 2017; tags: AppManage, admin)
- I was trying to put a pattern text and after time and before time for onSearchLogFile method for TibcoRuntimeAgent microagent. I wanted to know the format used for the same for all the parameters. (0 votes, 1 answer; tristanchouxa answered on 8:09am Oct 18, 2017; tags: admin)
- Upgrade to Windows 10 (0 votes, 1 answer; maphosadx answered on 2:00am Apr 25, 2016; tags: admin, BW5, TRA5.10)
- How enabled register hawk product in TEA for the hawk that in built with TEA (0 votes, 2 answers; dwiz.kumar1988 answered on 6:33pm Apr 22, 2016; tags: BW6, admin, TEA)
- Not able to retrieve the WSDL from Administrator when the deployed connection use machine IP (0 votes, 0 answers; mhussein27 posted on 10:30pm Feb 16, 2016; tags: Web Services, admin)
- Issue while configuring Hawk rules on Unix servers. (0 votes, 1 answer; suman.chatterjee3 answered on 8:48pm Dec 13, 2015; tags: admin)
I'm working on a gauge-type indicator using the 7" LCD and the EMX/Cobra. It will have two pointers swinging from the top center of the display and deflecting +/- 60 degrees. Ideally I would like a refresh rate of 5-10 Hz.
I’ve been trying to get an idea of performance for the system using the code below.
With the Glide Window set to 800x480 and the image object set to 560x280, it takes approx 17 seconds to render and flush the image to the screen 30 times. This is much improved over the 35+ seconds my previous code was taking, which was repainting the entire 800x480 area.
My question is: Is this the expected performance for the EMX and the 7" display or am I doing something totally wrong?
Thanks
Brian
using System.Threading;
using Microsoft.SPOT;
using GHIElectronics.NETMF.Glide;
using GHIElectronics.NETMF.Glide.Display;
using GHIElectronics.NETMF.Glide.UI;

namespace GlideTest1
{
    public class Program
    {
        // This will hold the main window.
        static Window window;
        static long StartTick = 0;
        static long StopTick = 0;

        public static void Main()
        {
            // Add the following as a txt file Resource named "WindowX"
            // <Glide Version="1.0.4">
            //   <Window Name="window" Width="800" Height="480" BackColor="FFFFFF">
            //     <Image Name="image" X="120" Y="0" Width="560" Height="280" Alpha="255"/>
            //     <Button Name="button" X="0" Y="0" Width="80" Height="32" Alpha="255" Text="Button" Font="4" FontColor="000000" DisabledFontColor="808080" TintColor="000000" TintAmount="0"/>
            //   </Window>
            // </Glide>

            // Load the window
            window = GlideLoader.LoadWindow(Resources.GetString(Resources.StringResources.WindowX));

            // Initialize the image I will be drawing on
            Image image = (Image)window.GetChildByName("image");

            // Start timing
            StartTick = System.DateTime.Now.Ticks;

            Glide.MainWindow = window;
            for (int i = 0; i < 30; i++)
            {
                image.Bitmap.DrawEllipse(Colors.Blue, 20 + i, 20 + i, 50 - i, 50 - i);
                // also tried as a Rectangle to see if performance was impacted
                //image.Bitmap.DrawRectangle(Colors.Blue, 2, 20 + i, 20 + i, 50 - i, 50 - i, 0, 0, Colors.Red, 0, 0, Colors.Red, 0, 0, 100);
                image.Render();
                Glide.Flush(image);
                //Thread.Sleep(150); // desired refresh rate is 5-10Hz
            }

            // Stop timing
            StopTick = System.DateTime.Now.Ticks;

            // Print to debug how long the for loop operation took to complete.
            Debug.Print(((StopTick - StartTick) / 10000).ToString() + " mSec");

            Thread.Sleep(-1);
        }
    }
}
In this Google Flutter code example we are going to learn how to use the LimitedBox widget in Flutter.
limitedbox.dart
import 'package:flutter/material.dart';

class BasicLimitedBox extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text("LimitedBox Widget")),
      body: Center(
        child: LimitedBox(
          // we use maxWidth & maxHeight to limit the child
          maxWidth: 150,
          maxHeight: 150,
          // here, our Container child has no particular height or width
          child: Container(
            color: Colors.red,
          ),
        ),
      ),
    );
  }
}
If you have any questions or suggestions kindly use the comment box or you can contact us directly through our contact page below. | https://inducesmile.com/google-flutter/how-to-use-limitedbox-widget-in-flutter/ | CC-MAIN-2019-22 | en | refinedweb |
public class Message extends Object
Represents a message that can be sent via Firebase Cloud Messaging (FCM). Contains payload information as well as the recipient information. The recipient information must contain exactly one token, topic, or condition parameter. Instances of this class are thread-safe and immutable. Use Message.Builder to create new instances.
Public Methods
public static Message.Builder builder()
Creates a new Message.Builder.
Returns: a Message.Builder instance.
A daemon for asynchronous job processing
Hades is a multithreaded asynchronous job processing class. It allows you to register different job types and associate processor functions or classes with those job types.
A job processor function needs to expect job data as input. If a job processor is a class, it needs to have a run() method that expects job data as input.
Jobs are discovered by watching a folder on the file system for new files. If a file is correctly formatted, it is parsed and processed, passing the parsed data to the registered processor.
A file needs to be a json document with a ‘type’ key matching one of the registered job types.
When the class is initialized it expects a path to the folder to watch. The folder needs to have three subfolders: ‘in’ for incoming jobs, ‘cur’ for storing successfully processed jobs (this can be disabled by passing save_successful=False to the class), and ‘err’ for storing failed jobs (this can be disabled by passing save_failed=False to the class).
By default a pool of 5 processing threads will be started. This can be tuned by passing threads=N to the class.
If a job processing fails, it will be reprocessed few more times as defined by retries attribute (default is 3). If this is not desired it can be disabled by passing retries=0 to the class.
The usage is very simple. You need to define processors, initialize Hades with the path to the folder to watch, register processors with Hades and call start() method to start all the threads. By default Hades runs in interactive mode. If you want to run it in a daemon mode, just pass ‘daemon=True’ to the start() method.
import hades

class Download:
    def run(self, task):
        print("Downloading page: {0}".format(task['url']))
        return True

def send_email(task):
    print("Email for: {0}".format(task['rcpt']))
    return True

if __name__ == '__main__':
    hades = hades.Hades('/tmp/jobs')
    hades.register({'email': send_email})
    hades.register({'download': Download})
    hades.start(daemon=True)
To send jobs, just serialize a JSON object containing the data for your job and save it into the 'in' subfolder of the watched directory. Hades will pick it up from there.
import json

email = {'type': 'email', 'from': 'no@mail.com', 'rcpt': 'test@example.com',
         'subject': 'Test email', 'body': 'Hi there!'}
download = {'type': 'download', 'url': '', 'file': '/tmp/miljan.org.html'}

json.dump(email, open("/tmp/jobs/in/email", "w"))
json.dump(download, open("/tmp/jobs/in/download", "w"))
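For illustration only (this is not how hades is implemented internally, and process_once is a made-up name), the watch-folder dispatch described above boils down to something like:

```python
import json
import os
import shutil

def process_once(base, processors):
    # Parse each JSON file in <base>/in, dispatch on its 'type' key,
    # then move the file to cur/ on success or err/ on failure.
    for name in os.listdir(os.path.join(base, "in")):
        src = os.path.join(base, "in", name)
        try:
            with open(src) as f:
                job = json.load(f)
            handler = processors[job["type"]]
            # a registered class is instantiated and its run() method called
            ok = handler().run(job) if isinstance(handler, type) else handler(job)
            dest = "cur" if ok else "err"
        except Exception:
            dest = "err"
        shutil.move(src, os.path.join(base, dest, name))
```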
hi I'm new
I got some questions about bubble-sort, so I registered here.
I'm currently working on this program (bubble-sort) so I looked up some example codes and I found this here, which is practically the right thing I'm looking for.
But I have some questions, like
1. How would printf("%2d. Pass: ", i-1); be with cout, since I'm not really familiar with printf as I always worked with cout.
my guess would be cout<<("Pass: ", i-1); but then there is no space between each number
printf("%3d", z[k]); I think it should be cout<<z[k];
2. How would I program this when I want to type numbers myself instead of random numbers.
#include <iostream.h>
#include <stdlib.h>
#include <time.h>

void bubble_sort(int n, int z[])
{
    int i, j, x, k;
    for (i = 2; i <= n; i++)
    {
        for (j = n; j >= i; j--)
            if (z[j-1] > z[j])
            {
                x = z[j-1];
                z[j-1] = z[j];
                z[j] = x;
            }
        printf("%2d. Pass: ", i-1);
        for (k = 1; k <= 10; k++)
            printf("%3d", z[k]);
        printf("\n");
    }
}

int main()
{
    int i, k, number[11];
    srand(time(NULL));
    for (i = 1; i <= 8; i++)
        number[i] = rand() % 100;
    printf("------ before bubble_sort ------------------------------\n");
    for (k = 1; k <= 8; k++)
        printf("%3d", number[k]);
    cout << ("\n\n");
    bubble_sort(8, number);
    printf("------ after bubble_sort ------------------------------\n");
    for (k = 1; k <= 8; k++)
        printf("%3d", number[k]);
    cout << ("\n");
}
oh and I get 02293616 or sth. like that on each pass, how can I remove it? | https://www.daniweb.com/programming/software-development/threads/396343/bubble-sort | CC-MAIN-2017-26 | en | refinedweb |
I am having trouble calling my addsomethingup() function in my program. I keep getting the error "too few arguments". Can anyone help me understand why this is not working, please?
#include <iostream>
#include <string>
#include <iomanip>
using namespace std;
using std::cout;

//
// CLASS DECLARATION SECTION
//
class EmployeeClass
{
public:
    void ImplementCalculations(string EmployeeName, int hours, float wage);
    void DisplayEmployInformation();
    void Addsomethingup(EmployeeClass, EmployeeClass, EmployeeClass);

    string EmployeeName;
    int hours;
    float wage;
    float basepay;
    int overtime_hours;
    float overtime_pay;
    float overtime_extra;
    float iTotal_salaries;
    float iIndividualSalary;
    int iTotal_hours;
    int iTotal_OvertimeHours;
};

EmployeeClass Employ1;
EmployeeClass Employ2;
EmployeeClass Employ3;

int main()
{
    system("cls");

    // display welcome information
    cout << "\nWelcome to the Employee Pay Center\n\n";

    /* Use this section to define your objects. You will have one object per
       employee. You have only three employees. The format is your class name
       and your object name. */

    /* Here you will prompt for the first employee's information.
       Prompt the employee name, hours worked, and the hourly wage.
       For each piece of information, you will update the appropriate
       class member defined above.

       Example of Prompts
       Enter the employee name =
       Enter the hours worked =
       Enter his or her hourly wage = */

    // Enter employees information
    cout << "Enter the first employee's name = ";
    cin >> Employ1.EmployeeName;
    cout << "\nEnter the hours worked = ";
    cin >> Employ1.hours;
    cout << "\nEnter his or her hourly wage = ";
    cin >> Employ1.wage;

    /* Here you will prompt for the second employee's information,
       as above. */
    cout << "\nEnter the second employee's name = ";
    cin >> Employ2.EmployeeName;
    cout << "\nEnter the hours worked = ";
    cin >> Employ2.hours;
    cout << "\nEnter his or her hourly wage = ";
    cin >> Employ2.wage;

    /* Here you will prompt for the third employee's information,
       as above. */
    cout << "\nEnter the third employee's name = ";
    cin >> Employ3.EmployeeName;
    cout << "\nEnter the hours worked = ";
    cin >> Employ3.hours;
    cout << "\nEnter his or her hourly wage = ";
    cin >> Employ3.wage;

    /* Here you will implement a function call to implement the employee
       calculations for each object defined above. You will do this for each
       of the three employees or objects. The format for this step is:
       objectname.functionname(objectname.name, objectname.hours, objectname.wage); */
    cout << endl;
    Employ1.ImplementCalculations(Employ1.EmployeeName, Employ1.hours, Employ1.wage);
    Employ2.ImplementCalculations(Employ2.EmployeeName, Employ2.hours, Employ2.wage);
    Employ3.ImplementCalculations(Employ3.EmployeeName, Employ3.hours, Employ3.wage);
}
// return 0;

void EmployeeClass::ImplementCalculations(string EmployeeName, int hours, float wage)
{
    // Initialize overtime variables
    overtime_hours = 0;
    overtime_pay = 0.0;
    overtime_extra = 0.0;

    if (hours > 40) // calculate overtime if hours greater than 40
    {
        basepay = (40 * wage);
        overtime_hours = hours - 40;
        overtime_pay = wage * 1.5;
        overtime_extra = overtime_hours * overtime_pay;
        iIndividualSalary = (overtime_extra + basepay);
        DisplayEmployInformation();
    } // if (hours > 40)
    else
    {
        basepay = hours * wage;
        iIndividualSalary = overtime_pay + basepay;
        DisplayEmployInformation();
    }
} // End of Primary Function

void EmployeeClass::DisplayEmployInformation()
// This function displays all the employee output information.
{
    cout << "\n\n";
    cout << "Employee Name ......................... " << EmployeeName << endl;
    cout << "Base Pay .............................. " << basepay << endl;
    cout << "Hours in Overtime ..................... " << overtime_hours << endl;
    cout << "Overtime Pay Amount ................... " << overtime_hours * overtime_pay << endl;
    cout << "Total Pay ............................. " << iIndividualSalary << endl;
    Addsomethingup(Employ1, Employ2, Employ3);
} // END OF Display Employee Information

void EmployeeClass::Addsomethingup(EmployeeClass Employ1, EmployeeClass Employ2, EmployeeClass Employ3)
{
    // Adds three objects of class Employee passed as function arguments and
    // saves them as the calling object's data member values.
    /* Add the total hours for objects 1, 2, and 3.
       Add the salaries for each object.
       Add the total overtime hours. */
    iTotal_hours = Employ1.hours + Employ2.hours + Employ3.hours;
    iTotal_salaries = Employ1.iIndividualSalary + Employ2.iIndividualSalary + Employ3.iIndividualSalary;
    iTotal_OvertimeHours = Employ1.overtime_hours + Employ2.overtime_hours + Employ3.overtime_hours;

    /* Then display the information below.
       %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
       %%%% EMPLOYEE SUMMARY DATA %%%%%%%%%%%%%%%%%%%%%%
       %%%% Total Employee Salaries ..... = 576.43
       %%%% Total Employee Hours ........ = 108
       %%%% Total Overtime Hours......... = 5
       %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% */
    cout << endl;
    cout << "%%%%%%%%%%%%%%%% EMPLOYEE SUMMARY DATA %%%%%%%%%%%%%%%%" << endl;
    cout << endl;
    cout << "%%%% Total Employee Salaries ........ = " << iTotal_salaries << endl;
    cout << "%%%% Total Employee Hours ........... = " << iTotal_hours << endl;
    cout << "%%%% Total Overtime Hours ........... = " << iTotal_OvertimeHours << endl;
    cout << endl;
    // End of function
};
Capital Budgeting Concepts
Primary Task Response: Within the Discussion Board area, write 500–600 words that respond to the following questions with your thoughts, ideas, and comments. This will be the foundation for future discussions by your classmates. Be substantive and clear, and use examples to reinforce your ideas.
Part 1
Summative Discussion Board
Review and reflect on the knowledge that you have gained in this course.
Part 2:
- What is capital budgeting, and what role does it play in long-term investment decisions?
- What are the basic capital budgeting models, and which ones are considered the most reliable and why?
- What is net present value (NPV), how is it calculated, and what is the basic premise of its decision rule?
- What is the internal rate of return (IRR), how is it calculated, and what is the basic premise of its decision rule?
- What is the modified internal rate of return (MIRR), how is it calculated, and what is the basic premise of its decision rule?
- How is the weighted average cost of capital (WACC) employed in capital budgeting decisions, and should it be used for all projects regardless of the riskiness of each project?
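To make the NPV and IRR mechanics asked about above concrete, here is a small worked sketch in Python. The cash flows, the 10% discount rate, and the bisection search for IRR are illustrative assumptions, not part of the assignment:

```python
# Toy numbers: a project costs 1,000 today and returns 400 at the end of
# each of the next 4 years; the discount rate (cost of capital) is 10%.
cash_flows = [-1000, 400, 400, 400, 400]

def npv(rate, cfs):
    """Net present value: discount each cash flow to time zero and sum."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=0.0, hi=1.0, steps=100):
    """Internal rate of return: the rate where NPV crosses zero (bisection)."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if npv(mid, cfs) > 0:
            lo = mid
        else:
            hi = mid
    return mid

project_npv = npv(0.10, cash_flows)
project_irr = irr(cash_flows)
print(round(project_npv, 2))   # 267.95 -> positive, so the NPV rule says accept
print(round(project_irr, 3))   # about 0.219, i.e. ~21.9%, above the 10% hurdle rate
```

The decision rules follow directly: accept when NPV is positive, or when IRR exceeds the discount rate used (here, the assumed 10% cost of capital).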
Be sure to document your posts with in-text citations, credible sources, and properly listed references.
I wanted something closer to "Engineering notation".... I'd like to be able to make the width a parameter (instead of fixed at 9). Being able to go as low as 7 (or even 6) would be way cool.
this code should meet those requirements. at least, the mantissa width is fixed, the exponent width varies. it should be easy enough to modify this code to suit your needs, if you wish.
i used your test suite, too.
#!/usr/bin/perl
use strict;
use warnings;
$|++;
Main();
exit;
## adapted from code found at:
sub eng
{
my( $num, $digits )= @_;
## default to smallest number of digits allowing fixed width mantissa (4)
$digits= defined $digits && 3 < $digits
? $digits
: 4;
my $neg;
if( 0 > $num )
{
$neg= 'true';
$num= -$num;
}
0 == $num and return sprintf '+%.*fe+%s' => $digits - 1, $num, 0;
my $exp= 0 != $num
## perl's log() is natural log, convert to common log
? int( log($num) / log(10) )
## short-circuit: can't do log(0)
: 0;
## tricky integer casting ahead...
$exp= 0 < $exp
? int( ( int( $exp / 3 ) ) * 3 )
: int( int( ( -$exp + 3 ) / 3 ) * -3 );
$num *= 10 ** -$exp;
if( 1000 <= $num )
{
$num /= 1000;
$exp += 3;
}
elsif( 100 <= $num )
{
$digits -= 2;
}
elsif( 10 <= $num )
{
$digits -= 1;
}
0 <= $exp
and $exp= '+' . $exp;
return ( $neg ? '-' : '+' )
. sprintf '%.*fe%s' => $digits - 1, $num, $exp;
}
sub Main
{
my $digits= 2;
for my $exp ( -101..-98, -11, -10..11, 98..101 )
{
for my $sign ( '', '-' )
{
my $num= 0 + ( $sign . "5.555555555e" . $exp );
printf "%-20s (%s)\n", $num, eng( $num, $digits );
}
}
for my $exp ( -10..11 )
{
for my $sign ( '', '-' )
{
my $num= 0 + ( $sign . "1e" . $exp );
printf "%-20s (%s)\n", $num, eng( $num, $digits );
printf "%-20s (%s)\n", 0, eng( 0, $digits )
if 1 == $num;
}
}
}
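For comparison, here is the core idea, snapping the exponent down to a multiple of 3, in a short Python sketch. It is not a port of the Perl code above and skips its fixed-mantissa-width adjustments:

```python
import math

def eng(num, digits=4):
    """Format num in engineering notation (exponent a multiple of 3)."""
    if num == 0:
        return f"+{0:.{digits - 1}f}e+0"
    sign = "-" if num < 0 else "+"
    num = abs(num)
    exp = math.floor(math.log10(num))
    exp -= exp % 3                      # snap exponent down to a multiple of 3
    mantissa = num / 10 ** exp          # rescale so 1 <= mantissa < 1000
    return f"{sign}{mantissa:.{digits - 1}f}e{'+' if exp >= 0 else ''}{exp}"

print(eng(5.555555555e-11))   # +55.556e-12
print(eng(-0.005))            # -5.000e-3
```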
~Particle *accelerates*
In reply to Re: Display floating point numbers in compact, fixed-width format
by particle
in thread Display floating point numbers in compact, fixed-width format.
Can someone explain what "get" and "set" functions are?
"Get & Set" are kind of informal terms used to describe functions that allow users of the class to access the class's private member variables for reading (Get) and writing (Set). Typically, you'd have something like this:
class MyClass{
public:
    const int Get() const{ return m_variable; }
    void Set( int newValue ){ m_variable = newValue; }
private:
    int m_variable;
};
If you wanted, you could have a Get and a Set for each member variable. Maybe there are some member variables that the user doesn't need access to though (they might be used internally in the class as counters or something).
This might look like a lot of work when you could just make the member variable public and set it directly. However, this way you have more control. For example, maybe you want m_variable to only be in the range 0 - 10, in which case you'd just change the Set function to check the number you give it before setting the variable:
void MyClass::Set( int newNumber )
{
    if ( newNumber < 10 )
    {
        m_variable = ( newNumber > 0 ? newNumber : 0 );
    }
    else
    {
        m_variable = 10;
    }
}
There's also some things about encapsulation, which may not mean much to you at the moment. Basically, you want to keep the actual data in the class (i.e. the member variables) as restricted as possible, but have nice functions for getting at it. This helps if you decide to reorganize the internals of the class because you can just change the way the functions are implemented, but users of the class needn't know that anything has happened at all.
So, say I have a class that stores lists of numbers and has a function that gives you the average of the numbers; we'll call it ave(). I might have originally thought that it was going to be a good idea to store all the numbers and have another variable that held the average. Then, when the user calls ave(), just return the value of that variable (like the Get() function above). One day, I might decide that this is a terrible way to implement my class and that I should get rid of the variable that stores the average and just calculate the average from scratch every time the ave() function is called. If everything's properly encapsulated, then the user will not notice any difference; they won't have to change any of their code at all. All the places where they call ave() will still get the average that they're expecting. If you'd just exposed the member variable, then they'd be in all kinds of trouble when you suddenly decided to get rid of it, because all their code that uses the class would have to be altered. Not good.
I hope that all makes some sense! :o)
These functions stem from the guideline that data members should be kept private to avoid losing control over your data. For example:
#include <iostream>
#include <string>

class CellPhone {
public:
    std::string number;

    CellPhone(const std::string& number): number(number) {}
};

int main()
{
    CellPhone myPhone("555-123-4567");

    std::cout<< myPhone.number <<'\n';

    myPhone.number = "Haha, this isn't a phone number";

    std::cout<< myPhone.number <<'\n';
}
All hell can break loose when you expose a class' state for any use, so the usual advice is to make such data members private:
class CellPhone {
private:
    std::string number;
public:
    CellPhone(const std::string& number): number(number) {}
};
Now the issue becomes accessing the data at all. In making number private, it becomes completely inaccessible to the outside, which may not be what's wanted. Perhaps we want the outside world to see it but not change it. This could be a use for a getter:
class CellPhone {
private:
    std::string number;
public:
    CellPhone(const std::string& number): number(number) {}

    std::string Number() const { return number; }
};

int main()
{
    CellPhone myPhone("555-123-4567");

    std::cout<< myPhone.Number() <<'\n';
}
The data member is exposed, but read-only. You might also want to support modification, but still want to maintain enough control to avoid chaos. For example, validating the new phone number. This is the purpose of a setter:
class CellPhone {
private:
    std::string number;
public:
    CellPhone(const std::string& number): number(number) {}

    std::string Number() const { return number; }

    void Number(const std::string& newNumber)
    {
        if (!ValidPhoneNumber(newNumber))
            throw std::invalid_argument("Invalid phone number");

        number = newNumber;
    }
};

int main()
{
    CellPhone myPhone("555-123-4567");

    std::cout<< myPhone.Number() <<'\n';

    try {
        myPhone.Number("Haha, this isn't a phone number");
    }
    catch (const std::invalid_argument& ex) {
        std::cerr<< ex.what() <<'\n';
    }

    std::cout<< myPhone.Number() <<'\n';
}
It's all about controlled access to an object's state.
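The same controlled-access idea is language-independent. For instance, a rough Python equivalent of the getter/setter pair uses properties; the 10-digit check here is just a stand-in validation, not part of the thread above:

```python
class CellPhone:
    def __init__(self, number):
        self._number = None
        self.number = number              # routed through the setter below

    @property
    def number(self):                     # getter: read access only
        return self._number

    @number.setter
    def number(self, new_number):         # setter: validate before storing
        if sum(c.isdigit() for c in new_number) != 10:
            raise ValueError("Invalid phone number")
        self._number = new_number

phone = CellPhone("555-123-4567")
print(phone.number)                       # 555-123-4567
```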
Well I thought they were preset functions in a library. So set is for inputting into the member from either the user or set by the programmer and the get is to input it to the user??
Thanks for the info. I understand most of it now though
As said, the setter functions are intended to modify class member variables in a specific instance of an object of the class. Conversely, getter functions are intended to access the values of those data members. This allows you to specify that the data members are private, keeping outside code from modifying those values without knowing any limitations that the class code may enforce upon the data, such as range checking for numeric values and such. As Narue indicated, allowing others to modify data members of your class directly could easily be a "bad thing".
Typically it works like this:
class Account
{
private:
    double m_LastDepositTime; // Julian date+time value
    double m_LastDepositAmt;  // Amount of last deposit

    Account& operator=(const Account&); // Assignment not allowed

public:
    Account() : m_LastDepositTime(0.0), m_LastDepositAmt(0.0) {}
    ~Account() {}

    // Setter function - this could be called makeDeposit(double,double) for better
    // readability of code.
    void setLastDeposit( double dateTime, double amount );

    // Getter functions
    double getLastDepositAmt() const { return m_LastDepositAmt; }
    double getLastDepositTime() const { return m_LastDepositTime; }
};
Thanks. Now I think I totally get it. Just to figure out how to assign members now, but that's another thread I guess...
I was experimenting with some datetime code in Python so I could get a time.sleep-like delay on one piece of code while letting other code run.
Here is my code:
import datetime, pygame

pygame.init()
secondtick = pygame.time.Clock()

timestr = str(datetime.datetime.now().time())
timelist = list(timestr)
timex = 1
while timex <= 6:
    timelist.pop(0)
    timex += 1
timex2 = 1
while timex2 <= 7:
    timelist.pop(2)
    timex2 += 1
secondstr1 = str("".join(timelist))

while 1:
    timestr = str(datetime.datetime.now().time())
    timelist = list(timestr)
    timex = 1
    while timex <= 6:
        timelist.pop(0)
        timex += 1
    timex2 = 1
    while timex2 <= 7:
        timelist.pop(2)
        timex2 += 1
    secondstr2 = str("".join(timelist))
    x = 1
    if int(secondstr2) - x == int(secondstr1):
        print(x)
        x += 1
C:\Python32\python.exe "C:/Users/depia/PycharmProjects/testing/test.py"
Traceback (most recent call last):
File "C:/Users/depia/PycharmProjects/testing/test.py", line 31, in <module>
timelist.pop(2)
IndexError: pop index out of range
Process finished with exit code 1
-- code --
time.sleep(1)
secondstr2 = str("".join(timelist))
-- more code --
C:\Python32\python.exe "C:/Users/depia/PycharmProjects/testing/test.py"
1
You can use pygame.time.get_ticks to control time.
To delay your event for 1 second (1000ms) first you set
end_time = pygame.time.get_ticks() + 1000
and later in loop you check
if pygame.time.get_ticks() >= end_time: do_something()
or even (to repeat it every 1000ms)
if pygame.time.get_ticks() >= end_time:
    do_something()
    end_time = pygame.time.get_ticks() + 1000
    # end_time = end_time + 1000
It is a very popular method to control timing for many different elements in the same loop.
Or you can use pygame.time.set_timer() to create your own event every 1000ms.
First set
pygame.time.set_timer(pygame.USEREVENT+1, 1000)
and later in loop you check
for event in pygame.event.get():
    if event.type == pygame.USEREVENT+1:
        do_something()
It is useful if you have to do something periodically.
BTW: use
0 to turn off event
pygame.time.set_timer(pygame.USEREVENT+1, 0)
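As a side note on the original question: the string slicing and pop() calls used to pull the seconds out of the time string can be avoided entirely, since datetime already exposes the components as integers. A minimal sketch:

```python
from datetime import datetime

now = datetime.now()
seconds = now.second        # already an int in 0-59, no string surgery needed
print(seconds)
```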
Hello, I am new to Java programming and I just got the NetBeans IDE. One problem that I've been having with Java is the input.
package learning;

public class Main {

    public static void main(String[] args) {
        int num1 = 0, num2 = 0;
        cin >> num1 >> num2; // How do I do this in Java??
        System.out.println((num1+num2));
    }
}
How do I do that in Java? In C++, it's too easy. Mainly you never need to use anything other than cin and getline... What do I use in Java??
Thanks
Hi there
I wanna print This is a "test" in C code.
Thanks a lot
Test what, our patience? Seriously, even if anyone here were inclined to simply provide code to you - which we are not - you would need to give a lot more detail about what you need.
One of the forum rules states that, when posting for help with a program, you must demonstrate due diligence - you need to show that you have made a good-faith effort to solve the problem yourself, and that you have reached an impasse you can't resolve. In other words, show us your code. If you don't have any code to show us, tell us what you need help in doing. Make an effort first, before asking for a hand-out.
Hi there
No.
I'll write that program for you, after I see you have deposited $1,000.00 USD in my PayPal account.
#include <stdio.h>
main()
{
printf("this is a %s",""test"");
return0;
}
The main purpose is to show "test" in output
is this true?
The main purpose is to show "test" in output
Presumably the purpose is to show 'this is a "test"' as the output. Unfortunately, the code is wrong even if you exclude the syntax error in your return statement. You must escape double quotes in a string:
printf("this is a %s", "\"test\"");
string sentence="This is a test";
printf("%s",&sentence);
//maybe these will. ...
One of the big new features in perl 5.8 is that we now have real working threads available to us through the threads pragma.
However, for us module authors who already have to support our modules on different versions of perl and different platforms, we now have to deal with another case: threads! This article will show you how threads relate to modules, how we can take old modules and make them thread-safe, and round off with a new module that alters perl's behavior of the "current working directory".
To run the examples I have shown here, you need perl 5.8 RC1 or later compiled with threads. On Unix, you can use Configure -Duseithreads -Dusethreads; on Win32, the default build will always have threading enabled.
How do threads relate to modules?
Threading in Perl is based on the notion of explicit shared data. That is, only data that is explicitly requested to be shared will be shared between threads. This is controlled by the threads::shared pragma and the ":shared" attribute. Witness how it works:
use threads;

my $var = 1;
threads->create(sub { $var++ })->join();
print $var;
If you are accustomed to threading in most other languages, (Java/C) you would expect $var to contain a 2 and the result of this script to be "2". However since Perl does not share data between threads, $var is copied in the thread and only incremented in the thread. The original value in the main thread is not changed, so the output is "1".
However, if we add in threads::shared and a ":shared" attribute, we get the desired result:
use threads;
use threads::shared;

my $var : shared = 1;
threads->create(sub { $var++ })->join();
print $var;
Now the result will be "2", since we declared $var to be a shared variable. Perl will then act on the same variable and provide automatic locking to keep the variable out of trouble.
This makes it quite a bit simpler for us module developers to make sure our modules are thread-safe. Essentially, all pure Perl modules are thread-safe because any global state data, which is usually what gives you thread-safety problems, is by default local to each thread.
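The contrast with share-by-default languages is worth seeing side by side. In Python, for example, module-level state is visible to every thread unless you deliberately isolate it, which is the opposite default to Perl's ithreads. A minimal sketch:

```python
import threading

counter = 0  # module-level state: shared across threads by default in Python

def bump():
    global counter
    counter += 1

worker = threading.Thread(target=bump)
worker.start()
worker.join()
print(counter)  # 1 -- the main thread sees the worker's change;
                # the equivalent Perl ithreads program would still print 0 here
```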
Definition of thread-safe levels
To define what we mean by thread-safety, here are some terms adapted from the Solaris thread-safety levels.
- thread-safe
- This module can safely be used from multiple threads. The effect of calling into a safe module is that the results are valid even when called by multiple threads. However, thread-safe modules can still have global consequences; for example, sending or reading data from a socket affects all threads that are working with that socket. The application has the responsibility to act sane with regards to threads. If one thread creates a file with the name file.tmp then another file which tries to create it will fail; this is not the fault of the module.
- thread-friendly
- Thread-friendly modules are thread-safe modules that know about and provide special functions for working with threads or utilize threads by themselves. A typical example of this is the core threads::queue module. One could also imagine a thread-friendly module with a cache to declare that cache to be shared between threads to make hits more likely and save memory.
- thread-unsafe
- This module can not safely be used from different threads; it is up to the application to synchronize access to the library and make sure it works with it the way it is specified. Typical examples here are XS modules that utilize external unsafe libraries that might only allow one thread to execute them.
Since Perl only shares when asked to, most pure Perl code probably falls into the thread-safe category. That doesn't mean you should trust it until you have reviewed the source code or it has been marked thread-safe by the author. Typical problems include using alarm(), mucking around with signals, working with relative paths, and depending on %ENV. However, remember that ALL XS modules that don't state anything fall into the definitive thread-unsafe category.
Why should I bother making my module thread-safe or thread-friendly?
Well, it usually isn't much work and it will make the users of your modules who want to use them in a threaded environment very happy. What? Threaded Perl environments aren't that common, you say? Wait until Apache 2.0 and mod_perl 2.0 become available. One big change is that Apache 2.0 can run in threaded mode and then mod_perl will have to be run in threaded mode; this can be a huge performance gain on some operating systems. So if you want your modules to work with mod_perl 2.0, taking a look at thread-safety levels is a good thing to do.
So what do I do to make my module thread-friendly?
A good example of a module that needed a little modification to work with threads is Michael Schwern's most excellent Test::Simple suite (Test::Simple, Test::More and Test::Builder). Surprisingly, we had to change very little to fix it.
The problem was simply that the test numbering was not shared between threads.
For example
use threads;
use Test::Simple tests => 3;

ok(1);
threads->create(sub { ok(1) })->join();
ok(1);
Now that will return
1..3
ok 1
ok 2
ok 2
Does it look similar to the problem we had earlier? Indeed it does; it seems like somewhere there is a variable that needs to be shared.
Now, reading the documentation of Test::Simple we find out that all the magic is really done inside Test::Builder; opening up Builder.pm we quickly find the following lines of code:
my @Test_Results = ();
my @Test_Details = ();
my $Curr_Test = 0;
Now we would be tempted to add use threads::shared and the ":shared" attribute:
use threads::shared;

my @Test_Results : shared = ();
my @Test_Details : shared = ();
my $Curr_Test : shared = 0;
However, Test::Builder needs to work back to Perl 5.4.4! Attributes were only added in 5.6.0 and the above code would be a syntax error in earlier Perls. And even if someone were using 5.6.0, threads::shared would not be available for them.
The solution is to use the runtime function share() exported by threads::shared, but we only want to do it for 5.8.0 and when threads have been enabled. So, let's wrap it in a BEGIN block and an if.
BEGIN {
    if($] >= 5.008 && exists($INC{'threads.pm'})) {
        require threads::shared;
        import threads::shared qw(share);
        share($Curr_Test);
        share(@Test_Details);
        share(@Test_Results);
    }
}
So, if we are on 5.8.0 or higher and threads has been loaded, we do the runtime equivalent of use threads::shared qw(share); and call share() on the variables we want to be shared.
Now let's find some examples of where $Curr_Test is used. We find sub ok {} in Test::Builder; I won't include it here, but only a smaller version which contains:
sub ok {
    my($self, $test, $name) = @_;

    $Curr_Test++;
    $Test_Results[$Curr_Test-1] = 1 unless($test);
}
Now, this looks like it should work, right? We have shared $Curr_Test and @Test_Results. Of course, things aren't that easy; they never are. Even if the variables are shared, two threads could enter ok() at the same time. Remember that not even the statement $Curr_Test++ is an atomic operation; it is just a shortcut for writing $Curr_Test = $Curr_Test + 1. So let's say two threads do that at the same time.
Thread 1: add 1 + $Curr_Test
Thread 2: add 1 + $Curr_Test
Thread 2: Assign result to $Curr_Test
Thread 1: Assign result to $Curr_Test
The effect would be that $Curr_Test would only be increased by one, not two! Remember that a switch between two threads could happen at ANY time, and if you are on a multiple CPU machine they can run at exactly the same time! Never trust thread inertia.
So how do we solve it? We use the lock() keyword. lock() takes a shared variable and locks it for the rest of the scope, but it is only an advisory lock, so we need to find every place where $Curr_Test is used and modified and is expected not to change. The ok() becomes:
sub ok {
    my($self, $test, $name) = @_;

    lock($Curr_Test);

    $Curr_Test++;
    $Test_Results[$Curr_Test-1] = 1 unless($test);
}
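For readers more at home in Python's threading module, the same guarded read-modify-write looks like this (a sketch, not part of Test::Builder; threading.Lock plays the role of Perl's advisory lock()):

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with counter_lock:      # like lock($Curr_Test): serialize the update
            counter += 1        # the read-modify-write can no longer interleave

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 40000 with the lock in place
```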
So, are we ready? Well, lock() was only added in Perl 5.5, so we need to add an else to the BEGIN clause to define a lock function if we aren't running with threads. The end result would be:
my @Test_Results = ();
my @Test_Details = ();
my $Curr_Test = 0;

BEGIN {
    if($] >= 5.008 && exists($INC{'threads.pm'})) {
        require threads::shared;
        import threads::shared qw(share);
        share($Curr_Test);
        share(@Test_Details);
        share(@Test_Results);
    }
    else {
        *lock = sub(*) {};
    }
}

sub ok {
    my($self, $test, $name) = @_;

    lock($Curr_Test);

    $Curr_Test++;
    $Test_Results[$Curr_Test-1] = 1 unless($test);
}
In fact, this is very like the code that has been added to Test::Builder to make it work nicely with threads. The only thing not correct is ok(), as I cut it down to what was relevant. There were roughly 5 places where lock() had to be added. Now the test code would print
1..3
ok 1
ok 2
ok 3
which is exactly what the end user would expect. All in all, this is a rather small change for this 1291-line module; we changed roughly 15 lines in a non-intrusive way, and the documentation and testcase code makes up most of the patch. The full patch is at
Altering Perl's behavior to be thread-safe: ex::threads::cwd
Some things change when you use threads; some things that you or a module might do are not like what they used to be. Most of the changes will be due to the way your operating system treats processes that use threads. Each process typically has a set of attributes, which include the current working directory, the environment table, the signal subsystem and the pid. Since threads are multiple paths of execution inside a single process, the operating system treats it as a single process and you have a single set of these attributes.
Yep. That's right - if you change the current working directory in one thread, it will also change in all the other threads! Whoops, better start using absolute paths everywhere, and all the code that uses your module might use relative paths. Aaargh...
Don't worry, this is a solvable problem. In fact, it's solvable by a module.
Perl allows us to override functions using the CORE::GLOBAL namespace. This will let us override the functions that deal with paths and set the cwd correctly before issuing the command. So let's start off:
package ex::threads::safecwd;

use 5.008;
use strict;
use warnings;
use threads::shared;

our $VERSION = '0.01';
Nothing weird here, right? Now, when changing and dealing with the current working directory one often uses the Cwd module, so let us make the Cwd module safe first. How do we do that?
1) use Cwd;
2) our $cwd = cwd; # our per thread cwd, init on startup from cwd
3) our $cwd_mutex : shared; # the variable we use to sync
4) our $Cwd_cwd = \&Cwd::cwd;
5) *Cwd::cwd = *my_cwd;

sub my_cwd {
6)     lock($cwd_mutex);
7)     CORE::chdir($cwd);
8)     $Cwd_cwd->(@_);
}
What's going on here? Let's analyze it line by line:
1. We include Cwd.
2. We declare a variable and assign to it the cwd we start in. This variable will not be shared between threads and will contain the cwd of this thread.
3. We declare a variable we will be using to lock for synchronizing work.
4. Here we take a reference to &Cwd::cwd and store it in $Cwd_cwd.
5. Now we hijack Cwd::cwd and assign to it our own my_cwd, so whenever someone calls Cwd::cwd, it will call my_cwd instead.
6. my_cwd starts off by locking $cwd_mutex so no one else will muck around with the cwd.
7. After that we call CORE::chdir() to actually set the cwd to what this thread is expecting it to be.
8. And we round off by calling the original Cwd::cwd that we stored in step 4 with any parameters that were handed to us.
In effect we have hijacked Cwd::cwd and wrapped it with a lock and a chdir so it will report the correct thing!
Now that cwd() is fixed, we need a way to actually change the directory. To do this, we install our own global chdir, simply like this:
*CORE::GLOBAL::chdir = sub {
    lock($cwd_mutex);

    CORE::chdir($_[0]) || return undef;
    $cwd = $Cwd_cwd->();
};
Now, whenever someone calls chdir(), our chdir will be called instead. In it, we start by locking the variable controlling access, then we try to chdir to the directory to see if it is possible; otherwise we do what the real chdir would do: return undef. If it succeeds, we assign the new value to our per-thread $cwd by calling the original Cwd::cwd().
The above code is actually enough to allow the following to work:
use threads;
use ex::threads::safecwd;
use Cwd;

chdir("/tmp");
threads->create(sub { chdir("/usr") })->join();
print cwd() eq '/tmp' ? "ok" : "nok";
The chdir("/usr") inside the thread will not affect the other thread's $cwd variable, so when cwd is called, we lock down the thread, chdir() to the location the thread's $cwd contains, and perform a cwd().
While this is useful, we need to get along and provide some more functions to extend the functionality of this module.
*CORE::GLOBAL::mkdir = sub {
    lock($cwd_mutex);
    CORE::chdir($cwd);

    if(@_ > 1) {
        CORE::mkdir($_[0], $_[1]);
    } else {
        CORE::mkdir($_[0]);
    }
};

*CORE::GLOBAL::rmdir = sub {
    lock($cwd_mutex);
    CORE::chdir($cwd);

    CORE::rmdir($_[0]);
};
The above snippet does essentially the same thing for both mkdir and rmdir. We lock the $cwd_mutex to synchronize access, then we chdir to $cwd and finally perform the action. Worth noticing here is the check we need to do for mkdir to be sure the prototype behavior for it is correct.
Let's move on with opendir, open, readlink, readpipe, require, rmdir, stat, symlink, system and unlink. None of these are really any different from the above, with the big exception of open. open has a weird special case since it can take both a HANDLE and an empty scalar for autovivification of an anonymous handle.
*CORE::GLOBAL::open = sub (*;$@) {
    lock($cwd_mutex);
    CORE::chdir($cwd);

    if(defined($_[0])) {
        use Symbol qw();
        my $handle = Symbol::qualify($_[0], (caller)[0]);
        no strict 'refs';

        if(@_ == 1) {
            return CORE::open($handle);
        } elsif(@_ == 2) {
            return CORE::open($handle, $_[1]);
        } else {
            return CORE::open($handle, $_[1], @_[2..$#_]);
        }
    }
Starting off with the usual lock and chdir(), we then need to check if the first value is defined. If it is, we have to qualify it to the caller's namespace. This is what would happen if a user does open FOO, "+>foo.txt". If the user instead does open main::FOO, "+>foo.txt", then Symbol::qualify notices that the handle is already qualified and returns it unmodified. Now, since $_[0] is a readonly alias we cannot assign over it, so we need to create a temporary variable and then proceed as usual.
Now, if the user used the new style open my $foo, "+>foo.txt", we need to treat it differently. The following code will do the trick and complete the function.
    else {
        if(@_ == 1) {
            return CORE::open($_[0]);
        } elsif(@_ == 2) {
            return CORE::open($_[0], $_[1]);
        } else {
            return CORE::open($_[0], $_[1], @_[2..$#_]);
        }
    }
};
Wonder why we couldn't just assign $_[0] to $handle and unify the code path? You see, $_[0] is an alias to the $foo in open my $foo, "+>foo.txt", so CORE::open will correctly work. However, if we do $handle = $_[0] we take a copy of the undefined variable and CORE::open won't do what I mean.
So now we have a module that allows you to safely use relative paths in most cases and vastly improves your ability to port code to a threaded environment. The price we pay for this is speed, since every time you do an operation involving a directory you are serializing your program. Typically, you never do those kinds of operations in a hot path anyway. You might do work on your file in a hot path, but as soon as we have gotten the filehandle no more locking is done.
A couple of problems remain. Performance-wise, there is one big problem with system(): since we don't get control back until the CORE::system() returns, all path operations will hang waiting for that. To solve that we would need to revert to XS and do some magic with regard to the system call. We also haven't been able to override the file test operators (-x and friends), nor can we do anything about qx{}. Solving that problem requires working up and down the optree using B::Generate and B::Utils. Perhaps a future version of the module will attempt that together with a custom op to do the locking.
Conclusion
Threads in Perl are simple and straightforward; as long as we stay in pure Perl land, everything behaves just about how we would expect it to. Converting your modules should be a simple matter of programming without any big wizardly things to be done. The important thing to remember is to think about how your module could possibly take advantage of threads to make it easier to use for the programmer.
Moving over to XS land is altogether different; stay tuned for the next article, which will take us through the pitfalls of converting various kinds of XS modules to thread-safe and thread-friendly levels.
Using User-Defined Exceptions
If you want users to be able to programmatically distinguish between some error conditions, you can create your own user-defined exceptions. The .NET Framework provides a hierarchy of exception classes ultimately derived from the base class Exception. Each of these classes defines a specific exception, so in many cases you only have to catch the exception. You can also create your own exception classes by deriving from the ApplicationException class.
When creating your own exceptions, it is good coding practice to end the class name of the user-defined exception with the word "Exception." It is also good practice to implement the three recommended common constructors, as shown in the following example.
Note In situations where you are using remoting, you must ensure that the metadata for any user-defined exceptions is available at the server (callee) and to the client (the proxy object or caller). For example, code calling a method in a separate application domain must be able to find the assembly containing an exception thrown by a remote call. For more information, see Best Practices for Handling Exceptions.
In the following example, a new exception class, EmployeeListNotFoundException, is derived from System.ApplicationException. Three constructors are defined in the class, each taking different parameters.
Imports System

Public Class EmployeeListNotFoundException
    Inherits ApplicationException

    Public Sub New()
    End Sub 'New

    Public Sub New(message As String)
        MyBase.New(message)
    End Sub 'New

    Public Sub New(message As String, inner As Exception)
        MyBase.New(message, inner)
    End Sub 'New
End Class 'EmployeeListNotFoundException

[C#]
using System;

public class EmployeeListNotFoundException : ApplicationException
{
    public EmployeeListNotFoundException()
    {
    }

    public EmployeeListNotFoundException(string message)
        : base(message)
    {
    }

    public EmployeeListNotFoundException(string message, Exception inner)
        : base(message, inner)
    {
    }
}
See Also
Using the Try/Catch Block to Catch Exceptions | Using Specific Exceptions in a Catch Block | Best Practices for Handling Exceptions | Exception Handling Fundamentals
From Documentation
ZK 5.0.4 is a release focusing on memory improvements and introducing requested new features. In addition to a significant reduction in memory usage ZK 5.0.4 also introduces many new features such as communication between iFrames, new horizontal and vertical layouts and enhanced components such as the slider and combobox.
Memory usage improvements
In ZK 5.0.4 many optimizations have been introduced which significantly reduce the amount of memory consumed by ZK. Memory usage has been reduced by between 40% and 70%, depending on the component tested. When testing with our Sandbox application, a grand total of 63% of memory was saved by upgrading to ZK 5.0.4.
For more information and the test results please look at ZK 5.0.4's memory improvements.
Refined horizontal and vertical layouts
Two new components, Hlayout and Vlayout, have been introduced to give developers more powerful options when laying out their controls. The implementation of Vlayout and Hlayout uses HTML div tags to render content, so the size of the output is reduced and rendering is approximately twice as fast.
For more information please take a look at ZK Component Reference: Hlayout, Vlayout, and Jumper Chen's blog: "Two new layout components in ZK 5.0.4, Hlayout and Vlayout".
Namespace shortcuts
ZK 5.0.4 introduces namespace shortcuts, meaning you do not have to specify the full namespace when writing your ZUL files. For example:
<n:html xmlns: <n:head> </n:head> </n:html>
For a complete list of namespace shortcuts, please click here.
echoEvent supports any object type
Previously, the function call Events.echoEvent(String, Component, String) only supported String data. Since ZK 5.0.4, a new function Events.echoEvent(String, Component, Object) supports any Object type, giving developers more flexibility.
For more information please take a look at ZK Developer's Reference: Event Firing.
Slider now supports clicking to increase and decrease the value
ZK 5.0.4 introduces new slider functionality enabling users to increment and decrement the slider by clicking on the position they desire, replicating the behaviour of desktop-based sliders.
<groupbox mold="3d" width="250px">
    <caption label="Default" />
    <slider id="slider1" onScroll="zoom(slider1, img1)" />
    <image id="img1" src="/img/sun.jpg" width="10px" />
</groupbox>
Calendar supports moving to next and previous months using mouse scrolling
With the introduction of ZK 5.0.4 the calendar component has been enhanced so that months can be changed using the mouse scroll wheel, similar to the functionality present in Windows.
Radio can now be placed anywhere
Prior to ZK 5.0.4, a Radio control was required to have a Radiogroup ancestor. With ZK 5.0.4, a Radio can be placed anywhere.
<radiogroup id="rg1"/>
<radiogroup id="rg2"/>
<grid width="300px">
    <rows>
        <row>
            <radio label="radio 1.1" radiogroup="rg1"/>
            <radio label="radio 1.2" radiogroup="rg1"/>
            <radio label="radio 1.3" radiogroup="rg1"/>
        </row>
        <row>
            <radio label="radio 2.1" radiogroup="rg2"/>
            <radio label="radio 2.2" radiogroup="rg2"/>
            <radio label="radio 2.3" radiogroup="rg2"/>
        </row>
    </rows>
</grid>
For more information please take a look at ZK Component Reference: Radiogroup.
Combobox is selectable
In ZK 5.0.4 it is now possible to specify a default selection for the Combobox. The following example demonstrates how to do this.
<combobox id="combobox" width="100px">
    <attribute name="onCreate"><![CDATA[
        List list2 = new ArrayList();
        list2.add("David");
        list2.add("Thomas");
        list2.add("Steven");
        ListModelList lm2 = new ListModelList(list2);
        lm2.addSelection(lm2.get(0));
        combobox.setModel(lm2);
    ]]></attribute>
</combobox>
The lang.xml widget class supports EL
The lang.xml widget class definition now supports EL, enabling widget classes to be loaded dynamically depending on the situation. For example, the following code demonstrates loading a widget class dependent upon a property.
<widget-class>${c:property("whatever")}</widget-class>
By using EL, developers can provide a very different look for different users and conditions. For more information please take a look at Tom Yeh's blog post titled "Totally Different Look per User Without Modifying Application".
Button supports type="submit"
Due to strong demand for integration with legacy applications, the Button now supports the submit type.
<n:form <textbox/> <button type="submit" label="Submit"/> <button type="reset" label="Reset"/> </n:form>
Merging multiple JavaScript files
You can speed up page load times by combining JavaScript files into as few files as possible. ZK 5.0.4 therefore introduces functionality enabling you to easily merge JavaScript files.
Note that merging JavaScript files can be done with JSP, DSP or other technologies without this feature; this feature provides a system-wide way to minimize the number of JavaScript files.
For more information please take a look at ZK Developer's Reference: Performance Tips, and Jumper Chen's blog post titled "Speed up the loading time of a ZK Application".
Communication between iFrames without server push
If your application contains multiple desktops, for example due to iframes in a portal layout, it is now possible to communicate between these instances without the need for server push or a timer, thus minimizing network traffic.
In ZK 5.0.4 the concept of a group has been introduced, which enables the use of a group-scoped event queue for easy communication between the instances. The code below demonstrates an example:
EventQueue que = EventQueues.lookup("groupTest", EventQueues.GROUP, true);
que.subscribe(new EventListener() {
    public void onEvent(Event evt) {
        o.setValue(o.getValue() + evt.getData() + "\n");
    }
});

void publish() {
    String text = i.getValue();
    if (text.length() > 0) {
        i.setValue("");
        que.publish(new Event("onGroupTest", null, text));
    }
}
For more information please take a look at ZK Developer's Reference: Event Queues.
This feature requires ZK EE
Memory minimization by not maintaining server state
ZK 5.0.4 introduces a new feature called "stub only" which will output the client-side resources but will not maintain any server state for them. This is driven by an attribute, stubonly, which takes a Boolean value. When set to true, the server-side state will not be maintained.
This attribute is inherited from the parent, so it applies to any children of a parent with stubonly set to true. Please note that after a stub-only component's HTML has been rendered and sent to the client, the stub-only components cannot be accessed.
The following sample code demonstrates stub-only functionality. Note that in the example the vbox, hbox, label and textbox are all stub only.
<window title="test of stub-only" border="normal">
    <vbox stubonly="true">
        <hbox>
            This is a label at Row 1, Cell 1.
            <textbox/>
            Another label at Row 1, Cell 2 (previous textbox is stub-only too)
        </hbox>
        <hbox>
            Another at Row 2, Cell 1 (and the following listbox is not stub-only)
            <listbox stubonly="false" width="50px">
                <listitem label="item1"/>
                <listitem label="item2"/>
            </listbox>
        </hbox>
    </vbox>
</window>
For more information please take a look at ZK Developer's Reference: Performance Tips.
This feature requires ZK EE
Download & other resources
- Download ZK 5 here
- Take a look at ZK 5's release notes here
- View the ZK 5: Upgrade Notes and a real case: Upgrading to ZK 5
MLP
- Description
- Advantages
- Disadvantages
- Things To Know
- Training Data Format
- Example Code
- Code & Resources
- Documentation
Description
The Multi Layer Perceptron (MLP) algorithm is a powerful form of Artificial Neural Network that is commonly used for regression (and can also be used for classification).

It is a supervised learning algorithm that can be applied to any type of N-dimensional signal.
The MLP algorithm is part of the GRT regression modules.
Advantages
The MLP algorithm is a very good choice for regression and mapping problems. It can map an N-dimensional input signal to an M-dimensional output signal, and this mapping can be non-linear.
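To make the N-dimensional to M-dimensional mapping concrete, here is a minimal sketch of the forward pass of a single-hidden-layer MLP. This is a generic illustration, not part of the GRT API: the function name mlpForward and all weight values are made up. The input is passed through a layer of tanh neurons and then through a linear output layer, which is what allows the overall mapping to be non-linear.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Forward pass of a single-hidden-layer MLP: y = W2 * tanh(W1 * x + b1) + b2.
// The weights are placeholders; in practice they are learned by
// backpropagation (as GRT does internally when you call train()).
std::vector<double> mlpForward(const std::vector<double> &x,
                               const std::vector<std::vector<double>> &W1,
                               const std::vector<double> &b1,
                               const std::vector<std::vector<double>> &W2,
                               const std::vector<double> &b2) {
    std::vector<double> h(W1.size());
    for (std::size_t j = 0; j < W1.size(); j++) {      // hidden layer
        double sum = b1[j];
        for (std::size_t i = 0; i < x.size(); i++) sum += W1[j][i] * x[i];
        h[j] = std::tanh(sum);                         // non-linear activation
    }
    std::vector<double> y(W2.size());
    for (std::size_t k = 0; k < W2.size(); k++) {      // output layer (linear)
        double sum = b2[k];
        for (std::size_t j = 0; j < h.size(); j++) sum += W2[k][j] * h[j];
        y[k] = sum;
    }
    return y;
}
```

With a 3-element input, two hidden neurons and one output neuron, this mirrors (in miniature) the 3-axis gyro to 1-dimensional target mapping used later on this page.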
Disadvantages
The main limitation of the MLP algorithm is that, because of the way it is trained, it cannot guarantee that the minimum it stops at during training is the global minimum. The MLP algorithm can, therefore, get stuck in a local minimum. One option for (somewhat) mitigating this is to train the MLP algorithm several times, using a different random starting position each time, and then pick the model that results in the best RMS error. The number of random training iterations can be set using the setNumRandomTrainingIterations(UINT numRandomTrainingIterations) method. Setting this value to a higher number of random training iterations (e.g. 20) may result in a better classification or regression model, but will increase the total training time. Another limitation of the MLP algorithm is that the number of hidden neurons must be set by the user: setting this value too low may result in the MLP model underfitting, while setting it too high may result in the MLP model overfitting.
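The random-restart strategy itself is simple to picture. The sketch below is a generic illustration only: trainOnce and bestOfNRestarts are hypothetical stand-ins, not GRT functions (GRT runs the equivalent loop internally when you call setNumRandomTrainingIterations). Training is run several times from different random starting points and the result with the lowest error is kept.

```cpp
#include <cstdlib>
#include <limits>

// Stand-in for one full training run from a random starting point; it
// returns the resulting "RMS error". The toy formula just makes the error
// depend on the random start, mimicking different local minima.
double trainOnce(unsigned seed) {
    std::srand(seed);
    double start = (std::rand() % 100) / 100.0;   // random init in [0, 1)
    return 0.1 + start * start;                   // stand-in "RMS error"
}

// Run n restarts and keep the best (lowest-error) result.
double bestOfNRestarts(int n) {
    double best = std::numeric_limits<double>::max();
    for (int i = 0; i < n; i++) {
        double err = trainOnce(static_cast<unsigned>(i));
        if (err < best) best = err;               // keep the lowest error
    }
    return best;
}
```

More restarts can only improve (or match) the best error found, at the cost of proportionally more training time.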
Things To Know
You should always enable scaling with the MLP, as this will give you much better results.
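Scaling normally means mapping each value into a common range such as [0, 1] using the minimum and maximum of the training data. GRT handles this internally when scaling is enabled; the sketch below (minMaxScale is a hypothetical helper, not a GRT function) just illustrates the arithmetic.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Min-max scale a signal into [0, 1]: x' = (x - min) / (max - min).
std::vector<double> minMaxScale(const std::vector<double> &x) {
    double lo = *std::min_element(x.begin(), x.end());
    double hi = *std::max_element(x.begin(), x.end());
    std::vector<double> out(x.size());
    for (std::size_t i = 0; i < x.size(); i++)
        out[i] = (hi > lo) ? (x[i] - lo) / (hi - lo) : 0.0; // guard constant input
    return out;
}
```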
Training Data Format
You should use the LabelledClassificationData data structure to train the MLP for classification and the LabelledRegressionData data structure to train the MLP for regression.
Example Code
This example demonstrates how to initialize, train, and use the MLP algorithm for regression.
The example loads the data shown in the image below and uses this to train the MLP algorithm. The data consists of the 3-axis gyro data from a Wii-mote, which has been labelled with a 1-dimensional target value (i.e. the value the MLP model should output given the 3-dimensional gyro input). The purpose of this exercise is to see if the MLP algorithm can learn to map the values of the Pitch axis of the gyro between 0 and 1, without the Roll and Yaw values corrupting the mapped output value. The Wii-mote was rotated left (around the Pitch axis) and then several seconds of training data were recorded with the target label set to 0. During this time the Wii-mote was moved around the Roll and Yaw axes, but not around the Pitch axis (as much as possible). The label was then changed to 1, the Wii-mote was rotated right (around the Pitch axis), and several seconds of training data were recorded; again the Wii-mote was moved around the Roll and Yaw axes, but not around the Pitch axis. This process was then repeated to record the test dataset.
The images below show the recorded gyro and target values for both the training and test datasets. In the training and test images you can see the raw gyro data in the top row (with red = Roll, green = Pitch, blue = Yaw) and the target values in the bottom row.
You can download the training and test datasets in the Code & Resources section below.
Gyro Training Data (figure: TrainingDataImage1.jpg)
Gyro Test Data (figure: TestDataImage1.jpg)
MLP Results Data (figure: MLPRegressionOutputResultsImage1.jpg): This image shows the output of the trained MLP (in blue) along with the target value (in green) from the test dataset. You can see that the MLP effectively learns to map the 3-axis gyro input to the 1-dimensional target output, although the 'noise' from the Roll and Yaw axes still has some influence on the mapping of the Pitch axis. The RMS error on this test data was 0.06.
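The RMS (root-mean-square) error quoted here is the square root of the mean squared difference between the mapped output and the target values. A minimal sketch of the computation (rmsError is a hypothetical helper; the value reported by GRT's getTestRMSError may be computed slightly differently internally):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// RMS error between predicted and target values:
// sqrt( (1/N) * sum_i (pred_i - target_i)^2 )
double rmsError(const std::vector<double> &pred,
                const std::vector<double> &target) {
    double sumSq = 0.0;
    for (std::size_t i = 0; i < pred.size(); i++) {
        double d = pred[i] - target[i];
        sumSq += d * d;
    }
    return std::sqrt(sumSq / pred.size());
}
```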
GRT MLP Regression Example
This example demonstrates how to initialize, train, and use the MLP algorithm for regression.
The Multi Layer Perceptron (MLP) algorithm is a powerful form of an Artificial Neural Network that is commonly used for regression.
In this example we create an instance of an MLP algorithm and then train the algorithm using some pre-recorded training data.
The trained MLP algorithm is then used to perform regression on the test data.
This example shows you how to:
- Create an initialize the MLP algorithm for regression
- Create a new instance of a GestureRecognitionPipeline and add the regression instance to the pipeline
- Load some LabelledRegressionData from a file
- Train the MLP algorithm using the training dataset
- Test the MLP algorithm using the test dataset
- Save the output of the MLP algorithm to a file
#include "GRT.h"
using namespace GRT;

int main (int argc, const char * argv[])
{
    //Turn on the training log so we can print the training status of the MLP to the screen
    TrainingLog::enableLogging( true );

    //Load the training data
    LabelledRegressionData trainingData;
    LabelledRegressionData testData;

    if( !trainingData.loadDatasetFromFile("MLPRegressionTrainingData.txt") ){
        cout << "ERROR: Failed to load training data!\n";
        return EXIT_FAILURE;
    }

    if( !testData.loadDatasetFromFile("MLPRegressionTestData.txt") ){
        cout << "ERROR: Failed to load test data!\n";
        return EXIT_FAILURE;
    }

    //Make sure the dimensionality of the training and test data matches
    if( trainingData.getNumInputDimensions() != testData.getNumInputDimensions() ){
        cout << "ERROR: The number of input dimensions in the training data (" << trainingData.getNumInputDimensions() << ")";
        cout << " does not match the number of input dimensions in the test data (" << testData.getNumInputDimensions() << ")\n";
        return EXIT_FAILURE;
    }

    if( trainingData.getNumTargetDimensions() != testData.getNumTargetDimensions() ){
        cout << "ERROR: The number of target dimensions in the training data (" << trainingData.getNumTargetDimensions() << ")";
        cout << " does not match the number of target dimensions in the test data (" << testData.getNumTargetDimensions() << ")\n";
        return EXIT_FAILURE;
    }

    cout << "Training and Test datasets loaded\n";

    //Print the stats of the datasets
    cout << "Training data stats:\n";
    trainingData.printStats();
    cout << "Test data stats:\n";
    testData.printStats();

    //Create a new gesture recognition pipeline
    GestureRecognitionPipeline pipeline;

    //Setup the MLP, the number of input and output neurons must match the dimensionality of the training/test datasets
    MLP mlp;
    unsigned int numInputNeurons = trainingData.getNumInputDimensions();
    unsigned int numHiddenNeurons = 5;
    unsigned int numOutputNeurons = trainingData.getNumTargetDimensions();

    //Initialize the MLP
    mlp.init(numInputNeurons, numHiddenNeurons, numOutputNeurons);

    //Set the training settings
    mlp.setMaxNumEpochs( 500 ); //This sets the maximum number of epochs (1 epoch is 1 complete iteration of the training data) that are allowed
    mlp.setMinChange( 1.0e-5 ); //This sets the minimum change allowed in training error between any two epochs
    mlp.setNumRandomTrainingIterations( 20 ); //This sets the number of times the MLP will be trained, each training iteration starts with new random values
    mlp.setUseValidationSet( true ); //This sets aside a small portion of the training data to be used as a validation set to mitigate overfitting
    mlp.setValidationSetSize( 20 ); //Use 20% of the training data for validation during the training phase
    mlp.setRandomiseTrainingOrder( true ); //Randomize the order of the training data so that the training algorithm does not bias the training

    //The MLP generally works much better if the training and prediction data is first scaled to a common range (i.e. [0.0 1.0])
    mlp.enableScaling( true );

    //Add the MLP to the pipeline
    pipeline.setRegressifier( mlp );

    //Train the MLP model
    cout << "Training MLP model...\n";
    if( !pipeline.train( trainingData ) ){
        cout << "ERROR: Failed to train MLP model!\n";
        return EXIT_FAILURE;
    }
    cout << "Model trained.\n";

    //Test the model
    cout << "Testing MLP model...\n";
    if( !pipeline.test( testData ) ){
        cout << "ERROR: Failed to test MLP model!\n";
        return EXIT_FAILURE;
    }
    cout << "Test complete. Test RMS error: " << pipeline.getTestRMSError() << endl;

    //Run back over the test data again and output the results to a file
    fstream file;
    file.open("MLPResultsData.txt", fstream::out);

    for(UINT i=0; i<testData.getNumSamples(); i++){
        vector< double > inputVector = testData[i].getInputVector();
        vector< double > targetVector = testData[i].getTargetVector();

        //Map the input vector using the trained regression model
        if( !pipeline.predict( inputVector ) ){
            cout << "ERROR: Failed to map test sample " << i << endl;
            return EXIT_FAILURE;
        }

        //Get the mapped regression data
        vector< double > outputVector = pipeline.getRegressionData();

        //Write the mapped value and also the target value to the file
        for(UINT j=0; j<outputVector.size(); j++){
            file << outputVector[j] << "\t";
        }
        for(UINT j=0; j<targetVector.size(); j++){
            file << targetVector[j] << "\t";
        }
        file << endl;
    }

    //Close the file
    file.close();

    return EXIT_SUCCESS;
}
Code & Resources
MLPRegressionExample.cpp MLPRegressionTrainingData.txt MLPRegressionTestData.txt
Documentation
You can find the documentation for this class at MLP documentation.
#include <sys/audio/audio_driver.h>

void audio_dev_add_engine(audio_dev_t *adev, audio_engine_t *engine);
void audio_dev_remove_engine(audio_dev_t *adev, audio_engine_t *engine);
adev: pointer to an audio device allocated with audio_dev_alloc(9F)

engine: pointer to an audio engine allocated with audio_engine_alloc(9F)
Solaris DDI specific (Solaris DDI)
The audio_dev_add_engine() function associates an allocated and initialized engine with an audio device.
Multiple engines may be added to an audio device in this fashion. Usually device drivers perform this at least twice: once for a playback engine and once for a record engine. Multiple playback engines can be especially useful in allowing the framework to avoid software mixing overhead or to support engines running with different parameters. For example, different engines may support different audio formats or different sample rates.
Generally, audio_dev_add_engine() should be called before registering the audio device with the audio framework using audio_dev_register(9F).
The audio_dev_remove_engine() function removes an engine from the list of engines associated with a device. This is generally called during detach(9E) processing.
These functions may be called from user or kernel context only.
See attributes(5) for descriptions of the following attributes:
attributes(5), audio(7D), detach(9E), audio_dev_alloc(9F), audio_dev_register(9F), audio_dev_unregister(9F), audio_engine_alloc(9F)
On Sunday, 12 November 2006, 14:11, Sepherosa Ziehau wrote:
> I assume you have failed to installworld before this compiling error.
> If my assumption is correct, then add the following lines after the last
> #include ... in src/usr.sbin/makewhatis/makewhatis.c:
>
> #ifndef MACHINE
> #define MACHINE "pc32"
> #endif
>
> Don't forget to nuke it once you have "a whole new world" :-)
>
> BTW, before you installworld, please make sure to:
> export MACHINE=pc32
> Thank Sascha for this tip :-)
>
> Best Regards,
> sephe

Sephe, thanks, that fixed it ;-)

Thomas
28 May 2010 16:07 [Source: ICIS news]
SHANGHAI (ICIS news)--Bayer MaterialScience (BMS) is confident that its 2010 full-year earnings before interest, taxation, depreciation and amortisation (EBITDA) will be double those of the previous year, the company’s CEO said on Friday.
BMS's Patrick Thomas said he expected the second quarter to be slightly better than the first, while the 2010 full-year results would be double those of 2009.
He was interviewed at the opening of BMS's AutoCreative centre.
The driver of the company's growth in 2010 would be polycarbonates (PC), owing to the boom in electronics, office equipment and flat-screen TV panels, Patrick Thomas said.
“Meanwhile, optic disc storage is also performing surprisingly well,” he said.
Each year BMS purchases $3bn-4bn (€2.4bn-3.2bn) worth of benzene, toluene and phenol for feedstock, which leaves the company open to risks from volatile aromatics prices, he said.
“The downside risk is only on high material cost,” said Thomas.
The company is a major producer of polyurethanes (PU), toluene diisocyanate (TDI) and methylene di-p-phenylene (MDI).
PU demand had seen speculative growth in
“This year the growth won’t be that much and will be closer to GDP. If demand remains the same as [the first quarter], the annual growth of PU demand from the automotive sector in
The company’s CEO said: “As the world’s largest consumer and manufacturer of automobiles,
($1 = €0.81)
For more on Bayer MaterialScience visit ICIS company intelligence
For more on polycarbonates, visit ICIS
I was having a look around the Instructables site, and saw some Matrix screen makers.
I like writing computer programs, and one day decided to make one of these, so I am going to show you how!
You must have the Microsoft .NET Framework 3.5 installed to do this.
Please rate; it is my first instructable, and I want to know how I did.
**UPDATE**
If you do not have the Microsoft.NET Framework 3.5, you can easily download it from the Microsoft Download site (download.microsoft.com), and search for .NET 3.5.
I have made a new version that spits out random characters, instead of just numbers.
It DOES NOT show a screenshot of the matrix, or show a 3D screen. Just random letters. In green.
Step 1: Coding
You need to download the code file attached and save it into your My Documents folder. If you are interested in computer programming, this program might be interesting to look at. You need to copy all of the italic text and save it to a file called Program.txt.
using System;

namespace Matrix_V2
{
    class Program
    {
        static void Main(string[] args)
        {
            //Sets the text color to green
            Console.ForegroundColor = ConsoleColor.Green;

            //Create a string with some random characters
            string random_characters = "£¤¥¦§¨©ª«¬®¯±²³´µ¶·¸¹ºΣΤΦΩαβδεμπστφABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz<,>.?/:;\"\'{[}]\\|`~0123456790-_=+!@#$%^&*() ";

            //Get all of those characters and turn them into an "array"
            char[] random_characters_array = random_characters.ToCharArray();

            //Clear the screen
            Console.Clear();

            //Writes details about the application to the console screen
            Console.Title = "Matrix V2 - Press Ctrl+C to exit";
            Console.WriteLine("Matrix V2");
            Console.WriteLine("Written by Chris Ward");
            Console.WriteLine("");
            Console.Write("Press any key to continue");
            Console.ReadKey();

            //Creates a pseudo-random generator
            Random r = new Random();

            //Creates a statement that runs forever
            while (true)
            {
                //Gets the character from the array, based on what the random number is
                Console.Write(random_characters_array[r.Next(random_characters.Length)]);
                //then runs the statement again... and again... etc.
            }
        }
    }
}
so i have to have the frame work in order for this to work, and i really dont want just random as letters and numbers, does it look exactly like it in the picture?
My batch file won't change into an exe file. Is this because I have Microsoft .NET Framework 4.5?
#no data and keypress, fast running
#maker error fixes(like spaces)
# link:
Thank you!
I removed author data and "press enter" to start
it runs, closes just like its supposed to. but no file is created. HELPPPP
The directory name is invalid.
Press any key to continue . . .
HELPP
Type cd Location of files
Press enter
copy the line for the compiler, the really long one.
When I run the program a window opens (windows\system32\cmd.exe) and it says press any key to continue . . . and if I press anything it just closes. What could possibly be the problem?
error
warning CS1691: '1702/errorreporting:prompt' is not a valid warning number
Thank you Michael Fitzgerald
Note: Currently Running Windows Vista Home Premuim
/nowarn:1701,1702(*)/errorreport:prompt
Program.txt(23,8):error CS0116: A namespace does not directly contain members such as feilds or methods
Program.txt(23,42):error CS1022: Type of namespace definition, or end-of-the-file expected
Can you help at all?
Chatlog 2010-04-15
From RDFa Working Group Wiki
See CommonScribe Control Panel, original RRSAgent log and preview nicely formatted version.
13:50:28 <RRSAgent> RRSAgent has joined #rdfa
13:50:28 <RRSAgent> logging to
13:50:43 <manu> trackbot, start meeting
13:50:46 <trackbot> RRSAgent, make logs world
13:50:48 <trackbot> Zakim, this will be 7332
13:50:48 <Zakim> ok, trackbot; I see SW_RDFa()10:00AM scheduled to start in 10 minutes
13:50:49 <trackbot> Meeting: RDFa Working Group Teleconference
13:50:49 <trackbot> Date: 15 April 2010
13:51:35 <manu> Present: Ivan, Steven, MarkB, Manu, Benjamin, Knud, Shane
13:51:41 <manu> Regrets: BenA, Toby
13:51:43 <manu> Chair: Manu
13:52:12 <manu> rrsagent, make logs public
13:53:24 <markbirbeck> markbirbeck has joined #rdfa
13:58:54 <Zakim> SW_RDFa()10:00AM has now started
13:59:01 <Zakim> +Benjamin
13:59:38 <Zakim> +??P9
13:59:48 <manu> zakim, I am ??P9
13:59:48 <Zakim> +manu; got it
14:00:17 <ivan> zakim, dial ivan-voip
14:00:17 <Zakim> ok, ivan; the call is being made
14:00:18 <Zakim> +Ivan
14:00:37 <ShaneM> ShaneM has joined #rdfa
14:01:15 <markbirbeck> zakim, code?
14:01:16 <Zakim> the conference code is 7332 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), markbirbeck
14:01:27 <Knud> Knud has joined #rdfa
14:01:49 <Zakim> +knud
14:01:56 <Zakim> +markbirbeck
14:02:21 <Steven> zakim, dial steven-617
14:02:21 <Zakim> ok, Steven; the call is being made
14:02:22 <Zakim> +Steven
14:03:15 <manu> Agenda:
14:03:31 <Zakim> +ShaneM
14:04:08 <manu> scribenick: ivan
14:04:42 <ivan> Topic: Resolutions on FPWD Items
14:04:56 <ivan> manu: a couple of resolutions should be on records,
14:05:02 <ivan> ... get the issues closed
14:05:12 <ivan> ... and have a resolution on getting fpwd-s
14:05:27 <manu>
14:05:28 <ivan> manu: we had a poll that we did not record
14:05:41 <Knud> zakim, mute me
14:05:41 <Zakim> knud should now be muted
14:05:44 <ivan> manu: this covered the four items that had a wide agreement
14:05:53 <ivan> ... first: supporting of @profiles
14:06:13 <ivan> ... looking at it there were 2 against, we covered their reasons
14:06:21 <ivan> ...
we should not rehash that
14:06:36 <manu> PROPOSAL: Support the general concept of RDFa Profiles - an external document that specifies keywords for CURIEs.
14:07:29 <ivan> ivan: +1
14:07:38 <manu> +1
14:07:38 <Benjamin> +1
14:07:41 <Knud> +1
14:07:47 <Steven> +1
14:07:50 <markbirbeck> +1
14:07:56 <Steven> This is not a vote - it's a straw poll that demonstrates rough consensus among the RDFa WG.
14:07:57 <ShaneM> +1
14:08:09 <manu> RESOLVED: Support the general concept of RDFa Profiles - an external document that specifies keywords for CURIEs.
14:08:38 <manu> PROPOSAL: Support the concept of having a default prefix mechanism without RDFS resolution.
14:08:41 <ivan> ivan: +1
14:08:50 <manu> +1
14:08:50 <Benjamin> +1
14:08:51 <Knud> +1
14:08:55 <Steven> +1
14:08:59 <markbirbeck> +1
14:09:16 <ShaneM> +1
14:09:26 <manu> RESOLVED: Support the concept of having a default prefix mechanism without RDFS resolution.
14:10:09 <manu> PROPOSAL: Support expressing the RDFa Profile document in RDFa (for example: rdfa:prefix/rdfa:keyword, or rdfa:alias)
14:10:16 <ivan> ivan: +1
14:10:18 <Steven> +1
14:10:19 <manu> +1
14:10:23 <Benjamin> +1
14:10:32 <Knud> +1
14:11:12 <ShaneM> +1
14:11:13 <markbirbeck> -1
14:12:19 <ivan> steven: mark, do you oppose this proposal?
14:12:37 <ivan> mark: I would be fine if we changed it to 'one of the possible mechanism would be rdfa'
14:12:49 <ivan> ... I think we can still have that discussion
14:13:10 <ivan> manu: we had a bit of discussions with that wording and we had a general discussion based on that - we don't want to change the proposal at this point because there is wide agreement to this wording and it could impact FPWD.
14:13:22 <ivan> ...
looking at the proposal and the +1-s I would resolve it and we can have a discussion at a later stage
14:13:25 <manu> RESOLVED: Support expressing the RDFa Profile document in RDFa (for example: rdfa:prefix/rdfa:keyword, or rdfa:alias)
14:14:12 <manu> PROPOSAL: Provide an alternate mechanism to express mappings that does not depend on xmlns: (for example: @token, @vocab or @map)
14:14:20 <ivan> ivan: +1
14:14:25 <manu> +1
14:14:26 <Benjamin> +1
14:14:29 <Knud> +1
14:14:32 <Steven> -1
14:14:32 <markbirbeck> +1
14:14:50 <ivan> ivan: same question to steven... does he oppose or can live with it?
14:15:18 <ivan> steven: I was not sure whether I should say -1 or 0, an alternate means 'as well as'
14:15:27 <ShaneM> +1
14:15:27 <ivan> manu: this is really for languages without @xmlns:
14:15:52 <ivan> ... and whether or not namespaces exist in html5 at the conceptual level is debatable, but the WHATWG folks are claiming so
14:16:03 <ivan> ... the vast majority of our arguments over namespaces and @xmlns: (RDFa doesn't require either) revolved around that
14:16:14 <ivan> ... we want RDFa to be used in languages that do not have @xmlns: or namespaced elements
14:16:20 <ivan> ... for those languages, @prefix makes more sense than @xmlns:
14:16:21 <ShaneM> Moreover using xmlns pollutes the namespaces of a parser unnecessarily.
14:16:40 <ivan> Steven: I do not agree that html5 does not fall into this category
14:16:43 <ivan> q+
14:16:48 <manu> ack ivan
14:17:58 <manu> RESOLVED: Provide an alternate mechanism to express mappings that does not depend on xmlns: (for example: @token, @vocab or @map)
14:18:12 <manu> Topic: Deprecation of @xmlns: in RDFa 1.1
14:18:21 <ivan> ...
it is in the current version of RDFa Core
14:18:42 <manu> +1 for deprecation of xmlns:
14:18:49 <Steven> -1 for deprecation
14:18:50 <manu> Ivan: I can live with deprecation of xmlns:
14:19:08 <manu> Ivan: we need a resolution for this if we are going to have it in RDFa Core 1.1 FPWD
14:19:16 <ivan> shane: I did this offline, asked Manu, he agreed and we added the text in there
14:19:32 <ivan> ... I agree that this should be discussed by the WG
14:19:38 <ivan> ... since having two is confusing
14:19:53 <ivan> manu: the reason I thought we would be going this direction is because we've had this discussion before in RDFa WG - Whether or not to deprecate xmlns:
14:20:02 <ivan> ... the issue is confusing - having two equal prefixing mechanisms
14:20:09 <ivan> ... we've also talked about the namespace issues - how RDFa doesn't need namespaces and how using xmlns: confuses a great number of people.
14:20:10 <Steven> I disagree more strongly on this one than the last
14:20:20 <ivan> ... If we had known what we know now about the confusion xmlns: creates in regular web developers. Some people still think that RDFa requires namespaces (even though RDFa doesn't require namespaces). Back in RDFa 1.0, when we re-used xmlns:, we would have probably defined a new attribute instead of re-using @xmlns: if we know what we know now (which is impossible)... looks like we'll need to discuss this in much more depth, then.
14:20:30 <ivan> steven: I am against deprecating it
14:20:30 <markbirbeck> q+
14:20:44 <ivan> ... I do not like breaking backward compatibility
14:20:48 <ivan> manu: it does not break backward compatibility
14:21:01 <ivan> ... deprecation means a strong a signal not to use
14:21:15 <ivan> shane: technically it means it is not removed yet but it can be
14:21:31 <manu> ack mark
14:21:36 <ivan> ... steven, if it said 'prefix is preferred, is that fine'?
14:21:38 <ivan> steven: yes 14:21:46 <ivan> mark: 'deprecated' means there is a decision to remove it in the future 14:21:56 <ivan> ... we have to send a strong signal 14:22:48 <ivan> ... I do not agree that we would have not used xmlns: - done it differently 14:23:01 <manu> q+ to clarify "we'd do it differently" 14:23:10 <ivan> ... at the time we used what w3c had an emphasis on at the time - xmlns: - now things have changed, not as much of an emphasis on namespaces and xmlns: - we made the right decision in the context of what was going on at the time. 14:23:20 <ShaneM> +1 to marks concern 14:23:54 <ivan> ack manu 14:23:54 <Zakim> manu, you wanted to clarify "we'd do it differently" 14:23:54 <Steven> +1 to Mark 14:24:59 <Knud> "xmlns is discouraged"? 14:25:09 <markbirbeck> +1 to Knud 14:25:46 <Zakim> -ShaneM 14:25:47 <Zakim> +ShaneM 14:26:06 <ivan> PROPOSAL: the FPWD should say something like "prefix is preferred" but not explicitly deprecate xmlns 14:26:20 <ShaneM> +1 14:26:21 <manu> +1 14:26:22 <ivan> ivan: +1 14:26:28 <Knud> +1 14:26:31 <Benjamin> +1 14:26:31 <Steven> I can live with that 14:28:31 <markbirbeck> 0 14:28:35 <ivan> markbirbeck: That doesn't send a very strong message, does it? 14:29:52 <manu> PROPOSAL: Remove mention of "xmlns: is deprecated" from the RDFa Core 1.1 FPWD 14:30:08 <manu> +1 14:30:08 <ivan> ivan: +1 14:30:10 <markbirbeck> +1 14:30:23 <Knud> +1 14:30:23 <Benjamin> +1 14:30:24 <Steven> +1 14:30:35 <ShaneM> +1 14:30:36 <ivan> RESOLVED: Remove mention of "xmlns: is deprecated" from the RDFa Core 1.1 FPWD 14:30:45 <ivan> manu: We will have to discuss this in more depth and reach some kind of consensus about deprecating xmlns: after the FPWDs are out there. 14:31:03 <manu> Topic: Resolve to Publish RDFa Core 1.1 and XHTML+RDFa 1.1 FPWD 14:31:16 <ivan> manu: shane, an overview? 14:31:46 <ivan> shane: as far as can see, modulo pubrules, the document is in agreement with the resolutions of the group 14:31:56 <ivan> ... 
fpwd does not have to be perfect 14:32:17 <ivan> ... xhtml did not have the same review as core, but that is all right, not much changed there since XHTML+RDFa 1.0 :-) 14:32:26 <ivan> Ivan: I have concerns about the core and not publishing RDFa DOM API at the same time 14:32:42 <ivan> ... as soon as we put it out to the public, we will have the public reacting negatively to not publishing RDFa DOM API with the other two documents. 14:32:40 <ivan> manu: Shane, got a link to the RDFa Core 1.1 and XHTML+RDFa 1.1 documents? 14:32:55 <ShaneM> 14:33:11 <ShaneM> 14:33:20 <ShaneM> 14:33:25 <ivan> q+ 14:33:34 <manu> ack ivan 14:34:08 <manu> PROPOSAL: Publish RDFa Core 1.1 as First Public Working Draft 14:34:53 <manu> Ivan: Are we going to publish RDFa DOM API now as well? 14:35:37 <manu> Ivan: I think people might misunderstand the publishing RDFa DOM API at a later date as something negative. 14:35:46 <manu> q+ to discuss RDFa DOM API publication 14:35:50 <markbirbeck> q+ 14:36:19 <manu> Ivan: I'm concerned that people may think we're not concerned about the RDFa DOM API - we do care about it, very much. 14:36:23 <manu> ack markbirbeck 14:36:30 <ivan> mark: I can understand your concern, Ivan 14:36:33 <ivan> ... but I disagree 14:36:52 <ivan> ...the audience to this spec is very different 14:37:12 <ivan> .. my feeling is that the rdfa core and the xhtml will go unnoticed by general web developers. 14:37:21 <ivan> ... but RDFa itself is the story and it's evolved 14:37:30 <ivan> ... however the dom api is a different audience, different story - audience is parser developers 14:37:41 <ivan> ... RDFa DOM API is really aimed at web developers and we really think we should aim it at the html authors 14:37:42 <ivan> q+ 14:37:46 <manu> ack manu 14:37:46 <Zakim> manu, you wanted to discuss RDFa DOM API publication 14:37:51 <ivan> manu: I agree with mark 14:38:13 <ivan> ... 
i do not want us to get into mind set where we think that all of these specs must be published at the same time. 14:38:23 <ivan> ... We shouldn't create artificial ties between the documents that do not exist. 14:38:33 <ivan> ... but, let's suppose that all of Ivan's fears come true - bad community backlash due to a misunderstanding of where our priorities are 14:38:41 <ivan> ... we have to have courage, and take the heat if that happens 14:38:54 <ivan> ... we are not talking about pushing the dom api by a couple of months, we are talking about slipping publication by two weeks. 14:39:07 <ivan> ... if slipping the date by two weeks ends up resulting in nasty remarks about the RDFa WG 14:39:19 <ivan> ... those nasty remarks will be invalidated after two weeks time - when we publish the RDFa DOM API document 14:39:22 <manu> ack ivan 14:40:15 <markbirbeck> Fair point Ivan. I was bending the stick too far. :) 14:41:35 <manu> Ivan: I hope I'm being paranoid - and I wouldn't object to FPWD. 14:41:52 <manu> Ivan: I think these are the same audiences - we've changed some pretty major stuff. 14:42:10 <Zakim> +knud 14:42:16 <manu> PROPOSAL: Publish RDFa Core 1.1 as First Public Working Draft 14:43:05 <manu> +1 14:43:05 <ivan> ivan: +0.5 14:43:06 <markbirbeck> +1 14:43:07 <Benjamin> +1 14:43:10 <Knud> +1 14:43:11 <ShaneM> +1 14:43:11 <markbirbeck> :) 14:43:22 <Steven> +1 14:43:36 <manu> RESOLVED: Publish RDFa Core 1.1 as First Public Working Draft 14:44:00 <manu> PROPOSAL: Publish XHTML+RDFa 1.1 as First Public Working Draft 14:44:04 <manu> +1 14:44:04 <Steven> +1 14:44:06 <Benjamin> +1 14:44:07 <markbirbeck> +1 14:44:09 <Knud> +1 14:44:13 <ivan> ivan: +0.5 (just to be consistent) 14:44:23 <markbirbeck> I was wondering what you'd do. :) 14:44:24 <ShaneM> +1 14:44:29 <manu> RESOLVED: Publish XHTML+RDFa 1.1 as First Public Working Draft 14:45:14 <ivan> manu: Great job guys on these FPWD! 
Many thanks to Shane who worked tirelessly to get these documents into shape over the past several weeks! 14:46:45 <ivan> clap clap clap 14:46:50 <ivan> wohooo 14:46:52 <ivan> etc 14:46:56 <markbirbeck> Nice work Shane! 14:47:08 <ivan> Topic: RDFa DOM API 14:47:25 <ivan> manu: I have not put the API on the focus on the agendas for the past two months and I'm afraid that has put us in this situation of not being able to publish RDFa DOM API FPWD along with RDFa Core and XHTML+RDFa - so let's put all of our focus on RDFa DOM API now... get it to FPWD quickly. 14:47:48 <ivan> ... Benjamin, Mark and and I had discussion on how to improve it 14:47:54 <markbirbeck> q+ To apologise for causing delay on DOM API. 14:48:02 <ivan> ... what we want to do is to focus solely on the dom api for the coming 2 weeks 14:48:21 <Benjamin> Current version of the RDFa DOM API document: 14:48:29 <ivan> mark: apologize for causing delay, I was away with no internet connection... 14:48:44 <Benjamin> And the latest version of the Javascript prototype: 14:48:55 <ivan> ... the key issue I am trying to push this towards 14:49:18 <ivan> ... we should give people an api to select the elements of the dom that resulted in a triple in the triple store 14:49:32 <ivan> ... I put something up today for us to discuss 14:49:47 <ivan> manu: the concern I had is that I cannot implement element tracking in Firefox using the librdfa parser 14:50:04 <ivan> ... i know we are talking about an rdfa api 14:50:22 <ivan> ... but it will be very difficult to implement that for implementers that don't have access to the core DOM document object 14:50:31 <ivan> ... i do not know how to implement that in c and c++ in Firefox. 14:50:38 <ivan> mark: i think it is pretty easy 14:50:46 <ivan> manu: i would like to see some code 14:50:58 <ivan> ... if we can implement it in the c and c++ in Firefox, then we should have the feature. 
14:51:11 <ivan> mark: this raises the question what we want to achieve with this api 14:51:26 <ivan> ... just querying triples is not really useful 14:51:54 <ivan> manu: that is not what i mean; if we want people to write Firefox extensions that modify the dom and give them extra methods - if we can't do that in a Firefox extension, we have a problem. 14:52:12 <ivan> ... this is usually done is c and c++, and we especially have this issue with the new @profile attribute. 14:52:34 <ivan> ... I do not think you can do it in pure javascript - dereference external @profile documents. 14:52:41 <ivan> ... this is not about implementing it in Redland, you can do that easily. 14:52:57 <ivan> ... it is about the restrictions that Firefox and Chrome put on their extension writers 14:53:15 <ivan> mark: if we want to do something for the in-browser developers, we have to see what is useful to those developers - tying to elements is very useful. 14:53:18 <manu> +1 to what Mark just said. 14:53:28 <ivan> ... we may need an additional thing in the api 14:53:44 <ivan> ... maybe we need some events that get passed 14:53:54 <ivan> ... we have to try to solve this rather than drop it 14:54:30 <ivan> manu: with that said, do you have examples of extending the Document object in Firefox? Not using Javascript - but with C/C++? 14:55:02 <ivan> markbirbeck: we had all kinds of things experimented with in our xforms work, there are lots of stuff we looked at 14:55:18 <ivan> manu: are you opposed getting just triples in javascript? 14:55:44 <ivan> markbirbeck: i do not have a problem with some kind of layering 14:55:55 <ivan> ... eg in sparql you have the notion of projection 14:56:08 <ivan> ... the result is the set of results with all kinds of properties 14:56:16 <ivan> ... you get back objects 14:56:32 <ivan> ... that is natural for js programmers 14:56:34 <ivan> q+ 14:56:37 <Benjamin> The current API version may be easily extended to query DOM nodes with certain RDFa content. 
14:56:37 <ivan> ack markbirbeck 14:56:37 <Zakim> markbirbeck, you wanted to apologise for causing delay on DOM API. 14:56:38 <manu> ack mark 14:57:07 <ivan> markbirbeck: i have not looked at other languages, we may have a language specific holes where objects can be used 14:57:22 <ivan> ... and languages should fill that in - use whatever makes sense natively - objects in object-oriented languages. 14:57:36 <ivan> ... but all objects should have a pointer at that element where the triple comes from 14:57:59 <Benjamin> q+ 14:58:23 <ivan> ... we get both the semantics and the element that produced that 14:58:26 <manu> ack ivan 14:59:38 <manu> q+ to discuss triples-as-objects 14:59:41 <Benjamin> -1 to Ivans proposal 14:59:48 <manu> ack benjamin 15:00:04 <manu> Ivan: We don't have to provide every feature when doing a FPWD - do we really need this in there. 15:00:13 <ivan> Benjamin: The RDFa DOM API is not in a publish-able state right now - we cannot publish it today 15:00:26 <ivan> ... I think we should reach a concensus about the general style of the document 15:00:49 <ivan> ... we should get a feeling for what the api would look like 15:00:51 <manu> q- 15:00:55 <manu> q+ to end the telecon 15:01:04 <ivan> manu: we can add mark's proposal to this and see how it works together with the stuff that's already in there. 15:01:10 <ShaneM> Remember that published documents have their own momentum... Once it starts rolling in a certain direction it is hard to change. The faster it rolls the harder it is to redirect. 15:01:40 <ivan> manu: mark, what would help us most is to give us examples 15:01:47 <ivan> ... see how we can have this happen 15:01:53 <ivan> meeting adjourned 15:02:10 <Zakim> -markbirbeck 15:02:12 <Zakim> -Steven 15:02:14 <Zakim> -knud 15:02:20 <Zakim> -Benjamin 15:02:31 <Knud> +1 to what Shane just said 15:02:50 <markbirbeck> +1.5 15:03:00 <markbirbeck> (I'm using up the bits that Ivan didn't use. :)) # SPECIAL MARKER FOR CHATSYNC. 
DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000315
@Documented
@Retention(value=RUNTIME)
@Target(value=TYPE)
public @interface AutoClone
Class annotation used to assist in the creation of Cloneable classes. The @AutoClone annotation instructs the compiler to execute an AST transformation which adds a public clone() method and adds Cloneable to the list of interfaces which the class implements.

Because the JVM doesn't have a one-size-fits-all cloning strategy, several customizations exist for the cloning implementation. By default, the clone() method will call super.clone() before calling clone() on each Cloneable property of the class.
Example usage:
import groovy.transform.AutoClone

@AutoClone
class Person {
    String first, last
    List favItems
    Date since
}

Which will create a class of the following form:

class Person implements Cloneable {
    ...
    public Object clone() throws CloneNotSupportedException {
        Object result = super.clone()
        result.favItems = favItems.clone()
        result.since = since.clone()
        return result
    }
    ...
}

Which can be used as follows:

def p = new Person(first:'John', last:'Smith', favItems:['ipod', 'shiraz'], since:new Date())
def p2 = p.clone()

assert p instanceof Cloneable
assert p.favItems instanceof Cloneable
assert p.since instanceof Cloneable
assert !(p.first instanceof Cloneable)

assert !p.is(p2)
assert !p.favItems.is(p2.favItems)
assert !p.since.is(p2.since)
assert p.first.is(p2.first)

In the above example, super.clone() is called, which in this case calls clone() from java.lang.Object. This does a bit-wise copy of all the properties (references and primitive values). A property like first has type String, which is not Cloneable, so it is left as the bit-wise copy. Both Date and ArrayList are Cloneable, so the clone() method on each of those properties will be called. For the list, a shallow copy is made during its clone() method.
If your classes require deep cloning, it is up to you to provide the appropriate deep cloning logic in the respective clone() method for your class. If one of your properties contains an object that doesn't support cloning, or attempts deep copying of a data structure containing an object that doesn't support cloning, then a CloneNotSupportedException may occur at runtime.
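Where the default bit-wise behaviour isn't enough, the deep cloning logic described above can be written by hand. The following is a sketch in Java rather than Groovy, and the Playlist class and its fields are invented for illustration (they are not part of @AutoClone's output); it follows the same recipe as the generated code shown earlier: call super.clone() first, then explicitly deep-copy each mutable member.

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Hypothetical example class, not generated by @AutoClone.
public class Playlist implements Cloneable {
    List<String> tracks = new ArrayList<>();
    Date created = new Date(0);

    @Override
    public Playlist clone() {
        try {
            // Bit-wise copy of all members first, as the CLONE style does.
            Playlist result = (Playlist) super.clone();
            // Then deep-copy each mutable member so the copies are independent.
            result.tracks = new ArrayList<>(this.tracks);
            result.created = (Date) this.created.clone();
            return result;
        } catch (CloneNotSupportedException e) {
            // Cannot happen: this class implements Cloneable.
            throw new AssertionError(e);
        }
    }
}
```

Declaring the override with a covariant Playlist return type and without the checked CloneNotSupportedException is a common Java refinement; the Groovy-generated code shown earlier keeps the throws clause instead.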
Another popular cloning strategy is known as the copy constructor pattern. If any of your fields are final and Cloneable you should set style=COPY_CONSTRUCTOR, which will then use the copy constructor pattern. Here is an example making use of the copy constructor pattern:

import groovy.transform.AutoClone
import static groovy.transform.AutoCloneStyle.*

@AutoClone(style=COPY_CONSTRUCTOR)
class Person {
    final String first, last
    final Date birthday
}

@AutoClone(style=COPY_CONSTRUCTOR)
class Customer extends Person {
    final int numPurchases
    final List favItems
}

Which will create classes of the following form:

class Person implements Cloneable {
    ...
    protected Person(Person other) throws CloneNotSupportedException {
        first = other.first
        last = other.last
        birthday = other.birthday.clone()
    }
    public Object clone() throws CloneNotSupportedException {
        return new Person(this)
    }
    ...
}

class Customer extends Person {
    ...
    protected Customer(Customer other) throws CloneNotSupportedException {
        super(other)
        numPurchases = other.numPurchases
        favItems = other.favItems.clone()
    }
    public Object clone() throws CloneNotSupportedException {
        return new Customer(this)
    }
    ...
}

If you use this style on a child class, the parent class must also have a copy constructor (created using this annotation or by hand). This approach can be slightly slower than the traditional cloning approach, but the Cloneable fields of your class can be final.
As a variation of the last two styles, if you set style=SIMPLE then the no-arg constructor will be called, followed by setting the individual properties (and/or fields), calling clone() if the property/field implements Cloneable. Here is an example:

import groovy.transform.AutoClone
import static groovy.transform.AutoCloneStyle.*

@AutoClone(style=SIMPLE)
class Person {
    final String first, last
    final Date birthday
}

@AutoClone(style=SIMPLE)
class Customer {
    final List favItems
}

Which will create classes as follows:

class Person implements Cloneable {
    ...
    public Object clone() throws CloneNotSupportedException {
        def result = new Person()
        copyOrCloneMembers(result)
        return result
    }
    protected void copyOrCloneMembers(other) {
        other.first = first
        other.last = last
        other.birthday = birthday.clone()
    }
    ...
}

class Customer extends Person {
    ...
    public Object clone() throws CloneNotSupportedException {
        def result = new Customer()
        copyOrCloneMembers(result)
        return result
    }
    protected void copyOrCloneMembers(other) {
        super.copyOrCloneMembers(other)
        other.favItems = favItems.clone()
    }
    ...
}

You would typically use this style only for base classes where you didn't want the normal Object clone() method to be called, and you would typically need to use the SIMPLE style for any child classes.
As a final example, if your class already implements the Serializable or Externalizable interface, you can choose the following cloning style:

@AutoClone(style=SERIALIZATION)
class Person implements Serializable {
    String first, last
    Date birthday
}

which outputs a class with the following form:

class Person implements Cloneable, Serializable {
    ...
    Object clone() throws CloneNotSupportedException {
        def baos = new ByteArrayOutputStream()
        baos.withObjectOutputStream{ it.writeObject(this) }
        def bais = new ByteArrayInputStream(baos.toByteArray())
        bais.withObjectInputStream(getClass().classLoader){ it.readObject() }
    }
    ...
}

This will output an error if your class doesn't implement one of Serializable or Externalizable. It will typically be significantly slower than the other approaches, it doesn't allow fields to be final, and it will take up more memory, as even immutable classes like String will be cloned; but it does have the advantage that it performs deep cloning automatically.
Further references on cloning:

See also: AutoCloneStyle, AutoExternalize
public abstract String[] excludes

List of field and/or property names to exclude from cloning.

NOTE: When using the CLONE style, property (and/or field) copying might occur as part of calling super.clone(), which will ignore this list. You can then use this list to streamline the provided clone() implementation by selecting which Cloneable properties (and/or fields) will have a subsequent call to their clone() method. If you have immutable properties (and/or fields) this can be useful, as the extra clone() will not be necessary and cloning will be more efficient.

NOTE: This doesn't affect property (and/or field) copying that might occur as part of serialization when using the SERIALIZATION style, i.e. this flag is ignored; instead adjust your serialization code to include or exclude the desired properties (and/or fields) which should carry over during cloning.
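Since excludes is a Groovy annotation element, its effect is easiest to picture as the clone() it would leave you with. The sketch below is hand-written Java, and the Event class and its fields are invented for illustration: an "excluded" property simply keeps the reference that super.clone()'s bit-wise copy already provided, while the remaining Cloneable property still gets its own clone() call.

```java
import java.util.Date;

// Hypothetical class illustrating the effect of excluding a property.
public class Event implements Cloneable {
    String title;            // treated as excluded: the shared reference is kept
    Date when = new Date(0); // not excluded: re-cloned below

    @Override
    public Event clone() {
        try {
            Event result = (Event) super.clone(); // bit-wise copy of both fields
            // No extra clone() call for 'title' -- that is what excluding it means.
            result.when = (Date) this.when.clone();
            return result;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);
        }
    }
}
```

Because String is immutable, sharing the reference is harmless here, which is exactly the efficiency case the first NOTE above describes.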
public abstract boolean includeFields

Include fields as well as properties when cloning.

NOTE: When using the CLONE style, field copying might occur as part of calling super.clone() and might be all you require; if you turn on this flag, the provided clone() implementation will also subsequently call clone() for each Cloneable field, which can be useful if you have mutable fields.

NOTE: This doesn't affect field copying that might occur as part of serialization when using the SERIALIZATION style, i.e. this flag is ignored; instead adjust your serialization code to include or exclude your fields.
public abstract AutoCloneStyle style