AttributeError: 'ec2.ServiceResource' object has no attribute 'delete_vpc'

This is the error I am getting. I am using the following code:

import boto3
ec2 = boto3.resource('ec2')
ec2.delete_vpc(VpcId='vpc-id')

Can someone help me with this?

This is the code to delete the VPC. The delete_vpc call belongs to the low-level EC2 client, not to the ec2.ServiceResource object, so you need to reach the client through the resource's meta attribute. Make sure you don't delete the default VPC, as it is created by Amazon and can't be recreated by you.

import boto3
ec2 = boto3.resource('ec2')
ec2client = ec2.meta.client
ec2client.delete_vpc(VpcId='vpc-id')

This way you can delete the VPC. Hope this helps.
https://www.edureka.co/community/32160/getting-error-while-deleting-my-vpc?show=32161
CC-MAIN-2021-04
refinedweb
PROBLEM LINK: Contest Division 1, Contest Division 2, Contest Division 3, Contest Division 4, Practice

Setter: Utkarsh Darolia
Testers: Nishank Suresh and Abhinav Sharma
Editorialist: Utkarsh Darolia

DIFFICULTY
1742

PREREQUISITES
Sorting

PROBLEM
There are N soldiers in an army, and the i^{th} soldier has 3 parameters: Attack points A_i, Defense points 1000 - A_i, and Soldier type, ATTACK or DEFENSE. For the whole army:
- The Attack value of the army is defined as the sum of attack points of all ATTACK-type soldiers.
- The Defense value of the army is defined as the sum of defense points of all DEFENSE-type soldiers.
- The Rating of the army is defined as the product of the Attack value and the Defense value of the army.
The task is to assign a soldier type to each of the soldiers so as to maximize the rating of the army, and to find that maximum rating.

EXPLANATION
To solve this problem, let's first solve a subproblem: find the maximum achievable rating given the counts of ATTACK-type and DEFENSE-type soldiers. Define rating(r) as the maximum possible rating that can be achieved, given that we assign some r soldiers to ATTACK type and the remaining n-r soldiers to DEFENSE type. The solution to this subproblem can be constructed as follows:
- Sort the Attack points array A in descending order.
- Assign the first r soldiers to ATTACK type and the remaining n-r to DEFENSE type.
- This way, the attack points of every ATTACK-type soldier are greater than or equal to the attack points of every DEFENSE-type soldier.

Proof that the proposed approach gives the assignment with maximum rating
Let A[i] be the Attack points of the soldier at index i and D[i] be the Defense points of the soldier at index i. After following the proposed approach, we can divide the soldiers into 2 groups, Atk and Def:
- Atk_{1}, Atk_{2}, …, Atk_{r} are the indexes of the ATTACK-type soldiers.
- Def_{1}, Def_{2}, …, Def_{n-r} are the indexes of the DEFENSE-type soldiers.
- A[Atk_{i}] \geq A[Def_{j}] and D[Atk_{i}] \leq D[Def_{j}] (1 \leq i \leq r; 1 \leq j \leq n-r)

Assume, for contradiction, that the proposed approach is not optimal, i.e. a better value of rating(r) can be achieved by swapping the types of soldiers Atk_{i} and Def_{j} (1 \leq i \leq r; 1 \leq j \leq n-r). Let the current Attack value and Defense value be Atkval and Defval respectively. Then the new values after swapping the types of these two soldiers would be:
- New Attack value = Atkval - (A[Atk_{i}] - A[Def_{j}]). As A[Atk_{i}] \geq A[Def_{j}], the new Attack value is less than or equal to Atkval.
- New Defense value = Defval - (D[Def_{j}] - D[Atk_{i}]). As D[Atk_{i}] \leq D[Def_{j}], the new Defense value is less than or equal to Defval.
As both the new Attack value and the new Defense value are less than or equal to Atkval and Defval respectively, the new Rating is also less than or equal to the initial rating. This contradicts the assumption that the swap improves rating(r), so the proposed approach always gives the maximum value of rating(r).

Let B be the array obtained by sorting the Attack points array A in descending order, Sum(r) be the sum of attack points of the first r soldiers in array B, and TotSum be the total sum of attack points of all soldiers. We can simplify the expression for rating(r) using the steps below:
- Attack value \times Defense value
- [B_{1}+B_{2}+...+B_{r}] \times [(1000-B_{r+1})+(1000-B_{r+2})+...+(1000-B_{n})]
- [Sum(r)] \times [(1000 \times (n-r))-(B_{r+1}+B_{r+2}+...+B_{n})]
- [Sum(r)] \times [(1000 \times (n-r))-(TotSum-Sum(r))]
Our final answer is the maximum value of rating(r) where r ranges from 0 to n. This can be implemented by iterating over array B, computing Sum(r) for every r as Sum(r-1)+B_{r}, and substituting the required values into the expression for rating(r) formulated above.

TIME COMPLEXITY
The overall time complexity depends on the sorting technique used. Everything except sorting can be done in O(N).
If we use a fast sorting technique, an overall time complexity of O(N log(N)) per test case can be achieved.

SOLUTIONS
Editorialist's Solution

// Utkarsh Darolia
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);
    int t;
    cin >> t;
    for (int i = 1; i <= t; ++i) {
        int n;
        cin >> n;
        int TotSum = 0, Sum = 0;
        int A[n];
        for (int j = 0; j < n; ++j) {
            cin >> A[j];
            TotSum += A[j];
        }
        sort(A, A + n, greater<int>()); // greater<int>() is used for sorting in descending order
        long long finalAnswer = 0;
        for (int r = 0; r < n; ++r) {
            finalAnswer = max(finalAnswer, (long long)(Sum) * ((1000 * (n - r)) - (TotSum - Sum)));
            Sum += A[r];
        }
        cout << finalAnswer << endl;
    }
    return 0;
}

Feel free to share your approach. Suggestions are welcome.
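The sorted closed form can also be validated against a brute force over all 2^N type assignments. This is a Python sketch added for illustration, not part of the original editorial; the function names are ours, and it assumes 0 \leq A_i \leq 1000 as the defense definition implies:

```python
import random

def best_rating_bruteforce(A):
    """Try every ATTACK/DEFENSE assignment (bit i set -> soldier i attacks)."""
    n = len(A)
    best = 0
    for mask in range(1 << n):
        attack = sum(A[i] for i in range(n) if (mask >> i) & 1)
        defense = sum(1000 - A[i] for i in range(n) if not (mask >> i) & 1)
        best = max(best, attack * defense)
    return best

def best_rating_sorted(A):
    """Sort descending and take the max over r of
    Sum(r) * (1000*(n-r) - (TotSum - Sum(r)))."""
    B = sorted(A, reverse=True)
    n, tot = len(B), sum(B)
    best, s = 0, 0
    for r in range(n + 1):
        best = max(best, s * (1000 * (n - r) - (tot - s)))
        if r < n:
            s += B[r]
    return best

# cross-check the two on small random inputs
random.seed(0)
for _ in range(300):
    A = [random.randint(0, 1000) for _ in range(random.randint(1, 8))]
    assert best_rating_bruteforce(A) == best_rating_sorted(A)
```

For example, two soldiers with A = [500, 500] give a best rating of 500 * 500 = 250000 under both computations.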
https://discuss.codechef.com/t/armtrn-editoriale/102033
CC-MAIN-2022-33
refinedweb
Proof of Existence

Chainscript Segments can be notarized in a blockchain as a proof of existence or a proof of anteriority. Proofs of existence are often backed by a Merkle proof and a blockchain transaction containing the Merkle root (e.g. in the case of Bitcoin, using OP_RETURN). The Merkle proof can be saved in the meta.evidences of the Segment. This allows others to validate the audit trail of the Segment using methods that are publicly accessible.

Merkle trees

A Merkle tree is a data structure that can take n hashes and represent them with a single hash. A Merkle proof is generated by taking the hash of one leaf of the tree and hashing it with its neighbouring hash; the resulting parent hash is then hashed with its neighbour, and so on until the root hash is calculated. Leaf hashes can therefore be verified against the root hash by following the tree from the leaf and its neighbours back to the root. Image via Wikipedia. Learn more about Merkle trees on Wikipedia.

Merkle Proofs

The Merkle root represents all the hashes in the tree: if any one hash is modified, the root no longer matches and the proof is invalid. Thus, with the Merkle path corresponding to the blockchain transaction, one can guarantee the proof of existence of the unmodified dataset. From the Merkle proof, the path can be validated by concatenating the left and right hashes and validating the result against the parent. This step is repeated until the Merkle root is reached. The transaction hash and blockchain are included in the Objective Evidence for validating the Merkle root. As an example, on the Bitcoin blockchain the Merkle root is stored using the OP_RETURN opcode.
The format is the following:

{
  "merkleRoot": "855e019f67bc5ba09b7efb3a9e169bbe14e6abafa5b02d686607e25107db1b4c",
  "merklePath": [
    {
      "left": "fa871e81c469fa947eacd40f89dc5627a0cb3a96551a651c034787c752d4448e",
      "right": "aa1331b6f1155fff5865116e5adcb6e4cdf0435fc2bd68053a4d24795f47dad3",
      "parent": "39b43545134a903dd16be3d0b5391474ef82241acd33e8496b103ee8d378efea"
    },
    {
      "left": "39b43545134a903dd16be3d0b5391474ef82241acd33e8496b103ee8d378efea",
      "right": "9fb3164966231fca20c75a562d0b4c1e7adc68738ac6c179dd1ecf921845dbfd",
      "parent": "fedacd347e84d39ab3122f78d762dbbc79646704deb90fbb118b140c85f4df96"
    },
    {
      "left": "d7fa0ed22e4b58eb227a826a1d2f0ef564f5052172bd36fe612a2fa214d452c8",
      "right": "fedacd347e84d39ab3122f78d762dbbc79646704deb90fbb118b140c85f4df96",
      "parent": "7e7dec576382e948f4980e4295f6ef9b89748e4586723a7b9481ee20cfb14db5"
    },
    {
      "left": "2c0ada2bdea45977acd0a4d56fda6f970f34df32a1e507e99cd7284c0038b22a",
      "right": "7e7dec576382e948f4980e4295f6ef9b89748e4586723a7b9481ee20cfb14db5",
      "parent": "fe7214757ed9d701a2ed112d01c1b17a734094444ee067dd2ee4c14907419a24"
    },
    {
      "left": "dae80a5aa2b397285ce6ca079c78dad737099d293a8783d4b8680efff57e921e",
      "right": "fe7214757ed9d701a2ed112d01c1b17a734094444ee067dd2ee4c14907419a24",
      "parent": "5f9343488c91f22a0cfea5c9797e244c1a6d20872048e157e515a1172678ebcf"
    }
  ]
}

You can verify that a Merkle proof is valid on your own, without using any proprietary tools. It uses standard cryptographic functions which are available in many programming languages. Here is a script to do it using Node.js:

import crypto from 'crypto';

// The `canonicaljson` package is required.
// You can install it with `npm install canonicaljson`.
import stringify from 'canonicaljson';

// Computes the canonical hash of a JSON object.
const hashJson = obj => {
  const hash = crypto.createHash('sha256');
  hash.update(stringify(obj));
  return hash.digest();
};

// Returns whether the Merkle proof of a Chainscript segment is valid.
const verifyMerkleProof = segment => {
  let hash = hashJson(segment.link);
  const evidence = segment.meta.evidences[0];
  const merklePath = evidence.merklePath;

  // Check levels one by one
  for (let i = 0; i < merklePath.length; i++) {
    const level = merklePath[i];
    const left = new Buffer(level.left, 'hex');
    // `right` may be absent when a node has no sibling at this level.
    const right = level.right ? new Buffer(level.right, 'hex') : null;

    // Make sure hash is in the current level
    if (hash.compare(left) !== 0 && (!right || hash.compare(right) !== 0)) {
      return false;
    }

    // Compute parent hash
    let parent;
    if (right) {
      parent = crypto
        .createHash('sha256')
        .update(Buffer.concat([left, right]))
        .digest();
    } else {
      parent = left;
    }

    // Make sure parent hash is correct
    if (parent.compare(new Buffer(level.parent, 'hex')) !== 0) {
      return false;
    }

    hash = parent;
  }

  // Make sure root is correct
  return hash.compare(new Buffer(evidence.merkleRoot, 'hex')) === 0;
};

export default verifyMerkleProof;
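The same check can be reproduced with nothing but a standard library. Below is a small Python sketch (the names merkle_root and verify_path are ours, not part of the Chainscript tooling) that follows the same convention as the Node.js script above: parent = SHA-256(left + right), with a node that has no sibling promoted unchanged:

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold each level pairwise until one hash remains; an odd node is
    promoted unchanged (the single-child case of the script above)."""
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(sha256(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])
        level = nxt
    return level[0]

def verify_path(leaf, path, root):
    """Walk a merklePath-style list of {left, right, parent} entries."""
    h = leaf
    for step in path:
        if h != step["left"] and h != step.get("right"):
            return False  # our hash is not present at this level
        if step.get("right") is not None:
            parent = sha256(step["left"] + step["right"])
        else:
            parent = step["left"]
        if parent != step["parent"]:
            return False  # the claimed parent does not match
        h = parent
    return h == root

# demo: four leaves, prove membership of the first one
leaves = [sha256(bytes([i])) for i in range(4)]
h01 = sha256(leaves[0] + leaves[1])
h23 = sha256(leaves[2] + leaves[3])
root = merkle_root(leaves)
path = [
    {"left": leaves[0], "right": leaves[1], "parent": h01},
    {"left": h01, "right": h23, "parent": root},
]
ok = verify_path(leaves[0], path, root)  # True
```

Tampering with the leaf, any path entry, or the root makes verify_path return False, which is the property a proof of existence relies on.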
https://indigoframework.com/documentation/v0.2.0/references/proof-of-existence/
CC-MAIN-2018-09
refinedweb
Dear Rooters,

While fitting functions to histograms, I came across some strange behaviour when the binning of the histogram is set automatically by ROOT. Here is a small example reproducing the problem:

from ROOT import TCanvas, TH1D, TRandom2

h1 = TH1D('h1', '', 4, 0, 0)
#h1 = TH1D('h1', '', 4, -2, 2)  # use this instead to make it work

# just something to put into the histogram:
rg = TRandom2()
for i in range(100):
    h1.Fill(rg.Gaus(0, 1))

c1 = TCanvas('c1', '', 600, 600)
h1.Draw()
h1.Fit('gaus')
f = h1.FindObject('gaus')

# printing the reference:
for i in range(10):
    print f

Two problems appear:
- The fit is not drawn.
- The TF1 object from h1.FindObject(...) seems to get lost while the program runs. Printing the reference sometimes results in 'None'. (The script may have to be rerun a few times to reproduce this.)

Regards,
Stefan
https://root-forum.cern.ch/t/automatic-histogram-bins-and-fit/10699
CC-MAIN-2022-27
refinedweb
You should read through the Doxygen documentation on the BehaviorBase class. If you're rusty or new to C++, there is a C++ review available, which goes into more detail regarding the boilerplate given below. First, use your favorite code editor to create a file named SampleBehavior.h (or whatever you want to call it) in your project directory. By convention, we've been naming our classes <Name>Behavior for behaviors, <Name>MC for MotionCommands, or <Name>Control for Controls. Set up the usual C++ boilerplate, inheriting from the BehaviorBase class. This class defines the interface which all Behaviors must follow, and provides some simple housekeeping code.

//-*-c++-*-
#ifndef INCLUDED_SampleBehavior_h_
#define INCLUDED_SampleBehavior_h_

#include "Behaviors/BehaviorBase.h"

class SampleBehavior : public BehaviorBase {
public:
  SampleBehavior() : BehaviorBase("SampleBehavior") {}
  virtual void DoStart() { }
  virtual void DoStop() { }
};

#endif

If this looks scary to you, look over the C++ review. These functions define the BehaviorBase interface. You will see -*-c++-*- comments at the top of many of our header files - this is a flag to the Emacs editor that it should use C++ mode instead of C mode for syntax highlighting and formatting. You can ignore it. At this point, you have a valid behavior. All that remains is to add a call to it from the framework so you can activate it. The easy way to do that is to edit project/UserBehaviors.h. You will need to add two lines: one to #include your header file, and another to add a menu item. This file is #included by project/StartupBehavior_SetupModeSwitch.cc, which lists all of the "Mode Switch" menu entries. You could instead put your behavior in that file, following the example of the demo behaviors already listed there. Assuming you have already configured Environment.conf, now just go to the root Tekkotsu directory and type make. If you have a multi-core machine, you can pass -jN to make, where N is the number of cores on your machine.
This will allow make to compile multiple files in parallel. If you are building for the Aibo, make update will compile and copy the result to the memory stick. Then just put the memory stick into the Aibo and turn it on! (Be sure to "unpause" by double-tapping the back button to turn off the emergency stop mode once it boots up.) On other platforms, if you are compiling off-board the robot, you will need to copy the executable tekkotsu-TGT to the robot. See the documentation specific to your robot or the compilation section of the download page. You can put a cout in the DoStart() and DoStop() functions to print "Hello world". (Remember, on the Aibo, telnet to port 59000 on the robot to see the output. On other platforms, the output will appear on the console where you launched Tekkotsu.) If adding cout gives you compilation problems, don't forget that cout is part of the std namespace. Adding sound is really easy. Take a look at SoundManager to make a dog-like version of "Hello World" (a.k.a. "Woof! Woof!"). There are already sounds included in project/ms/data/sound. Shawn Turner of the University at Albany has written two tutorials, Introduction to Behaviors using Tekkotsu and Writing Finite State Automata in Tekkotsu, which may be of interest. (Local cached PDFs are available from the Media page.) Part 2: Adding Behavior Functionality will show you one way to control the LEDs.
http://tekkotsu.org/FirstBehavior.html
CC-MAIN-2017-17
refinedweb
If one instead takes steps proportional to the positive of the gradient, the procedure is known as gradient ascent. Gradient descent is also known as steepest descent, or the method of steepest descent. When known as the latter, gradient descent should not be confused with the method of steepest descent for approximating integrals.

Description

Gradient descent is based on the observation that if a real-valued function F(x) is defined and differentiable in a neighborhood of a point a, then F decreases fastest if one goes from a in the direction of the negative gradient of F at a, -∇F(a). It follows that, if b = a - γ∇F(a) for γ > 0 a small enough number, then F(b) ≤ F(a). With this observation in mind, one starts with a guess x_0 for a local minimum of F, and considers the sequence x_0, x_1, x_2, … such that x_{n+1} = x_n - γ_n ∇F(x_n), n ≥ 0. We have F(x_0) ≥ F(x_1) ≥ F(x_2) ≥ …, so hopefully the sequence converges to the desired local minimum. Note that the value of the step size γ is allowed to change at every iteration. This process is illustrated in the picture to the right. Here F is assumed to be defined on the plane, and its graph is assumed to have a bowl shape. The blue curves are the contour lines, that is, the regions on which the value of F is constant. A red arrow originating at a point shows the direction of the negative gradient at that point. Note that the (negative) gradient at a point is orthogonal to the contour line going through that point. We see that gradient descent leads us to the bottom of the bowl, that is, to the point where the value of the function F is minimal.

Examples

Gradient descent has problems with pathological functions such as the Rosenbrock function shown here. The Rosenbrock function has a narrow curved valley which contains the minimum. The bottom of the valley is very flat. Because of the curved flat valley, the optimization zig-zags slowly with small step sizes towards the minimum. Gradient descent can also be used to solve a system of nonlinear equations. Below is an example that shows how to use gradient descent to solve for three unknown variables, x1, x2, and x3.
This example shows one iteration of gradient descent. Consider a nonlinear system of equations. There must be an initial assumption as to what the solutions for x1, x2, and x3 are; in this case, assume that all the solutions are zero. The next step is to form the Jacobian matrix of the given system of equations. Plug the initial assumption of zero into the Jacobian matrix and the original system to get the values below. Now the gradient and z0 must be found. This is done by using the formula below and the previous two calculations. The value of z0 is found by simply calculating the p-norm of the gradient, where p is equal to 2 (the Euclidean norm). Now the value of z must be found. The next step is to find solutions for g1, g2, and g3. First, a value must be chosen for α1. Set α1 to zero and α3 to one. There is a condition that must be met when the solutions for g1 and g3 are found: g3 must be less than g1, and if this is true, we accept α1 and α3. If this is not true, then a different value must be chosen for α3, one which satisfies this condition; try choosing a smaller value for α3, like 1/2 or 1/4. To find α2, all that needs to be done is to divide α3 by 2 → α2 = α3 / 2. With α2 found, g2 can be calculated. The next step is to form a Newton polynomial, Newton's forward divided-difference interpolating polynomial. Using all the calculated values of α1, α2, α3, g1, g2, and g3, the unknown values of h1, h2, and h3 can be found, as shown below. With all the values for h1, h2, and h3 known, the next step is to plug them into the polynomial and solve for α. A value for g0 must be found that satisfies the condition of being less than g1 and g3. If this condition is met, then use the following equation to solve for x(1), which involves the initial x(0), α (which becomes α0), and z. And now there is a set of solutions that satisfies the system of nonlinear equations.
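The article's own three-variable system appeared as images that are not reproduced here, so the sketch below applies the same idea, gradient descent on the squared residual norm g(x) = ||F(x)||^2, to a stand-in two-equation linear system of our choosing (x1 + x2 = 3, x1 - x2 = 1, with known solution x1 = 2, x2 = 1):

```python
def F(x1, x2):
    """Residuals of the stand-in system x1 + x2 = 3, x1 - x2 = 1."""
    return (x1 + x2 - 3.0, x1 - x2 - 1.0)

def grad_g(x1, x2):
    """Gradient of g = f1^2 + f2^2 via the chain rule through each residual."""
    f1, f2 = F(x1, x2)
    return (2 * f1 + 2 * f2, 2 * f1 - 2 * f2)

x1, x2 = 0.0, 0.0   # initial assumption: all solutions zero, as in the article
gamma = 0.1         # fixed step size

for _ in range(200):
    g1, g2 = grad_g(x1, x2)
    x1, x2 = x1 - gamma * g1, x2 - gamma * g2

# converges to x1 = 2, x2 = 1
```

Minimizing the squared residual norm is the standard way to cast a root-finding problem as the kind of minimization gradient descent handles; at a solution of the system, g reaches its minimum value of zero.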
Gradient descent works in spaces of any number of dimensions, even in infinite-dimensional ones. In the latter case the search space is typically a function space, and one calculates the Gâteaux derivative of the functional to be minimized to determine the descent direction. Two weaknesses of gradient descent are:
- The algorithm can take many iterations to converge towards a local minimum, if the curvature in different directions is very different.
- Finding the optimal γ per step can be time-consuming. Conversely, using a fixed γ can yield poor results.
Methods based on Newton's method and inversion of the Hessian using conjugate gradient techniques are often a better alternative. A more powerful algorithm is given by the BFGS method, which consists in calculating on every step a matrix by which the gradient vector is multiplied to go into a "better" direction, combined with a more sophisticated line search algorithm, to find the "best" value of γ. Gradient descent is in fact Euler's method for solving ordinary differential equations applied to a gradient flow. As the goal is to find the minimum, not the flow line, the error in finite methods is less significant.

A computational example

The gradient descent algorithm is applied to find a local minimum of the function f(x) = x^4 - 3x^3 + 2, with derivative f'(x) = 4x^3 - 9x^2. Here is an implementation in the C programming language.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main () {
    // From calculation, we expect that the local minimum occurs at x = 9/4
    // The algorithm starts at x = 6
    double xOld = 0;
    double xNew = 6;
    double eps = 0.01;          // step size
    double precision = 0.00001;

    while (fabs(xNew - xOld) > precision) {
        xOld = xNew;
        xNew = xNew - eps * (4*xNew*xNew*xNew - 9*xNew*xNew);
    }

    printf("Local minimum occurs at %lg\n", xNew);
}

With this precision, the algorithm converges to a local minimum at x = 2.24996 in 70 iterations.
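The fixed-step C program above can be hardened by checking that the function value actually decreases at each iteration and shrinking the step otherwise. Here is a hedged Python sketch of that idea for the same f(x) = x^4 - 3x^3 + 2; the name grad_descent and the backtracking constants are our own choices, not from the original article:

```python
def grad_descent(f, df, x0, step=0.01, tol=1e-5, max_iter=10000):
    """Gradient descent with a decrease check: if a step would not
    reduce f, the step size is halved (simple backtracking)."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        s = step
        # shrink the step until the function value actually decreases
        while f(x - s * g) >= f(x) and s > 1e-12:
            s *= 0.5
        x_new = x - s * g
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# same function as the C example: local minimum expected at x = 9/4
f = lambda x: x**4 - 3 * x**3 + 2
df = lambda x: 4 * x**3 - 9 * x**2

x_min = grad_descent(f, df, 6.0)  # approximately 2.25
```

The decrease check guarantees that f(x_0) >= f(x_1) >= f(x_2) >= ... holds by construction, instead of relying on the fixed step being "small enough" as in the C version.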
A more robust implementation of the algorithm would also check whether the function value indeed decreases at every iteration, and would make the step size smaller otherwise. One can also use an adaptive step size, which may make the algorithm converge faster.

This entry is from Wikipedia, the leading user-contributed encyclopedia.
http://www.answers.com/topic/gradient-descent
crawl-002
refinedweb
NAME | SYNOPSIS | DESCRIPTION | OPTIONS | EXAMPLES | FILES | EXIT STATUS | ATTRIBUTES | SEE ALSO | NOTES In a traditional disk set configuration, multiple hosts are physically connected to the same set of disks. When one host fails, the other host has exclusive access to the disks. The metaset command administers sets of disks shared for exclusive (but not concurrent) access among such hosts. While disk sets enable a high-availability configuration, Solaris Volume Manager itself does not actually provide a high-availability environment. A single-node disk set configuration manages storage on a SAN or fabric-attached storage, or provides namespace control and state database replica management for a specified set of disks. A multi-node disk set configuration, created with the -M option to metaset, provides for a disk set with shared ownership among multiple hosts. All owners can simultaneously access disks in the set. This option exists to support high-availability applications and does not attempt to protect against overlapping writes. Protection against overlapping writes is the responsibility of the application that issues the writes. Multi-node disk sets do not support RAID 5 volumes or transactional volumes (trans metadevices). Shared metadevices and hot spare pools can be created only from drives which are in the disk set created by metaset. To create a set, one or more hosts must be added to the set. To create metadevices within the set, one or more devices must be added to the set. The drivename specified must be in the form cxtxdx with no slice specified. When you add a new disk to any disk set, Solaris Volume Manager checks the disk format. If necessary, it repartitions the disk to ensure that the disk has an appropriately configured reserved slice (slice 7 on a VTOC labelled device or slice 6 on an EFI labelled device), with adequate space for a state database replica. 
The precise size of slice 7 (or slice 6 on an EFI labelled device) depends on the disk geometry. For traditional disk sets, the slice is no less than 4 Mbytes, and probably closer to 6 Mbytes, depending on where the cylinder boundaries lie. For multi-node disk sets, the slice is a minimum of 256 Mbytes. The requirements for the reserved slice are: the slice must start at sector 0; it must include enough space for the disk label and state database replicas; it cannot be mounted; and it must not overlap with any other slices, including slice 2. If the existing partition table does not meet these criteria, Solaris Volume Manager repartitions the disk.

The following options are supported:

-a
Adds drives or hosts to the named disk set. A drive is not accepted if it cannot be found on all hosts specified as part of the set. This means that if a host within the specified set is unreachable due to network problems, or is administratively down, the add fails.

-m
Adds (-a) or deletes (-d) mediator hosts. A maximum of two mediator hosts can be specified for the named disk set. For deleting a mediator host, specify only the nodename of that host as the option to -m. In a single metaset command you can add or delete two mediator hosts. See EXAMPLES.

-A
Specify auto-take status for a disk set. If auto-take is enabled for a set, the disk set is automatically taken at boot, and file systems on the disk set can be mounted at that time.

-C
Do not interact with the Cluster Framework when used in a Sun Cluster 3 environment. In effect, this means do not modify the Cluster Configuration Repository. These options should only be used to fix a broken disk set configuration. This option is not for use with a multi-node disk set.

-d
Deletes drives or hosts from the named disk set. This option fails on a multi-node disk set if attempting to withdraw the master node while other nodes are in the set.

-f
Forces the operation. The -f option is also used to delete the last drive in the disk set, because this drive would implicitly contain the last state database replica. The -f option is also used for deleting hosts from a set.
When specified with a partial list of hosts, it can be used for one-host administration.

-h hostname
Specifies one or more hosts to be added to, or deleted from, a disk set.

-j
Joins a host to the owner list for a multi-node disk set. The concepts of take and release, used with traditional disk sets, do not apply to multi-node sets, because multiple owners are allowed. As a host boots and is brought online, it must go through three configuration levels to be able to use a multi-node set. First, it must be included in the cluster nodelist, which happens automatically in a cluster or single-node situation. Second, it must be added to the multi-node disk set with the -a -h options documented elsewhere in this man page. Finally, it must be joined (-j) to the multi-node sets that the host has been added to. In a single-node situation, joining the node to the disk set starts any necessary resynchronizations.

-l length
Sets the size (in blocks) for the metadevice state database replica. The length can only be set when adding a new drive; it cannot be changed on an existing drive. The default (and maximum) size is 8192 blocks, which should be appropriate for most configurations. The minimum size of the length is 64 blocks.

-M
Specifies that the disk set to be created or modified is a multi-node disk set that supports multiple concurrent owners. The -M option is required when creating a multi-node disk set, but optional on all other operations on a multi-node disk set. Existing disk sets cannot be converted to multi-node sets.

-o
Returns ownership status of the disk set. This option is not for use with a multi-node disk set.

-r
Releases ownership of a disk set. All of the disks within the set are released. The metadevices set up within the set are no longer accessible. This option is not for use with a multi-node disk set.

-s setname
Specifies the name of a disk set on which metaset works. If no setname is specified, all disk sets are returned.

-t
Takes ownership of a disk set safely. If metaset finds that another host owns the set, the take fails. This option is not for use with a multi-node disk set.

-w
Withdraws a host from the owner list for a multi-node disk set.
The concepts of take and release, used with traditional disk sets, do not apply to multi-node sets, because multiple owners are allowed. Instead of releasing a set, a host withdraws from it; a withdrawn host may still appear joined from other hosts in the set until a reconfiguration cycle occurs. The command metaset -w withdraws from ownership of all multi-node sets that the host is a member of. This option fails if attempting to withdraw the master node while other nodes are in the set.

EXAMPLES

This example defines a disk set. This example adds drives to a disk set. Note that there is no slice identifier ("sx") at the end of the drive names. The following command adds two mediator hosts to the specified disk set. The following command purges the disk set relo-red from the node. This example defines a multi-node disk set. The name of the disk set is blue. The names of the first and second hosts added to the set are hahost1 and hahost2, respectively. The hostname is found in /etc/nodename. Adding the first host creates the multi-node disk set. A disk set can be created with just one host, with additional hosts added later. The last host cannot be deleted until all of the drives within the set have been deleted.

FILES

Contains the list of metadevice configurations.

SEE ALSO

metaroot(1M), metastat(1M), metasync(1M), metattach(1M), md.cf(4), md.tab(4), mddb.cf(4), attributes(5), Solaris Volume Manager Administration Guide

NOTES

Disk set administration, including the addition and deletion of hosts and drives, requires all hosts in the set to be accessible from the network.
http://docs.oracle.com/cd/E19683-01/817-3937/6mjgeaft8/index.html
CC-MAIN-2016-44
refinedweb
QDialog doesn't close

Hi,
I'm trying to close a QDialog from a QMessageBox using the following code: ();
There is no error message, but it doesn't close or do anything. The message box is generated by the same QDialog I'm trying to close, and the dialog was opened using Additem.exec(). What am I doing incorrectly? Thank you for your help.

Do you see the debug message? That is to say, is your code not entering the if clause, or is just the reject() call being ignored?

@micland It is entering the if clause but ignores the reject(). It also ignores close(), finished(), accept(). No error message, it just does nothing.

It is entering the if clause but ignores the reject(). It also ignores close(), finished(), accept(). No error message just does nothing.

Ok, so the button is correctly evaluated. Did you eventually override QDialog::closeEvent(...)?

@gabor53 No, you don't have to - but if you do, you can prevent the dialog from being closed by rejecting the event. That was my question, whether you did this accidentally. So I have no idea what your problem could be... :-/

- mrjj Lifetime Qt Champion last edited by
Hi, is the message box code inside the QDialog? There is no way it can ignore this->close(), unless you override void QDialog::closeEvent(QCloseEvent *e) and ignore the event.

Hi,
Do you mind providing the whole function? Is it a slot? How is it called/connected? As a side question, why don't you use the standard buttons?
Kind regards.

@kshegunov Hi, I tried the standard buttons but they didn't work either.
Here is the whole function:

void Additem:) {
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE", "add");
    db.setDatabaseName("C:/Programming/Projects/FolkFriends/db.db");
    bool OKadd = db.open();
    QSqlQuery querys(db);
    // if(!db.open())
    if (OKadd == false) {
        qDebug() << "The database is not open (submit)!";
    } else {
        qDebug() << "The database is open (submit)!";
    }
    // qDebug() << "Original data " << byteArray.size() << "bytes.";
    // qDebug() << "Byte size: " << byteArray.size();
    // qDebug() << "sID (from FunctAdd): " << sIDr;
    // qDebug() << "name (from FunctAdd): " << namer;
    querys.prepare("INSERT INTO Items (ID, Name, Pic, Description, Month, Day, Year, History, Age, Notes, Color, Material, Signed, What) VALUES(:ID, :Name, :Pic, :Description, :Month, :Day, :Year, :History, :Age, :Notes, :Color, :Material, :Signed, :What)");
    querys.bindValue(":ID", sIDr);
    querys.bindValue(":Name", namer);
    querys.bindValue(":Pic", byteArrayr);
    querys.bindValue(":Description", descriptionr);
    querys.bindValue(":Month", monthr);
    querys.bindValue(":Day", dayr);
    querys.bindValue(":Year", yearr);
    querys.bindValue(":History", historyr);
    querys.bindValue(":Age", ager);
    querys.bindValue(":Notes", notesr);
    querys.bindValue(":Color", newColorr);
    querys.bindValue(":Material", newMaterialr);
    querys.bindValue(":Signed", newSignedbyr);
    querys.bindValue(":What", newWhatr);
    bool result = querys.exec();
    if (!result) {
        ();
    }
}
db.close();
}
QSqlDatabase::removeDatabase("add");
}

void Additem::close() {
    qDebug() << "Close was called";
    Additem::reject();
}

Thank you.

@gabor53 You don't need to do all of that to use QMessageBox.
The easiest way (for me) would be to call the message box from the conditional error test, and use the buttons already provided (and if you need to add a button, that is still easy too):

bool error(true);
QString my_datas("test");
if(error)
    QMessageBox::warning(this, tr("my title window"), tr("my error message: %1").arg(my_datas));

Also, QMessageBox (like every dialog box by default) sends back an int answer. So a C++ "switch()" can catch the answer after the box is closed. You can also name your buttons directly from the warning member function, at the end of the argument list. Look at the API for QMessageBox, there are some examples.

Also, if your code is outside of the message box (which seems to be the case in your code), your condition will only be read after the message box is closed... if you want to react before that, you have to put the code inside (so inherit from QMessageBox for that), or use "QObject::connect" to pass a signal from the QMessageBox to a slot that does what you want. But if it is just to close the QMessageBox, you absolutely don't need all of this: just call QMessageBox as I showed you, it will already have an OK button inside, and it will work directly out of the box... if not, there is a problem.

Also... if you want to catch QMessageBox button events, you first have to declare a pointer to a QMessageBox so you can use it with QObject::connect (and your class has to have the Q_OBJECT macro somewhere in its declaration to be able to use connections). Example:

QMessageBox *mgx;
mgx = new QMessageBox(this);
mgx->setWindowTitle(tr("my title window"));
mgx->setText(tr("my text"));
mgx->setStyleSheet("QLabel{color: green; font-size: 16;} QWidget{background-color: white;}");
// all the CSS styling you want to give to your content...
should go there, in CSS style code, specifying which object should get which CSS "paint".

connect(mgx, SIGNAL(buttonClicked(QAbstractButton*)), this, SLOT(on_OK_mgx_clicked(QAbstractButton*)));
mgx->open();

Also, you have to declare the member function as a private (or public) slot, and then define it in the cpp file: declare void on_OK_mgx_clicked(QAbstractButton *button); in the header file, then define it in the linked source file. That way you will catch the clicked button, passed as a QAbstractButton pointer, from the clicked event of the QMessageBox.

But I think that is not what you are searching for... this is more complicated than using the default mechanism to print a message with a close button. If you want more features, try it like that first, play around after reading the API, and ask more precisely what you are trying to do with QMessageBox (what the end goal is). Hope that helps.

@jerome_isAviable Thanks. The thing is I want to close the dialog called Additem that called QMessageBox. QMessageBox closes wonderfully but any attempt to close Additem is disregarded.

Are you sure msgBox.clickedButton() isn't NULL? This might happen if the message box was dismissed with a key.

@kshegunov How can I check?

@gabor53 As you'd check any other pointer:

QAbstractButton * clickedButton = msgBox.clickedButton();
if (!clickedButton)
{
    qDebug() << "Well, we are out of luck. "
             << "The message box was dismissed with a key so this check: "
             << "msgBox.clickedButton () == OKButton is meaningless"
             << endl;
}

@kshegunov I did the following (before you posted your last message):

if(msgBox.clickedButton () == NULL)
{
    qDebug() << "ClickButton is null";
}
else
{
    qDebug() << "ClickButton is NOT null.";
}

According to this code the clickedButton() is not NULL.

@gabor53 Okay then, going back to the basics. Can you confirm that this->reject() is called? That is, do you get the debug message "OK was clicked."? Another thing I just noticed: Why do you have an overload for reject()?
This is usually not necessary; can you show that function and its declaration? Kind regards.

@gabor53 Ok, well, I understand better now. So you need to use it like that:

QMessageBox mgx;
int answer = QMessageBox::warning(this, "my fixed title", "my fixed text");
if(answer == QMessageBox::Ok)
    this->close();

That's all. Or with a question to validate:

int answer = QMessageBox::question(
    this,
    tr("Question tag..."),
    tr("read the error:\n%1 ").arg(error_string),
    tr("Close"),
    tr("Not close"));
if(answer == 0) // with this overload the answer is the button index; "Close" is button 0
    this->close();

And... don't forget to tell us (to share with others who will have the same problem) which solution you finally chose that worked for you, and mark the subject as "resolved" (saying thank you for the helpers' time is also a nice option I sometimes like to read or write...). Thank you.

@jerome_isAviable I implemented this:

{;
int answer = msgBox.information (this, tr("Confirmation"), tr("<b><font size = '16' color = 'green'>The Friend was added to the database. To add more press CTRL A."), QMessageBox::Close);
if(answer == QMessageBox::Close)
{
    this->close();
}
}
db.close ();
}
}

There are no error messages, and it enters the right branch (it displays qDebug() << "Entered FunctAdd OK loop.";) but the Additem dialog is still not closed. I have no idea why.

- jerome_isAviable last edited by

@gabor53 Try using the default message box button (don't add QMessageBox::Close in information) and use the condition if(answer == 0) instead. But... the principle is that, if your message box is modal, the message box waits for you to push the "OK" button before the code goes ahead... right? So in the end the condition is pointless (there is only one button... what would the condition be for?). Finally, just drop the condition and call this->close() after QMessageBox::information. Also, there is no need to declare QMessageBox mgx in this situation; directly call QMessageBox::information(....); declaring a variable to hold the object (QMessageBox or other) is useful when you want to add options and reuse it, which you don't need here.
So the code you need should simply be:

QMessageBox::information (this, tr("Confirmation"), tr("<b><font size = '16' color = 'green'>The Friend was added to the database. To add more press CTRL A."));
this->close();

As simple as possible: do it simple, do it easy. In my code (if I understand well what you are doing), I use a QMessageBox::question to ask whether the user wants to create another new object; if he does, I clear the entries or not, or finish. Then from there I catch the answer (which is the number of the button pushed) to trigger an action like closing or not, and a function to clear the entries of this dialog box or not. So when I'm waiting to know what the user wants to do next, I ask him what he wants. (I'm not saying it is a good idea... there are many... but it is an idea.)

Did you try the code I shared with you using QMessageBox::question? That one definitely works. Also, to see what int answer gives you, you could try adding a "qDebug() << answer;". The return depends on the function you call and the button you push. This would help you to see what to catch in the condition to do what you want (I do that when I'm not sure or when something doesn't work like I would like...).

@jerome_isAviable Thank you. I really do understand how this works. I guess the problem is that this->close() doesn't work no matter how I write QMessageBox. It doesn't even work when there is no QMessageBox.

Which means there's no problem with QMessageBox but with the rest of your code.

void Additem::close()
{
    qDebug() << "Close was called";
    Additem::reject ();
}

If close is properly called then you'd see "Close was called" in the debug window. If you don't see that, then close() isn't called.
Use the debugger, put a breakpoint at the line where reject() is supposed to be called; if you don't hit the breakpoint, then you have your answer - the code is not executed at all. Another thing, Additem::reject(); is a pretty strange way of calling members. Why are you using the fully qualified name to call the method, do you have a reject() override?

@kshegunov I see the "Close was called." message and it goes to reject(). But nothing happens.

@gabor53 You did not answer my question though. Additem::reject(); is a pretty strange way of calling members. Why are you using the fully qualified name to call the method, do you have a reject() override?

@kshegunov Sorry. I don't have a reject() override. I used the fully qualified name because I tried the name in different ways just in case one version worked. But none of these work:

reject()
this->reject()
Additem::reject()

That's unusual. Please provide the full class declaration of Additem and how you show the dialog.

@kshegunov Additem declaration (the whole Additem.h):

#ifndef ADDITEM_H
#define ADDITEM_H
#include "review.h"
#include "mainwindow.h"
#include <QBoxLayout>
#include <QDialogButtonBox>
#include <QCalendarWidget>
#include <QComboBox>
#include <QCoreApplication>
#include <QDate>
#include <QDebug>
#include <QDialog>
#include <QDialogButtonBox>
#include <QDir>
#include <QDirModel>
#include <QFileSystemModel>
#include <QFileDialog>
#include <QFlags>
#include <QFocusEvent>
#include <QFont>
#include <QFormLayout>
#include <QGridLayout>
#include <QIODevice>
#include <QLabel>
#include <QLineEdit>
#include <QListView>
#include <QMainWindow>
#include <QMessageBox>
#include <QPushButton>
#include <QScrollArea>
#include <QSize>
#include <QSizePolicy>
#include <QSqlDatabase>
#include <QSqlError>
#include <QSqlQuery>
#include <QStackedLayout>
#include <QString>
#include <QStringList>
#include <QTextEdit>
#include <QtCore>
#include <QtGlobal>
#include <QToolTip>
#include <QTreeView>
#include <QWidget>
#include
<QWindow>

namespace Ui {
class Additem;
}

class Additem : public QDialog
{
    Q_OBJECT

public:
    //db
    //Buttons
    QPushButton *SubmitButton = new QPushButton;
    QPushButton *Image_Button = new QPushButton("Browse");
    QPushButton *Reset_Button = new QPushButton;

    //Functions
    void );
    void close();
    void closeAdditem();
    QString name;

    explicit Additem(QWidget *parent = 0);
    ~Additem();

private:
    Ui::Additem *ui;

    //Functions
    void Addcontent();
    QString getDescription();
    QString getHistory();
    void prepAdd();
    void clear();

    //QLabel
    QLabel* title = new QLabel;
    QLabel *angry_image_Label = new QLabel;
    QLabel *incorrect_Label = new QLabel;
    QLabel *spacer1 = new QLabel;
    QLabel *Label_ID = new QLabel;
    QLabel *ID_Display = new QLabel;
    QLabel *Label_Name = new QLabel;
    QLabel *Label_What = new QLabel;
    QLabel *image_Label = new QLabel;
    QLabel *display_Label = new QLabel;
    QLabel *Material_Label = new QLabel;
    QLabel *color_Label = new QLabel;
    QLabel *descr_Label = new QLabel;
    QLabel *date_Label = new QLabel;
    QLabel *month_Label = new QLabel;
    QLabel *day_Label = new QLabel;
    QLabel *year_Label = new QLabel;
    QLabel *spacer2_Label = new QLabel;
    QLabel *spacer3_label = new QLabel;
    QLabel *signed_Label = new QLabel;
    QLabel *history_Label = new QLabel;
    QLabel *age_Label = new QLabel;
    QLabel *notes_Label = new QLabel;

    //Textedit
    QTextEdit * descr_TextEdit = new QTextEdit;
    QTextEdit *history_TextEdit = new QTextEdit;
    QTextEdit *notes_TextEdit = new QTextEdit;

    //Lineedit
    QLineEdit *age_LineEdit = new QLineEdit;

    //QCombobox
    QComboBox *What_Combo = new QComboBox(this);
    QComboBox *Material_Combo = new QComboBox;
    QComboBox *color_Combo = new QComboBox;
    QComboBox *month_Combo = new QComboBox;
    QComboBox *day_Combo = new QComboBox;
    QComboBox *year_Combo = new QComboBox;
    QComboBox *signedby_Combo = new QComboBox;

    //Layouts
    QHBoxLayout *name_Layout = new QHBoxLayout;
    QHBoxLayout *titleLayout = new QHBoxLayout;
    QHBoxLayout *button_Layout = new QHBoxLayout;
    QHBoxLayout *dateLayout = new QHBoxLayout;
    QVBoxLayout
*mainLayout = new QVBoxLayout(this);
    QGridLayout *grid = new QGridLayout;

    //QString
    QString fileQstring = "C:/Programming/Projects/FolkFriends/db.db";
    QSqlDatabase db;
    QString what;
    QString newWhat;
    QString fileName;
    QString material;
    QString newMaterial;
    QString color;
    QString newColor;
    QString description;
    QString si;
    QString sj;
    QString sk;
    QString month;
    QString day;
    QString year;
    QString Signedby;
    QString newSignedby;
    QString history;
    QString age;
    QString notes;
    QString submit_warning;
    QString warning_what;
    QString submitError;
    QString sID;
    QString sIDr;
    QStringList list;

    //Int
    int ItemID = 0;
    int LastID = 0;
    int revItemID = 0;
    int yeark;
    int r;

    //Other
    QLineEdit *LineEdit_Name = new QLineEdit;
    QByteArray inByteArray;
    QByteArray byteArray;

private slots:
    void readAndValidate();
    void addMonth(int);
    void resetAll();
    void submit();
    void getAge();
    QString getNotes();
    void addSignedby(int);
    void addDay(int);
    void addYear(int);
    void getMaterial(int);
    void processcombo(int);
    void addColor(int);
    QString findimage();
    void closeAdd();
};

#endif // ADDITEM_H

This is the way I open it from mainwindow.cpp:

void MainWindow::on_actionAdd_Item_triggered()
{
    Additem *mAddItem = new Additem;
    mAddItem->exec ();
}

I couldn't spot anything that would cause this behavior. My advice is to strip down the code to the bare basics, and then add things in stages to isolate what is causing the problem.

@gabor53 Please check that your class is a real QDialog and not inherited from QWidget. If it is really a QDialog and the member function close() is not overridden, then I don't know any reason for the dialog box (if it is a real QDialog) not to close. Eventually, do paste your code: the header and source files of this QDialog box that doesn't want to close. (Use gist.github or ix.io or whichever you like; that way we can look at this code closer.)

Thank you all. I started to rebuild things in the hope that solves the problem.
https://forum.qt.io/topic/68201/qdialog-doesn-t-close/8
A string is described as a collection of characters. String objects are immutable in Java, which means they can't be modified once they've been created. A popular question asked in Java interviews is how to find the substring of a string, so I'll show you how substring() works in Java. We'll be covering the following topics in this tutorial:

Substring in Java: What is a substring in Java?

Programmers often need to retrieve an individual character or a group of characters (a substring) from a string. For example, in a word processing program, a part of a string is copied or deleted. While the charAt() method returns only a single character from a string, the substring() method in the String class can be used to obtain a substring from a string.

Substring in Java: Different methods under substring

There are two overloaded versions of the substring() method. They are as follows:

• String substring(int beginIndex)
• String substring(int beginIndex, int endIndex)

String substring(int beginIndex): This method returns a new string that is a substring of the invoking string. The substring returned contains a copy of the characters beginning at the specified index beginIndex and extending to the end of the string.

Syntax: public String substring(int beginIndex)

Note: The index begins with '0', which corresponds to the String's first character.

Let's take a look at an example, using String s1 = "Welcome";

s1.substring(3); // returns the substring "come"

String substring(int beginIndex, int endIndex): This method is another version of the previous method. It returns a substring that begins at the specified beginIndex and extends to the character at index endIndex-1.

Syntax: public String substring(int beginIndex, int endIndex)

Let's take a look at an example.

s1.substring(3,6); // returns the substring "com" (the characters at indexes 3, 4 and 5)
public class SubstringMethods {
    public static void main(String[] args) {
        String s1 = "Welcome";
        System.out.println("s1.substring(3) = " + s1.substring(3));      // come
        System.out.println("s1.substring(3,6) = " + s1.substring(3, 6)); // com
    }
}

class StrSubString {
    public static void main(String args[]) {
        String k = "Hello Dinesh";
        String m = "";
        m = k.substring(6, 12); // characters at indexes 6..11
        System.out.println(m);  // prints "Dinesh"
    }
}
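To make the index arithmetic concrete, here is a small, self-contained sketch of the boundary cases of both overloads (the class name SubstringBoundaries and the sample values are my own, not from the article above):

```java
public class SubstringBoundaries {
    public static void main(String[] args) {
        String s1 = "Welcome";

        // substring(beginIndex): from index 3 to the end of the string
        System.out.println(s1.substring(3));              // come

        // substring(beginIndex, endIndex): stops at endIndex - 1
        System.out.println(s1.substring(3, 6));           // com

        // beginIndex == endIndex yields the empty string
        System.out.println(s1.substring(3, 3).isEmpty()); // true

        // substring(0) returns a string equal to the original
        System.out.println(s1.substring(0).equals(s1));   // true

        // An out-of-range index throws StringIndexOutOfBoundsException
        try {
            s1.substring(2, 99);
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("index out of range");
        }
    }
}
```

Note the last case: the indexes are not clamped to the string length; an endIndex larger than length() (or a beginIndex greater than endIndex) throws rather than returning a shorter result.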
https://ecomputernotes.com/java/jarray/string-substring
I got ALSA 0.9.3a finally working with kernel 2.2.24 with only

#include <asm/pgtable.h>

added in highmem.h. What other hacks are needed with kernel 2.4.20 to get it working on an Alpha? My guesses are:

1. The hack above.

2. Something with these lines in adriver.h:

#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 5, 3)
#define need_resched() (current->need_resched)
#endif

because of this error:

/lib/modules/2.4.20/build/include/linux/sched.h:946: arguments given to macro `need_resched'

Commenting them out or changing the KERNEL_VERSION to (2, 4, 3) works.

3. Something to solve this error:

memory_wrapper.c: In function `snd_compat_vmalloc_to_page':
memory_wrapper.c:41: `mem_map' undeclared (first use in this function)
memory_wrapper.c:41: (Each undeclared identifier is reported only once
memory_wrapper.c:41: for each function it appears in.)
memory_wrapper.c:28: warning: `page' might be used uninitialized in this function
make[1]: *** [memory_wrapper.o] Error 1
make: *** [compile] Error 1

It probably can't find the declaration of mem_map, but I can't find it either.

Are the first two hacks right, or is there a better way, and how do I solve the third?

I'm using a 164SX Alpha with Debian Woody.

Wouter Rademaker
------------------------------------------
This message was sent by Scoutnet Webmail.
https://lists.debian.org/debian-alpha/2003/05/msg00050.html
Using Redis Cache In Web API Sep 21, 2016. In this article, we will learn, how to implement Redis Cache in Web API Cache Rules Everything Around Me - Using ASP.NET MVC6 TagHelpers to Bust Cache Aug 25, 2016. In this article you will learn how to use ASP.NET MVC6 TagHelpers to Bust Cache. Caching In WPF Aug 16, 2016. In this article, you will learn about caching in WPF. How To Enable Local Cache In Azure App Service Jul 28, 2016. In this article, we will learn how to enable the Local Cache feature of Azure app Service. Implementing Caching In Web API Jul 25, 2016. In this article, you will learn how to implement caching in Web API. Azure App Service: Sessions Management in Load Balancing Environment Using Redis Cache Jun 12, 2016.
In this article we will learn how sessions are maintained in a load balancing environment using Redis Cache. Voice of a Developer: Application Cache API - Part Twenty Six May 17, 2016. In this article you will learn about Application Cache API. This is part twenty six of the article series. Azure App Service Local Cache May 14, 2016. In this article, we will learn how to improve the performance of the web app by using Azure App Service Local Cache Explore Persistence Caching In Web API Using CacheCow.Server.EntityTagStore.SqlServer Apr 12, 2016. In this article you will learn how to explore Persistence Caching in Web API using CacheCow.Server.EntityTagStore.SqlServer. Apply Caching In Web API Using CacheCow Apr 09, 2016. In this article you will learn how to apply Caching in Web API using CacheCow. Caching In Web API Mar 23, 2016. In this article we will are going to learn how to use caching in Web API. Patching Distributed Cache in SharePoint Server 2013 Feb 28, 2016. In this article you will learn Patching Distributed Cache in SharePoint Server 2013. MVC Output Cache And Cache Profiling Feb 22, 2016. In this post we will discuss one of the easiest ways to cache an output in MVC and output cache attributes. .NET Decompiling Tools: Part 1 Jan 31, 2016. This article explains about decompiling tools for the .NET application and their features, how to use the tools to decompile assembly and assembly browser. $templateCache In AngularJS Jan 26, 2016. In this article I'll explain $templateCache in AngularJS. Private Public Assembly in .NET Jan 24, 2016. In this article you will learn about Private Public Assembly in .NET. SharePoint Page Performance Optimization Jan 22, 2016. In this article we can explore few options to improve SharePoint Page Performance.. Using REDIS Cache with C# Sep 01, 2015. In this article you will learn how to use REDIS Cache with C#. How to Create Excel File Using C# Jun 06, 2015. 
This article shows how to create an Excel file using the interop assembly Microsoft.Office.Interop.Excel.. Assemblies in C#: Part 2 May 27, 2015. This article explains assemblies in C# with an example. Assemblies in C# : Part 1 May 25, 2015. This article explains Assemblies in C#, a basic unit of application deployment and versioning. Getting Assembly Metadata at Runtime Mar 28, 2015. This article shows how to get assembly metadata at runtime. Explaining ViewState, Session and Caching in ASP.Net Mar 26, 2015. This article explains the 3s of ASP.NET; they are ViewState, Session and Caching. Directives in ASP.Net Web Pages Mar 22, 2015. In this article we will learn about the Directives of ASP.Net Web Pages. Dynamically Create Instance of a Class at Runtime in C# Mar 11, 2015. This article explains how to create an instance of a class using the class name at runtime. Adding A Custom Object To Distributed Cache Using AppFabric Client DLLs Feb 12, 2015. In this article you will learn how to add a Custom Object to a Distributed Cache using the AppFabric Client DLLs in SharePoint. .NET Application Domain Internals Feb 10, 2015. In this article, you‘ll drill deeper into the details of how an assembly is hosted by the CLR and come to understand the relationship between Application Domains (appdomain) and processes. Overview of Fakes Assembly Jan 07, 2015. This article provides an overview of Fakes Assembly. Caching in ASP.Net Dec 23, 2014. This article describes a few caching methods. Things to Consider When Designing .NET Applications Nov 24, 2014. Proper design is a major factor that contributes to the scalability and performance of any .NET application. MSIL Programming: Part 1 Nov 15, 2014. In this article you will learn that .NET assemblies contain an ultimate CIL code that is compiled to platform-specific instructions using JIT. Bypassing Obfuscation: Ciphered Code Reverse Engineering Nov 11, 2014. 
In this article, we have performed reverse engineering over a protected binary by deep analysis of both obfuscated source code and MSIL assembly code. Anti-Reverse Engineering (Assembly Obfuscation) Nov 10, 2014. The modus operandi of this paper is to demystify the .NET assembly obfuscation as a way to deter Reverse Engineering. Executing Assembly Code in C# Nov 10, 2014. This article explains how inline assembly programming by linking or invoking CPU-dependent native assembly 32-bit code to C# managed code. Disassembling With Reflector: Part 1 Nov 09, 2014. This article shows dissembling of the source code of an assembly using Reflector. Disassembling With Reflector: Part 2 Nov 09, 2014. This article shows how to reveal the license code information by dissembling its corresponding classes after backtracking the code flow execution. Working With Caching in C# Nov 08, 2014. This article introduces implementation of caching with the C# language. Native Assembly Programming in .NET Nov 07, 2014. In this article you will learn how to create both an EXE and a DLL file using the MASM programming language employing the Visual Studio IDE. Advanced .NET Assembly Internals: Part 2 Nov 07, 2014. This tutorial explains the difference between private and shared assemblies and see how private and shared are created. .NET Assembly Internals: Part 1 Nov 06, 2014. This tutorial drills down into the details of how the CLR resolves the location of externally referenced assemblies. Native Image Generation in Managed Code Nov 01, 2014. This article explains how to write and execute high-performance .NET managed code by employing the Native Image Generator utility as well as some of its disadvantages and recommended scenario guidelines for its use. Difference Between Session Application and Cache Oct 25, 2014. This article is related to the Session, Application and Cache, So without wasting much time let’s start our Topic. .NET Binary Reverse Engineering: Part 1 Oct 23, 2014. 
The prime objective of this article is to explain the .NET mother language called Common Instruction Language (CIL) that has laid the foundation of .NET. Understanding and Configuring the Distributed Cache in SharePoint 2013 Oct 15, 2014. We can have an introduction in configuring the Distributed Cache in SharePoint 2013 How to Configure Object Caching in SharePoint 2013 Oct 14, 2014. In this article we can see how to configure the Object Caching in SharePoint 2013. SharePoint Server 2013: "Sign in as Different User" Menu Option is Missing Oct 14, 2014. This article describes how to activate the Sign in as Different User option in SharePoint 2013 and if you want to activate it permanently how you can do that. How to Configure Page Output Caching in SharePoint 2013 Oct 13, 2014. This article describes how to configure Page Output Caching in SharePoint 2013. Various Cache Clearing Methodologies Oct 10, 2014. This article contains various sorts of cache clearing approaches, their pros and cons and browser cache clearing options. AngularJS - Templates and $templateCache Jul 16, 2014. This article explains templates and $templateCache in AngularJS. Friend Assembly Using C# Jul 12, 2014. This article explains how to to provide access to one class but not all classes. Reflection Concept and Late Binding in C# Jun 16, 2014. By this article I am trying to explain refection concept and the real time uses in projects. Query Notification in SQL Server Jun 11, 2014. This article explains QueryNotification and SQLCacheDependency in SQL Server. Introduction To Caching in ASP.Net Jun 08, 2014. In this article I will explain caching and it's types. Register Your Assembly in GAC Using Gacutil Exe May 17, 2014. Here you will learn how to add an assembly to the Global Assembly Cache (GAC). BLOB Caching in SharePoint 2013 May 16, 2014. This article is an introduction to SharePoint 2013 BLOB Caching and provides Best Practices of using BLOB Caching and how to enable BLOB Cache. 
This blog is for the ones who already know SharePoint. What is An Assembly Apr 21, 2014. An Assembly is a basic building block of .Net Framework applications. It is basically compiled code that can be executed by the CLR. ASP.Net Interview Questions For Beginners and Professionals Apr 14, 2014. Here in this article, we will cover ASP.NET Interview Questions related to Security, Cache management and so on. Microsoft Fakes; Testing the Untestable Code Apr 07, 2014. I have always been the great fan of TDD approach of coding. But recently, I've ran into case with a situation when the code was not Testable. Learn MVC Basics Mar 18, 2014. The Model View Controller (MVC) pattern is an architectural design principal that separates components of web applications. How to Clear Your SharePoint Designer 2010/2013 Cache Feb 25, 2014. This is a quick tutorial covering how to clear your SPD 2013 Cache. That is handy, especially when working with SPD 2010 and SPD 2013. Additional Information About ASP.Net Helios Feb 25, 2014. This article describes how to work with the AspNet.Loader.IIS assembly in the Helios Project without using the OWIN Host.. ASP.Net Page Directives Jul 11, 2013. As a ASP.NET developer everyone have to have knowledge about Page Directive. If you are a fresher and you want to know about the page directive then you can read this article Faster Temp Table Caching in SQL Server 2014 Jul 03, 2013. This article explains how to optimize temp table caching in SQL Server 2014. Data Caching in WPF Using C# Jun 17, 2013. This articles describes the caching in WPF using C# language . CLR Internals - Process and Application Domain May 28, 2013. In this article, you‘ll drill deeper into the details of how an assembly is hosted by the CLR and come to understand the relationship between application domain (appdomain) and processes. Using Reflection with C# .NET May 18, 2013. This article explains discovery of types at runtime using .NET Reflection. 
This article will also explain late binding that is very related to reflection. How to Set Caching Options For a Shared Drive/Folder by Using Windows Interface May 14, 2013. In this article you will learn how to set Caching Options for a shared drive/folder using a Windows Interface.. Using LINQ in .NET Mar 21, 2013. In this article we will discuss the LINQ approach introduced in .NET 3.5 for querying. Caching in ASP.NET Mar 20, 2013. In this article we will see how caching in ASP.Net is used to improve application performance. Output Caching in MVC Mar 15, 2013. In this article you will learn everything about "Output Caching in MVC". On this journey I will take you to some real examples to make your view crystal clear. Reset Windows Store Cache in Windows 8 Feb 26, 2013. In this article we explain how to reset the cache of Windows Store apps in Windows 8. Implementing Generics Classes and Functions In C# Programs Feb 13, 2013. Today we'll have a look at how we can implements generics in our program and how we can make our functions, code more robust and less prone to changes in program code. How to Register an Assembly or DLL in Web Applications Jan 18, 2013. In this article you will learn how to register an assembly or DLL in web applications. Clear Cache Using Start Screen Apps in Windows 8 Jan 03, 2013. In this article we will learn how to remove cache data from the live tiles and Desktop apps in Windows 8. How to Generate Pie Chart Report in JIRA Tool in Testing Dec 26, 2012. In this article we discuss how to generate a Pie Chart Report in JIRA Tool Distributed Cache Service in SharePoint 2013 Dec 02, 2012. In SharePoint 2013 there is a new service called Distributed Cache. The Distributed Cache service is built on Windows Server AppFabric, which implements the AppFabric Caching service. Output Caching : Post Cache Substitution Using Substitution Control Nov 17, 2012. 
In this article, I will be explaining the Post Cache Substitution technique using the Substitution control for Output Caching in ASP.NET. Builder Pattern in VB.NET Nov 10, 2012. Builder is an object creational design pattern that codifies the construction process outside of the actual steps that carries out the construction - thus allowing the construction process itself to be reused. Reading Assembly attributes in VB.NET Nov 10, 2012. This article allows you to read the assembly attributes information using .NET. The information store in AssemblyInfo files like Title, Description, copyright, Trade mark can be read using reflection and assembly namespace. XML Web Service Caching Strategies in VB.NET Nov 10, 2012. We'll take a look in this article ways for application-level caching with ASP.NET, and will take a look at HTTP caching and its application for XML Web services.
http://www.c-sharpcorner.com/tags/Assembly-Cache
Life And Works Of Mary Shelley English Literature Essay

It was not until 1920 that women received the right to vote. After many years of speaking out for equality, women finally achieved it. In the beginning of the 1800s, women realized that they too were people and wanted to have the same opportunities as men (Our 1). Most people do not realize how recently this actually happened. Throughout history, women had to fight to be heard and to obtain the same rights as men. Mary Shelley did not have an easy time during her life, and when her husband died, she was left to get things done on her own. She had strong beliefs about women and how they should be treated. She lived during the time when women began to speak out to receive the same rights as men. She made it known in her novels that women were not the main part of society and were considered inferior to men. Mary Shelley's views on feminism were based on her life alone and portrayed throughout her works.

Mary Shelley's mother died shortly after giving birth to Shelley, and her father did not pay much attention to her (Merriman 3). Her mother's feminist writings greatly influenced her own writing. She would read her mother's works and became deeply absorbed in them (Gilbert 116). By reading what her mother wrote, she felt a connection with her; it was the closest thing she had to a mother (Gilbert 116). Then, she would also read the reviews of her mother's works, in which her mother was sometimes called a "monster," and "reading about her mother's works must have been painful" (Gilbert 115). Her mother was a feminist writer, which at the time was not looked upon highly. People did not realize at the time what her mother was doing in her writings and the connections to feminism she had. When they did realize, they did not like it, especially the men. The men wanted to be the best and did not want women to be better than them. They did not believe women should have the same rights as men.
Mary Shelley figured this out when Percy Shelley died. It was July 8, 1822 and Percy Shelley died leaving Mary Shelley and her children (Letters 109). Percy Shelley had become ill and could not do much while they were traveling in Europe (Letters 64). He never regained his health, and his illness escalated (Letters 84). Finally, on the July day he died, and it was devastating for Mary Shelley. She was left alone with no friends and her one companion was gone (Letters 110). Mary Shelley was left with her young child Percy, but he was going to be taken away from her because she was a woman and did not have rights to her son (WIC 12). Sir Timothy Shelley was going to give her son to someone that would take care of him. She did not want to be separated from him at all. She was his mother and would “not live ten days separated from him” (Letters 122). She fought for the right to keep her son and in the end was able to keep him and get the allowance she needed. Since she was a woman, she was not just given what she wanted. Because Timothy Shelley was a man, he had the right to do what he wanted and make the decisions. Mary Shelley had to fight for what was right and was not going to let someone take her son from her. Because she spoke up, she was able to receive what she wanted. Before Mary Shelley was able to take care of Percy Shelley and even after he died, she was able to be independent and get things done on her own. Having this sense of freedom helped her realize that women did not need men and could function on their own in society without anyone else. In Frankenstein, Mary Shelley expressed her views of loneliness and feminism through Frankenstein and the monster. Frankenstein, the doctor, was lonely and wanted a friend, someone who would love and listen to him. He built the monster but ran away because the monster was ugly and terrifying (Shelley 43). Because the monster was created from man, he was like Eve. Eve, in the Bible, was created from man by God. 
The monster was created by Frankenstein from the body parts of men. God did not create women to be discriminated against, just as Frankenstein did not make the monster to be rejected from society. They were created to be companions, but just because they looked different, they were not accepted easily in society. The monster was left alone and was not taught how to function in society (Gilbert 120). Nobody wanted to get near him because of his appearance, and he had nowhere to go (Shelley). Every time he tried to talk to someone, the person would become afraid and the monster would kill them (Shelley ch. 16). He did not do this on purpose, but he did not know any different. He tried so hard to make friends, but everyone would run away from him. He had to figure out everything by himself and became a female figure in society. Women were left alone in those days while their husbands were at work (WIC 6). They had to figure everything out on their own, like the monster. They were left home to take care of the children and do the chores to keep the home running and in good order (WIC 6). Mary Shelley was left alone when she was younger. Her mom died when she was young, and her dad rejected her (Merriman 3). She was left alone, having to figure things out on her own. She became independent and able to function by herself as a strong woman. Because of the feminism portrayed in Frankenstein, it was considered a "woman's book" and revealed a great deal about women during the time and how they were treated (Gilbert 116). The monster tried to fit in but was looked down upon. This was like women. They tried and tried to become known in society, but were looked at as inferior to men. Women were not heard or treated how they should have been. The monster was not treated like he should have been because he did not look like everyone else. He was discriminated against just because he was different-looking.
Shelley emphasized that the monster was no different from anyone else, but the way he looked caused him to become a killer because he was not treated right. On the outside, he was ugly and frightening, but on the inside, he just wanted to be loved and make a friend (Shelley). He would go up to people, but they would run away before even getting to talk to him (Shelley). This would cause him to become more lonely and unloved. The people did not try to get to know or understand the monster. They took one look at him and ran. No one gave him a chance to show what kind of person he really was. He was not able to fit in properly, but when his appearance was not in view, he was treated like a normal person. For instance, the monster saw a house and tried so hard to go in and talk to the people (Shelley 79). He liked the family very much and even helped them by bringing firewood to their house. The older man was blind, so the monster knew that if he could just talk to him, maybe the man would be kind to him. It took a while, but finally, when the other family members were away, the monster went inside. He started talking to the blind man, and the man treated him like a normal person. They talked and talked, and everything was fine until the other people came home. They were terrified and made the monster leave. The old man could not tell what the monster was and thought he was just a normal being. The monster acted normal and did not try to hurt anyone. When the other people came home, though, they took one look at the monster and figured he was bad and going to kill people. They did not get to know the monster or talk to him (Shelley 97-99). The monster talked to the old man like any other person would. The man was blind and could not tell that he was talking to a monster. He was not able to judge him by his appearance but had to judge him by his personality and character.
The old man did not run away like everyone else but actually talked to and got to know the monster as much as possible (Shelley). He had no idea the monster was scary and ugly looking. He thought that the monster was a normal person because he acted normal. He was even able to give the monster advice. By not actually seeing what the monster looked like, he was able to get to know the monster and understand what was going on. The monster was able to tell the man his problems and what was going on. Sometimes a person just needs someone to vent to, whether or not the other person can help. Like the monster, women were also judged by their appearance. Men thought that women could not do as well in society as men could. They thought that women were not as intelligent or talented and could not hold a job (WIC 6). Women were just housewives who sat home and cooked. Just because they looked different on the outside, they were treated remarkably differently. Women had to fight to be heard, like the monster had to fight to fit in (WIC 49). When Mary Shelley wanted the right to keep her son, she did not just sit back and let him be taken away. She was able to speak up and get her son back. Women wanted to be able to work the same jobs as men and have the same respect as them. This did not work out well for the monster, though; whenever he tried to talk to someone, he would end up killing them. No one had respect for him because no one understood what happened and where he came from. Shelley makes the point through the monster that women needed more respect and responsibility than they had. This would cause women to eventually fight for equal treatment (WIC 1). Mary Shelley thought she was a failure at doing her job as a woman in society. Mary Shelley felt guilty for "her mother's death as well as for failing as a parent" (Last 762). Her mother died after childbirth, so Mary was left with her father (Last 731).
Then, her first child died after being born prematurely (Last 731). She considered herself a failure. She herself, though, did not do anything wrong, but the events in her life made her miserable. This is compared to Frankenstein, who also did not do well as a parent (Last 761). Right after making the monster, he ran away. He was so scared of what he had made that he left (Shelley 43). He did not teach the monster anything about the world but just left him with no guidance. Frankenstein is not the only work about Mary Shelley's loneliness and feminist views; so is Transformation. Guido spent his father's wealth and was left alone. He had no friends or anyone who loved him (Transformation 11). He went to marry his girl, Juliet, but ran into problems. He came upon a wretch who wanted to talk to him. The wretch was not pleasant looking, and Guido was worried. The wretch made Guido a deal that if he would switch bodies for three days, the wretch would give him his chest. Guido did not know that the wretch would not be back to switch bodies. The wretch went off to Juliet with Guido's body, not to return (Transformation 17). With the wretch's body, Guido was not invited back into society. He was considered "the monstrous dwarf", and he was scared that he would be stoned or hurt by the people (Transformation 24). He was judged by his awful outer appearance but was the same person on the inside. The people did not care, and he was considered an ugly monster. They did not try to get to know or understand him. This was like the monster in Frankenstein, who was judged by his appearance rather than his personality on the inside. Guido had been a normal person before and still was a normal person, but his appearance was different. People did not care who was on the inside but were scared about who was on the outside. Transformation reflected Shelley's feminist beliefs. Again, the fact that women were not always loved by society was seen.
Like Guido, who could not get back into society, women also had a hard time being accepted into society. They were not treated the same and did not have the same rights as men. Women had to fight to fit in and have a say in things, just like Guido had to fight to be heard and believed. He looked different, which had an impact on his entering back into society and being treated like a normal person. Just because women looked different than men did not mean they should be treated differently. The story also portrayed her loneliness: the reason the man was turned into an ugly monster was that he was lonely. He did not have any friends or companions and was an outcast in society. After spending his father's money he had nothing left. Mary was also lonely during her life. She had Percy for a short time, but before and after that, she did not have anyone to talk to and be her friend. She would write letters to her friends, but letters were not humans and took time to send and receive. The Mortal Immortal, part of Transformation, was about a man who was in love with his friend Bertha (Transformation 32). The only problem was that he was poor and Bertha was rich. He was not wealthy enough to get married, and Bertha was becoming impatient (Transformation 33). He wanted her to love him so much that he drank his friend Cornelius' potion without him knowing. This potion would "cure [him] of love-of torture!" (Transformation 36). After drinking the potion, he was a changed man and married Bertha (Transformation 39). Unknowingly, by drinking the potion, he had become immortal. This became a problem as he and Bertha became older, because he did not age. As Bertha would age, he would not. After a while, the people of the town started noticing the age difference. They had to leave the town, and eventually Bertha died. He was left all alone. All the man wanted was to be loved and accepted. Since he was poor, he was not able to get married and was looked down upon.
The people did not care who he was but knew that he was poor. The people higher up in society did not want to move down in rank but up. He was not considered important. Bertha was portrayed as higher up in society. Women needed to be independent and not have everyone decide things for them. Bertha had control, and her being higher in ranking showed that Mary Shelley believed that women could be important. Bertha did not want to marry someone with no money who could not support a family. She wanted a well-brought-up, well-respected man who would take care of her. Even though she loved the man, it was not enough to get married. She needed to know things would be secure. She was in control of what she wanted. Mary Shelley expressed that women could be ranked higher than men and should be treated like them. The Last Man, written by Mary Shelley, also incorporates her feminist beliefs. In The Last Man, Lionel and his sister were orphaned after their father died. They had nowhere to go and no one to take care of them. The King eventually died, and the country became a republic. The countess, though, tried to raise her son as the next king, but he did not want to be king. Adrian left and ran into Lionel, who at first did not like Adrian. Things changed, and they realized they had things in common and could be friends. They went back to the town and realized that everything was a mess. Later on, there was a plague that was killing the people. England was thought of as the only safe place, but the people were wrong. People were dying everywhere. Adrian stepped up as a leader and was able to keep things under control in England. The few survivors left England but died in a storm. Lionel was the last one left, the last man (Last). The last man was a man, and so was the king. Men were the ones who ruled the country. As in her other novels, women were not the main characters. The king illustrated that only men could have higher power.
Women could not have a say in things that happened in a country and could certainly not be rulers. That would be preposterous. Having the main characters be men, and having a king and not a queen, demonstrates that at the time women were not thought of as good enough for jobs that only men should have. The women were seen as not as intelligent and not as respected as a man in power would be. As in Mary Shelley's life, and as with the monster in Frankenstein, the kids had to grow up and learn by themselves. In The Last Man, this caused them to become uncivilized. They did not know what was normal and what was not, and did not have anyone telling them how to live their life. Because their parents died, they had to figure things out by themselves. This was like Mary Shelley, who, when her mom died, was left with only her father, who did not really care about her. She became independent at a young age. The monster in Frankenstein and Guido in Transformation were also left alone. In the end, though, the loneliness actually caused them to become stronger and more independent people. It went back to her failing as a parent. The parents in her novels also failed, and it revealed that she was not content with what had happened. She was able to let it out in her novels and show people what she went through and how hard it was. Being left alone at a young age had a vast impact on Mary Shelley. She wrote about it in all her novels and expressed her feelings through her writings. The fact that one or both of the parents were gone in her books revealed that she was hurt and wanted more security. Not having a mother figure is hard on a young child. The child does not really have anybody close to talk to who relates to them. Dads can try, but it is not the same. She needed her mom to support her and be an inspiration. She did not have a mom as a role model whose footsteps she could follow in. Also, in Mary Shelley's novels, the women were not the main and significant characters.
The women were part of the background and not important in the story. In Frankenstein, the two women, Elizabeth Lavenza and Justine Moritz, were both once orphans who were adopted by the Frankenstein family. Elizabeth just waited for Victor to notice her, and Justine was accused of murdering William, who was actually murdered by the monster (Shelley 62). Also, in Transformation and The Last Man, the main characters were men. In Transformation, Guido changed to get his girl, and in The Last Man, Lionel did not want to be king. Mary Shelley makes the point that women did not do much in society at the time. They were just meant to do the jobs that men did not want to do and keep the families together. They did not have big roles in helping society and were not considered that important. The main characters were not women because women were not the scientists or rulers of the countries. Furthermore, she made the point in her novels that people were judged by their outward appearance. In each novel, the characters were judged by their appearance or wealth. The monster was ugly and scary, so he was automatically labeled as dangerous. When Guido turned into the ugly wretch, he was not let back into society. He was still the same person, though. The man in The Mortal Immortal was not accepted because of his money. These characters were all judged because they were not exactly how society wanted them. Neither was Mary Shelley. Being a woman writer, she was not the most loved. Most people preferred male writers, and just because she was a woman, she was treated differently. All of the characters were normal and friendly but simply appeared different and did not have what the people wanted them to have. Mary Shelley wanted to fit in, but since she was a woman, she had to work much harder to be heard and become a part of society. After Percy Shelley died, she was left alone and did not have a man to help her become known. She needed to do things on her own.
Mary Shelley's views on feminism and her loneliness influenced her writing. She was able to incorporate these views and become a well-known author. She believed that women could be just as great as men and should not be held back just because they were women. In her view, there was no difference, and women should be treated the same as men. She was a strong woman whose beliefs did not set her back, and she did not let all of her hard times get to her.
And I was suggesting something along these lines would be generally useful (if not pointlessly easy):

    class Set(set):
        __slots__ = ()

        def added(self, value):
            super(Set, self).add(value)
            return value

    def unique(items):
        seen = Set()
        return (seen.added(item) for item in items if item not in seen)

On Mar 28, 2013, at 8:15 AM, MRAB <python at mrabarnett.plus.com> wrote:

> On 28/03/2013 05:28, Bruce Leban wrote:
>>
>> On Wed, Mar 27, 2013 at 10:11 PM, Shane Green <shane at umbrellacode.com> wrote:
>>
>>     [seen.added(value) for value in sequence if value not in seen]
>>
>> Here's an easy way to do it:
>>
>>     >>> seen = set()
>>     >>> seq = [3,2,1,2,3,4,5,4]
>>     >>> [seen.add(v) or v for v in seq if v not in seen]
>>     [3, 2, 1, 4, 5]
>>     >>> seen
>>     {1, 2, 3, 4, 5}
>>
> I think I would prefer a "unique" function that yields unique items:
>
>     def unique(items):
>         seen = set()
>         for item in items:
>             if item not in seen:
>                 seen.add(item)
>                 yield item
>
>     >>> seq = [3,2,1,2,3,4,5,4]
>     >>> list(unique(seq))
>     [3, 2, 1, 4, 5]

Python-ideas mailing list — Python-ideas at python.org
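[Editorial note, not part of the original thread.] On Python 3.7 and later, where dicts are guaranteed to preserve insertion order, the same order-preserving deduplication the thread discusses can be written without an explicit `seen` set:

```python
def unique(items):
    # dict.fromkeys keeps the first occurrence of each key in insertion
    # order (guaranteed since Python 3.7), so this drops duplicates while
    # preserving first-seen order. Items must be hashable, just as with
    # the set-based versions above.
    return list(dict.fromkeys(items))
```

For example, `unique([3, 2, 1, 2, 3, 4, 5, 4])` returns `[3, 2, 1, 4, 5]`, matching the generator-based versions.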
Libraries This application note includes information on using libraries effectively in Particle projects. About libraries Libraries are packages of code to add functionality to your application firmware. Many peripheral devices like sensors, displays, etc. include a library to access the peripheral. Most libraries are community maintained. Finding libraries The web-based library search tool is often the best option: Search and browse libraries However, you can also use: The Web IDE libraries icon. The Particle CLI library search. If you are using a Sparkfun Qwiic sensor (I2C), there is a list of sensors and their libraries in the Qwiic reference. If you have the source to a library, see Using non-libraries and Using GitHub libraries, below, for using it with Particle Workbench. Libraries and Workbench Workbench project structure Workbench projects start out with this structure: project.properties src/ discombobulator.cpp Your application source resides in src as .ino, .cpp, and .h files. In this example, we have the made-up application file discombobulator.cpp. Particle: Install Library To install a library, you typically use the Command Palette (Ctrl-Shift-P on Windows and Linux and Command-Shift-P on the Mac) and select Particle: Install Library and enter the library name. In this example, the CellularHelper library has been added. This will update two things in your project: lib/ CellularHelper/ examples/ library.properties LICENSE README.md src/ CellularHelper.cpp CellularHelper.h project.properties src/ discombobulator.cpp The project.properties file is updated with the library that you installed. For example: name=discombobulator dependencies.CellularHelper=0.2.5 This states that the project uses the library CellularHelper version 0.2.5. The other thing it does is make a local copy of the library in the lib directory. This is handy because you can view the README file, as well as browse the library source easily. Cloud vs. 
local compiles

Libraries work slightly differently for local vs. cloud compiles, which can cause some confusion. For cloud compiles, libraries in project.properties are used even if there is a local copy downloaded in your project. This also applies to using the Particle CLI particle compile command. For local compiles, you must have a local copy of the library in the lib directory. This is done automatically by Particle: Install Library in the command palette, or by using particle library copy from the CLI.

Customizing libraries

If you are modifying a community library that you previously installed using Particle: Install Library in the command palette, or by using particle library copy from the CLI, you should remove that library's dependencies entry from project.properties. If you do not do this, cloud compiles will pick up the official version instead of your modified version. Even if you normally use local compiles, it's good practice to do this to prevent future confusion. Most libraries have open source licenses that allow you to use a modified version in your project; however, see Libraries and software licenses, below, for more information.

Using private libraries

Most libraries are public, which is to say any user can use and download the library source. It is possible to make a private library by doing the particle library upload step and not doing particle library publish. However, this library will be private to the account that uploaded it. There is no feature for sharing a library with a team, product, or organization. However, in Workbench there are other techniques that may be helpful, described in the next sections.
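The "remove the dependencies entry" step described under Customizing libraries can be sketched as a before/after of project.properties. This uses the made-up project name from this note; your own file will have different names and versions:

    # BEFORE: cloud compiles fetch the official CellularHelper release
    name=discombobulator
    dependencies.CellularHelper=0.2.5

    # AFTER: only the (modified) copy in lib/ is used
    name=discombobulator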
Using non-libraries

When using Workbench, the lib directory can contain things that are not actually Particle libraries, as long as they have the correct directory structure:

    lib/
        SharedUtilities/
            src/
                SharedUtilities.cpp
                SharedUtilities.h
    project.properties
    src/
        discombobulator.cpp

In this example, there is a "non-library" called SharedUtilities in the lib directory. That further contains a src directory, which contains the actual library source. The fake library can contain multiple files, but it should only contain .cpp and .h files. It should not have .ino files!

Using GitHub libraries

You can commit your entire project to GitHub (private or public), including the lib directory. You can reduce code duplication and make updates easier by using Git submodules.

    cd lib
    git submodule add

In this example, instead of cloning the repository, we use git submodule add. This makes a copy of it locally; however, when you commit your project to GitHub, it only contains a reference to the external project, not the contents of it. If you've used Tracker Edge firmware, you've probably noticed that when you clone Tracker Edge you need to run the following command:

    git submodule update --init --recursive

This is what retrieves all of the submodules in its lib directory. This technique is great for working on shared code across teams and projects. You have the full power of GitHub teams and organizations to safely and securely manage access to your code. Submodules can also be used with a fork of a repository. This allows you to easily modify an existing GitHub-based library in a fork and merge an updated original version with your changes. See also Working with GitHub for more tips for using it with Workbench.

Upgrading libraries

Particle libraries are not automatically updated when new versions come out.
The easiest way to update is to delete the line for the library you want to update from project.properties and then use Particle: Install Library to update the project.properties file and copy a local version. If you are using GitHub to manage libraries using submodules, to get the latest main (or master), you use:

    git submodule update --remote

Creating public libraries

To create a new library, see contributing libraries. If you are using Workbench, there are a few special techniques that are required. See Developing libraries in Workbench for more information.

Library naming

You should only use letters A-Z and a-z, numbers, underscore, and dash. You cannot use spaces or accented characters in library names! Case is preserved, but when looking up library names, the search is case-insensitive. Library names are globally unique, even for private libraries. This sometimes causes confusion if you try to upload a new library and it fails with a permission error. Even if a library search does not show anyone using that name, if someone else has uploaded a private library with that name, you will not be able to use it.

Porting Arduino libraries

Set up file structure

Most Arduino libraries already have the correct structure, but if not you will need to move files around to make:

    examples/
    library.properties
    src/

Additionally:

- The src directory should contain .cpp and .h files.
- The examples directory should contain zero or more example projects, with each example in a separate folder in examples.
- Example projects can only be one level deep. If there is a directory in examples with more examples, you'll need to flatten out the directory structure.
- Example source can have a single .ino file in each example project directory, or they can use .cpp files.

Edit library.properties

Most Arduino libraries should already have a library.properties file, but if not, you will need to create one.
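As an illustration only (these field values are made up, not taken from this note), a minimal library.properties for a ported library might look like the following. The detail that matters is the one called out below: the name field must match the library's directory name.

    # The library's directory must also be called MyArduinoSensor
    name=MyArduinoSensor
    version=0.0.1
    author=Jane Example
    license=MIT
    sentence=Short one-line description of the library.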
Note that the name (library name) in library.properties must match the directory name of the library. This is not a requirement for Arduino libraries, and some libraries may have a descriptive name (with spaces) in this field, and you must edit this to match.

Fix compile errors

Some libraries are easier to port than others. Many will require no modifications at all. Some common problems:

- Unnecessary includes. Things like #include "Wire.h" are not used on the Particle platform and can be removed.
- Naming conflicts. Occasionally libraries will use variables that conflict with names that are not used in Arduino, but may be used on the Particle platform.
- If the library has large amounts of test code or code for other platforms, you may need to remove it. Otherwise it may be included in the uploaded library, and very large libraries will not load in the Web IDE.

Making modifications for inclusion in the original source

Sometimes you'll make changes to the original library and publish it. Other times, you may want your changes incorporated in the original library, typically by using a GitHub pull request. The most common way is to isolate any Particle-specific code in a #ifdef or #ifndef.

    #ifdef PARTICLE
    // Particle-specific code
    #endif

Libraries and software licenses

There is no standard for software licenses for library code, and it is up to the library creator to assign one. Most libraries have a LICENSE file, or include the license information in the README or in the source code files.

With proprietary projects

If your application is proprietary, you must make sure that any libraries you use have a permissive license. This allows proprietization, even if the library is open source. Common permissive licenses include:

- BSD (2 or 3-clause)
- MIT
- Apache
- Public Domain
- CC0 (Creative Commons, Level 0)

In particular, GPL and LGPL libraries cannot be used in proprietary user applications! This is even true for LGPL because of the dynamic linking rule.
Since Particle libraries are statically linked to the user application, the allowance for LGPL libraries to be used in dynamically linked proprietary applications does not apply.

With open source projects

You can generally use any of the popular licenses in open source projects. Note, however, that if you use a library that has a copyleft license, such as GPL or CC-BY-SA, then your application must generally have a similar copyleft license. However, if you use a library with a permissive license such as MIT, you are free to release your application with permissive licenses (such as MIT, Apache, or BSD), or a copyleft license (such as GPL). Though rare, a library with a JRL (Java Research License), AFPL (Aladdin Free Public License), or CC-BY-NC license cannot be used in a commercial product, even if open source. Additionally, there may be a requirement for attribution for CC-BY and some other licenses.

Less common scenarios

Libraries with a static library

It is not currently possible to create a Particle library that includes a static library of proprietary code. For example, the Bosch Sensortec BSEC library for the BME680 is not open source, but rather a closed-source library .a file that can be linked with an application. There is currently no way to include this in a cloud compile.
https://docs.particle.io/firmware/best-practices/libraries/
C Interview Questions and Answers

Ques 2. How do you find out if a linked-list has an end? (i.e. the list is not a cycle)
Ques 3. What is the difference between realloc() and free()?
Ques 4. What is function overloading and operator overloading?
Ques 5. What is the difference between declaration and definition?
Ques 6. What are the advantages of inheritance?
Ques 7. How do you write a function that can reverse a linked-list?
Ques 8. What do you mean by inline function?
Ques 9. Write a program that asks for user input from 5 to 9, then calculate the average
Ques 10. Write a short code using C++ to print out all odd numbers from 1 to 100 using a for loop
Ques 11. What is public, protected, private?
Ques 12. Tell how to check whether a linked list is circular.
Ques 13. OK, why does this work?
Ques 14. What is virtual constructors/destructors?
Ques 15. Virtual constructor: Constructors cannot be virtual. Declaring a constructor as a virtual function is a syntax error. Does C++ support multilevel and multiple inheritance?
Ques 16. What are the advantages of inheritance?
Ques 17. What is the difference between declaration and definition?
Ques 18. What is the difference between an ARRAY and a LIST?
Ques 19. Does C++ support multilevel and multiple inheritance?
Ques 20. What is a template?
Ques 21. Define a constructor - What it is and how it might be called
Ques 22. What is the difference between class and structure?
Ques 24. What is encapsulation?
Ques 25. Explain the term POLYMORPHISM and give an example using e.g. a SHAPE object: If I have a base class SHAPE, how would I define DRAW methods for two objects CIRCLE and SQUARE
Ques 26. What is an object?
Ques 27. How can you tell what shell you are running on a UNIX system?
Ques 28. What do you mean by inheritance?
Ques 29. Describe PRIVATE, PROTECTED and PUBLIC - the differences - and give examples.
Ques 30. What is namespace?
Ques 31.
What is a COPY CONSTRUCTOR and when is it called?
Ques 32. What is Boyce Codd Normal form?
Ques 33. What is virtual class and friend class?
Ques 34. What is the word you will use when defining a function in a base class to allow this function to be a polymorphic function?
Ques 35. What do you mean by binding of data and functions?
Ques 36. What are 2 ways of exporting a function from a DLL?
Ques 37. What is the difference between an object and a class?
Ques 38. Suppose that data is an array of 1000 integers. Write a single function call that will sort the 100 elements data[222] through data[321].
Ques 39. What is a class?
Ques 40. What is a friend function?
Ques 41. Which recursive sorting technique always makes recursive calls to sort subarrays that are about half the size of the original array?
Ques 42. What is abstraction?
Ques 43. What are virtual functions?
Ques 44. What is the difference between an external iterator and an internal iterator? Describe an advantage of an external iterator.
Ques 45. What is a scope resolution operator?
Ques 46. What do you mean by pure virtual functions?
Ques 47. What is polymorphism? Explain with an example?
Ques 48. What's the output of the following program? Why?
Ques 49. Why are arrays usually processed with a for loop?
Ques 50. What is an HTML tag?
Ques 51. Explain which of the following declarations will compile and what will be constant - a pointer or the value pointed at: const char *; char const *; char * const
Ques 52. You're given a simple code for the class Bank Customer. Write the following functions: copy constructor; = operator overload; == operator overload; + operator overload (customers' balances should be added up, as an example of a joint account between husband and wife)
Ques 53. What problems might the following macro bring to the application?
Ques 54. Anything wrong with this code?
Ques 55. Anything wrong with this code? T *p = 0; delete p;
Ques 56. How do you decide which integer type to use?
Ques 57.
What does extern mean in a function declaration?
Ques 58. What can I safely assume about the initial values of variables which are not explicitly initialized?
Ques 59. What is the difference between char a[] = "string"; and char *p = "string";?
Ques 60. What's the auto keyword good for?
Ques 61. What is the difference between char a[] = "string"; and char *p = "string";?
Ques 62. What does extern mean in a function declaration?
Ques 63. How do I initialize a pointer to a function?
Ques 64. How do you link a C++ program to C functions?
Ques 65. Explain the scope resolution operator.
Ques 66. What are the differences between a C++ struct and C++ class?
Ques 67. How many ways are there to initialize an int with a constant?
Ques 68. How does throwing and catching exceptions differ from using setjmp and longjmp?
Ques 69. What is a default constructor?
Ques 70. What is a conversion constructor?
Ques 71. What is the difference between a copy constructor and an overloaded assignment operator?
Ques 72. When should you use multiple inheritance?
Ques 73. Explain the ISA and HASA class relationships. How would you implement each in a class design?
Ques 74. When is a template a better solution than a base class?
Ques 75. What is a mutable member?
Ques 76. What is an explicit constructor?
Ques 77. What is the Standard Template Library (STL)?
Ques 78. Describe run-time type identification.
Ques 79. What problem does the namespace feature solve?
Ques 80. Are there any new intrinsic (built-in) data types?
Ques 81. "void*"?
Ques 82. What is the difference between Mutex and Binary semaphore?
Ques 83. In C++, what is the difference between method overloading and method overriding?
Ques 84. What methods can be overridden in Java?
Ques 85. What are the defining traits of an object-oriented language?
Ques 86. Write a program that asks for user input from 5 to 9, then calculate the average
Ques 87.
Assignment Operator - What is the difference between an "assignment operator" and a "copy constructor"?
Ques 88. RTTI - What is RTTI?
Ques 89. STL Containers - What are the types of STL containers?
Ques 90. What is the need for a Virtual Destructor?
Ques 91. What is "mutable"?
Ques 92. Differences of C and C++ - Could you write a small program that will compile in C but not in C++?
Ques 93. Bitwise Operations - Given inputs X, Y, Z and operations | and & (meaning bitwise OR and AND, respectively), what is output equal to in?
Ques 94. What is a modifier?
Ques 95. What is an accessor?
Ques 96. Differentiate between a template class and class template.
Ques 97. When does a name clash occur?
Ques 98. Define namespace.
Ques 99. What is the use of a "using" declaration?
Ques 100. What is an Iterator class?
Ques 101. What is an incomplete type?
Ques 102. What is a dangling pointer?
Ques 103. Differentiate between the message and method.
Ques 104. What is an adaptor class or Wrapper class?
Ques 105. What is a Null object?
Ques 106. What is a class invariant?
Ques 107. What do you mean by Stack unwinding?
Ques 108. Define precondition and post-condition to a member function.
Ques 109. What are the conditions that have to be met for a condition to be an invariant of the class?
Ques 110. What are proxy objects?
Ques 111. Name some pure object oriented languages.
Ques 112. What is an orthogonal base class?
Ques 113. What is a node class?
Ques 114. What is a container class? What are the types of container classes?
Ques 115. How do you write a function that can reverse a linked-list?
Ques 116. What is polymorphism?
Ques 117. How can you tell what shell you are running on a UNIX system?
Ques 118. What is a pure virtual function?
Ques 119. Write a Struct Time where integers m, h, s are its members
Ques 120. How do you traverse a Btree in backward in-order?
Ques 121. What are the two main roles of an Operating System?
Ques 122.
In the derived class, which data members of the base class are visible?
Ques 123. Could you tell something about the Unix System Kernel?
Ques 124. What are each of the standard files and what are they normally associated with?
Ques 125. Determine from the code below exactly how many times the operation sum++ is performed.
Ques 126. Give 4 examples which belong to the application layer in the TCP/IP architecture.
Ques 127. What's the meaning of ARP in TCP/IP?
Ques 128. What is a Makefile?
Ques 129. What is deadlock?
Ques 130. What is a semaphore?
Ques 131. Is C an object-oriented language?
Ques 132. Name some major differences between C++ and Java.
Ques 133. What is the difference between Stack and Queue?
Ques 134. Write a function that will reverse a string.
Ques 135. What is the software Life-Cycle?
Ques 136. What is the difference between a Java application and a Java applet?
Ques 137. Name the 7 layers of the OSI Reference Model.
Ques 138. What are the advantages and disadvantages of B-star trees over Binary trees?
Ques 139. Write the pseudo code for Depth First Search.
Ques 140. Describe one simple rehashing policy.
Ques 141. Describe Stacks and name a couple of places where stacks are useful.
Ques 142. Suppose a 3-bit sequence number is used in selective-reject ARQ; what is the maximum number of frames that could be transmitted at a time?
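The page lists questions without answers. As a worked illustration of one recurring question (reversing a linked list, asked as Ques 7 and Ques 115), here is one common iterative answer sketch; the minimal struct layout is my own choice, not taken from the site:

```c
#include <assert.h>
#include <stddef.h>

struct Node {
    int value;
    struct Node *next;
};

/* Iteratively reverse a singly linked list in O(n) time and O(1) space.
   Returns the new head (the old tail). */
struct Node *reverse(struct Node *head) {
    struct Node *prev = NULL;
    while (head != NULL) {
        struct Node *next = head->next; /* save the rest of the list */
        head->next = prev;              /* re-point this node backwards */
        prev = head;                    /* advance prev */
        head = next;                    /* advance head */
    }
    return prev;
}
```

The same walk also bears on Ques 2: if the traversal reaches NULL, the list has an end; detecting a cycle instead calls for something like Floyd's two-pointer check.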
https://www.withoutbook.com/InterviewQuestionList.php?tech=12&dl=Top&subject=C++%20Interview%20Questions%20and%20Answers
On Tue, 2002-03-05 at 02:53, Eric van der Vlist wrote:

> Title says it all, the extensibility of XML is one of its myths...

I'm not sure it had to be mythical, nor am I convinced that extensibility is lost to all of us at this point.

> Technically, XML is based on trees which are not the most extensible
> structures (compared to tables or triples). If you extend a tree you are
> likely to break its structure (and existing applications). I would say
> that trees grow but are not "extended".

For some applications, you may be right, and different approaches like RDF probably make sense in those cases. Still, I have a very hard time imagining writing my data in triples, and a harder time imagining triples having anywhere near the transparency that relatively flat tree structures have.

Extending trees isn't difficult so long as you haven't tightly bound yourself to a particular vision of how the tree must absolutely positively precisely be structured. If you're willing to accept that adding a new branch to a tree or reorganizing a branch doesn't automatically make it a diabolical mutant, there's a lot more flexibility.

Extensibility scares people - while object-oriented programming, for instance, permits all kinds of extensibility through class and interface structures, it also has best practices which strongly encourage encapsulation, hiding all of those details from other parts of the program. XML has a tougher challenge here, as it is both extensible and extremely open.

I prefer to think of XML as a broad syntax containing all of its possible applications rather than as a toolkit for building specific vocabularies. This perspective seems to encourage rather different best practices than focusing on vocabulary-building. The document contents are the foundation of processing, not a particular schema, and the same contents need to be visible ("fully composed", as Tim Bray put it yesterday) to all comers, without any vocabulary-specific knowledge.
Once we've got that document, we can (and should) go anywhere. Tools like schemas, stylesheets, RDDL, and code should let us explore and describe possibilities, not choke off anything that doesn't conform to a particular vocabulary.

> This technical limitation has been strengthen by a community which has
> become conservative and any evolution seems deamed to fail.

I think the conservatism has always been there. As radical as XML is ("Hey folks! Create your own labeled structures for information!"), a lot of people have sold it short. Incessant talk about the need for firm contracts between parties, a fondness for expert committees, and the continuing desire to couple program logic and information as tightly as possible are constant themes.

To some extent, it's reasonable. Hurling XML into the world without such restraints would likely have created (even more of) a backlash against these crazy anarchists. Programmers are pretty conservative when it comes to data structures, and I suspect most of them have learned firm lessons from the limitations of earlier systems. Tools vendors are out to make money selling tools, not extensibility.

I can't say I'm very impressed with the "official" line of evolution right now. I suspect the W3C is (as it seems to have always been) mired in a notion of XML as vocabulary toolkit rather than a syntactical continuum, and their output seems to confirm that. Recent discussions on www-tag about how tightly to bind processing resources to namespace URIs aren't exactly encouraging, either. (On the other hand, I'll admit to liking some of the vocabularies they're building there.)

> XML is now legacy. Its users community is screaming against any change
> and its specification body seems paralysed by its structure and the
> diverging interests of its members...

I'm not sure that makes XML as a whole legacy. It certainly means that those of us who want to do things differently have to get our points across in both words and code.
I don't see the W3C or the tools vendors doing it, and I don't see the people for whom XML is just a small part of their job doing it.

> It's probably time to look for the next wave!

I suspect the next wave will still be markup of some kind. There may be tuples or triples involved somewhere, but I can't see them being on the surface. It may be a good time to start making waves, while the vendors are all in a corner marveling at Web Services.
http://aspn.activestate.com/ASPN/Mail/Message/xml-dev/1056751
Learn all about how Unit Tests in Unity work and how to use them in your projects in this great tutorial.

This is a companion discussion topic for the original entry.

Great article, Anthony! I am starting QA-ing a Unity engine based project and looking for the tool for the integration/e2e tests. Do you maybe have experience with such tools, can you recommend any? Thank you.

@anthonyuccello Can you please help with this when you get a chance? Thank you - much appreciated! :]

As soon as I create the Assembly in my scripts folder I get ‘type or namespace’ errors. Why?

@anthonyuccello Do you have any feedback about this? Thank you - much appreciated! :]

I believe this article is not technically demonstrating unit testing, but in fact integration testing. More specifically, the "unit tests" here are a form of bottom-up integration testing. The type of test described by the article to be an integration test (i.e. testing in a production environment or simulating a user's interaction) is more accurately top-down (or "big bang") integration testing.

The primary reason for this observation is that the AsteroidsMoveDown test resembles a simulation, a characteristic of an (albeit automated) integration test. It instantiates a Game Monobehaviour, then an Asteroid object, allows the simulation to run for a time with yield return new WaitForSeconds(...); and validates the resulting environment for the asteroid's position. This would be an unsurprising format had this test been labeled an integration test. In truth, this "unit test" cannot be distinguished from an integration test which involved the "modules" of an asteroid, game and the game engine itself to validate the functional requirement that asteroids must fall over time.

A few points for those learning to write unit tests, and for those trying to do so in Unity: yield return new WaitForSeconds(...).
Writing unit tests is itself a skill, as is designing a testable architecture. Poor tests can be written for poor code, which accounts for the sentiment of many who hate writing unit tests. The tests in this article appear to be forced, adding relatively little value in contrast to the effort required to write and maintain them.

To the writer's credit, Unity is notoriously problematic for writing unit tests, due to the extensive coupling between Monobehaviours and the Unity engine itself. The symptoms of unit testing within Unity's framework are identical to those of any large untested, highly coupled codebase. The difference is that, to the typical user of Unity, Unity's source code is not available and so should be treated as a "third-party dependency." Every part of the game that is coupled to the engine (e.g. by deriving from Monobehaviour) inherits the engine's resistance to being tested. While it may be feasible to leverage unit tests while working with Unity, it is not reasonable to force it on such a reluctant system. I expect a correct approach to decouple the logic under test from the engine, something that is not accomplished if one is invoking the engine with its API, e.g. Instantiate(), GetComponent(), yield return new WaitForSeconds(...), etc.

@riggy97 Can you post a screenshot? Are you missing an import? Can you double back and follow the steps exactly and confirm the issue you are getting (and share the full error)?

Very interesting perspective Manticore… I wonder what @anthonyuccello would say in response? Regardless, as someone who has not done any testing in Unity, I found the article very well written and extremely informative. I look forward to learning more about this topic in the future! Thank you! Daniel

This tutorial is more than six months old so questions are no longer supported at the moment for it. Thank you!
https://forums.raywenderlich.com/t/introduction-to-unity-unit-testing-raywenderlich-com/71714/8
Minimalistic unit testing framework for Elm.

In the root directory of an Elm project:

- `elm-package install eunit` to install the `eunit` Elm package
- `npm install eunit-runner` to install the command line test runner

To generate a test example, run `eunit init` in the root directory of an Elm project to which tests should be added. The test example should be created in `test/Main.elm`:

```elm
import Expectation exposing (eql, isTrue)
import Test exposing (it, describe, Test)
import Runner exposing (runAll)
import Html exposing (Html)

all : Test
all =
    describe "Arithmetic operations"
        [ describe "Addition"
            [ it "should add two positive numbers" <| eql (1 + 2) 3
            , it "should be commutative" <| eql (1 + 2) (2 + 1)
            , it "should be associative" <| eql ((1 + 2) + 3) (1 + (2 + 3))
            ]
        , describe "Multiplication"
            [ it "should multiply two positive numbers" <| eql (2 * 3) 6
            , it "should be commutative" <| eql (2 * 3) (3 * 2)
            , it "should be associative" <| eql ((2 * 3) * 4) (2 * (3 * 4))
            ]
        , describe "Subtraction"
            [ it "should subtract two numbers" <| eql (2 - 3) -1
            , it "should be commutative?" <|
                -- Failing test, subtraction is not commutative!
                eql (2 - 3) (3 - 2)
            , it "should be associative?" <|
                -- Failing test, subtraction is not associative!
                isTrue (((2 - 3) - 4) == (2 - (3 - 4)))
            ]
        ]

main : Html msg
main =
    runAll all
```

The test structure should be self-explanatory; it is inspired largely by Jasmine. Some differences:

- `it` can have only one expectation: `eql`, `isTrue`, `isFalse`
- there is no `beforeEach` and `afterEach`, as tests are written for stateless functions and do not require setting up shared state

Tests can simply be run in a browser: just start `elm-reactor` in the root directory of the project and access `test/Main.elm`. Running tests in a browser can be a good way to debug a particular test failure.

Make sure that the `eunit-runner` NPM package is installed, then run `eunit` in the root directory of the project. You should get an output like the following one:

    EUnit test runner
    Running test suite...
    Arithmetic operations .......xx

    Elapsed time: 491ms
    Passed: 7
    FAILED: 2

Open in a browser for more details.
https://package.frelm.org/repo/179/1.0.0
Create fractal music and turn it into intriguing musical patterns with a Raspberry Pi.

Fractals seem to have gone out of favour when it comes to computers, which is a pity because there are plenty of exciting things to explore with them, especially in the field of music. Most people think of a fractal as a complex curve, and there are a few pleasant-looking standard examples. The basic property of a fractal is that it is self-similar; that is, you see the same sort of pattern if looking at a very small magnified portion of the curve as you do when you look at a zoomed-out portion. They are both similar, but not of course identical. Music has a similar structure, with patterns of notes repeating but developing throughout the composition. This is a rich, and largely untapped, source of tunes and inspiration.

Fractal music generation

There are many ways to generate fractals, but here we will be looking at one method, the Lindenmayer system, or L-system for short. This is a recursive algorithm inspired by biological systems; it works by successive applications of substitution rules to a string of symbols to generate another, normally longer, string of symbols. This output string is fed into the input again and a new string is generated. This process is repeated any number of times and produces a fractal, or self-similar, sequence. The rules and the initial string, called the axiom, determine the outcome.

Let's see how this works in practice by looking at a very simple example, shown in Figure 1. This has just three symbols – A, B, and C – and each symbol has a rule for substitution. So when we encounter an A in the input stream, we replace it with the symbols BA in the output stream. When B is encountered, we replace it with a C. When C is found, we replace it with an AB. These rules are shown on the left of the diagram. If we start with the simple axiom of C, then after the first application of the rules we get the string AB.
Then run it through the rules again: the first symbol A is replaced by BA and the second symbol B is replaced by C. This applying of rules to an input string to produce an output string is known as a level of recursion; after four levels, our symbol string is ABCBABAC. The rules can be arbitrarily changed to produce different outcomes and the string can involve as many single-character symbols as you like.

Rules about rules

While the rules can be arbitrary, in order to be successful they need to follow some rules themselves. First of all, each symbol used must have a rule associated with it, and that symbol must occur in at least one of the results of another rule. If this is not observed then some symbols will be isolated and never appear in the output stream. If all the symbol rules map only to another symbol, then the output stream always remains the same length; sometimes you might want this, but normally you will want a sequence to grow. Once an output sequence, or successive sequences, has been produced, you need another set of rules, called production rules, to interpret it, in our case into MIDI notes. Let's look at a simple example.

Simple example

The code in Simple.py generates a sequence of symbols and then plays them; this then repeats to a given level of recursion. The rules are expressed as a list of tuples; each tuple has two parts, the input condition and the output condition. To make things easy to interpret for us, we have added the string "->" to the end of the input string, so that the tuple ("A->","AB") means replace the symbol A with the symbols AB. This just makes it easy for us to spot what a rule is doing and to change it. The code first opens the MIDI port and prints out the rules and axiom. Then these rules are applied six times and the result of each recursion is added to a list called composition. Finally, the composition is passed to a sonification function called sonify.
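The substitution process itself is only a few lines of code. This sketch (mine, not the article's Simple.py) applies the Figure 1 rules A→BA, B→C, C→AB to the axiom C, and reproduces the four-level sequence quoted above:

```python
# L-system rewriting: every symbol in the input string is replaced
# by its rule's output to form the next level of recursion.
rules = {"A": "BA", "B": "C", "C": "AB"}

def rewrite(symbols):
    """Apply one level of recursion to a symbol string."""
    return "".join(rules[s] for s in symbols)

sequence = "C"            # the axiom
for level in range(4):    # four applications of the rules
    sequence = rewrite(sequence)

print(sequence)  # ABCBABAC, as in the worked example
```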
The rules for turning the symbols into notes here are very simple: each symbol, A to G, is turned into a MIDI note number representing the notes A to G, defined by the notes list, and played for a time defined by the variable noteDuration. This plays each level of recursion with a short gap between each. Quitting the program with CTRL+C will cause the program to turn all the MIDI notes off before quitting, so you don't get any hanging notes. The code uses MIDI voice 19, the church organ, but you can change this to anything. If you want to alter this on the fly, then you can load up the MIDI voice test program from The MagPi #63 to run at the same time. Just navigate to the folder containing it using a Terminal window and type:

nohup python3 voiceTest.py &

A multichannel version

The previous example just played a single instrument for a single line. In this next example, the last three levels of recursion are played at the same time on different instruments. As each level is of a different length, the note-on time is adjusted so that the playing time as a whole is the same for each track. This means that the smaller levels of recursion have longer notes than the higher ones. A lot of the code is the same as the first example; so, instead of repeating the whole listing, we have just printed the changes you have to make to Simple.py in NewFunctionsfor_Simple.py. These changes are basically a new sonify function along with two additional functions, notFinished and playNext. The instruments and volume levels are set at the start of the sonify function, and we have used the 'rain' instrument for the long notes because it has something interesting going on in the background for held notes. Short notes, we think, are best when the sound itself is short, like a bell.

Adding graphics

To add some graphics requires a much longer program and our normal Pygame framework.
We have written one that will produce the sound of the last example, only play it back by building up the composition, adding one track at a time. The screen output is shown in Figure 2 and the code can be found in our GitHub repository. It might not look like a classic fractal, but that is because of the very simple mapping of the sonification: one symbol represents one note. To get a bit more flexibility, we need to add a different sort of rule: that of interpreting the fractal string.

Interpretation rules

Interpretation rules are somewhat different to the rules we used before to produce a symbol string. They are not a sequence of substitutions but a set of things to do when sonifying each symbol. For example, suppose we add some symbols in the production rules to alter the length of a note. When this symbol occurs, the note duration changes but no note is generated for that symbol; that means we can't use the trick of altering the note length based on the length of the sequence. It's easy enough to do this, however, and it adds a bit of variety to the composition. The extra code to add this feature is in ExtraCodeforNoteDuration_Functions.py and it shows what you have to change in the Simple.py code: it is just a replacement for the sonify function and a new axiom string and rules list. Here there are three lengths of note defined by the symbols q, h, and c, and these change the duration variable for subsequent notes.

Adding a state machine

These interpretation rules are still direct substitutions of notes and lengths of notes. To get another level of complexity you have to get these symbols to interact with a state machine, and use the latter to define various parameters of the music. When this system is used for producing fractal drawings, that state machine is a turtle graphics drawing package. Symbols represent turtle commands like move forward, turn left or right a specific angle, or move without drawing, to name but four.
It is the cumulative result of these sorts of commands that determines what is drawn at any one time. In order to get separate branches, there are two other types of operation represented by symbols: the [ which places the turtle state on a stack, and the ] which restores the turtle state from a stack. You can have a look at such a system if you install a graphics program called Inkscape. When you run it, go to the Extensions menu, select Render, then the L-system option. You will get a window that allows you to set rules and turn angles; you can set more than one rule by separating them with a semicolon. Figure 3 shows a list of rules for fractals to set you off exploring.

Bush
Axiom: ++F
Rules: F=FF-[-F+F+F]+[+F-F-F]

Dragon Curve
Axiom: FX
Rules: X=X+YF; Y=FX-Y
Angle: 90

Koch Island
Axiom: -F--F--F
Rules: F=F+F--F+F
Angle: 60

Other fractal
Axiom: W
Rules: W=+++X--F--ZFX+; X=---W++F++YFW-; Y=+ZFX--F--Z+++; Z=-YFW++F++Y---
Angle: 30

Penrose P3
Axiom: [N]++[N]++[N]++[N]++[N]
Rules: M=OA++PA----NA[-OA----MA]++; N=+OA--PA[---MA--NA]+; O=-MA++NA[+++OA++PA]-; P=--OA++++MA[+PA++++NA]--NA
Angle: 36

In the same way, you can implement a music turtle that determines the frequency, duration, and any effects you care to specify. So the range of notes is much wider than you can get from a one-to-one mapping of symbol to note. This musical turtle can be restrained to a certain range of parameters by wrapping round the values as they exceed their limits. The code for this is shown in Classic_Fractals.py and although it looks similar to the other listings, it does have many slight changes. For a start, the production rules have been changed to reflect the Inkscape system: where there is no rule for a symbol, that symbol is just copied to the output string. Also, the interpretation rules match ranges of symbols: any symbol A to F plays a note and updates the pitch, whereas any symbol G to L just updates the pitch.
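A stripped-down "music turtle" makes the state-machine idea concrete. This is an illustrative sketch, not the article's Classic_Fractals.py: letters play a note and advance a pitch index, + and - change the step size, and [ and ] push and pop the state, with the pitch wrapping round a fixed range:

```python
def interpret(s, nNotes=8):
    pitch, step = 0, 1
    stack = []
    played = []  # pitch indices in playing order
    for ch in s:
        if ch.isalpha():      # play a note, then move the pitch
            played.append(pitch)
            pitch = (pitch + step) % nNotes  # wrap round the note range
        elif ch == "+":
            step += 1
        elif ch == "-":
            step -= 1
        elif ch == "[":       # remember where we are (start of a branch)
            stack.append((pitch, step))
        elif ch == "]":       # return to the saved state
            pitch, step = stack.pop()
    return played

print(interpret("F+F[FF]F"))  # [0, 1, 3, 5, 3]
```

Note how the last F picks up from pitch 3 again: the bracketed notes branched off and the ] restored the saved state, exactly the behaviour the [ and ] symbols give the drawing turtle.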
Note the initKey function; this generates a lookup table in any major key determined by the starting note. The rules in the listing are for a bush whose graphical representation is shown in Figure 4.

Results

Well, what does all this sound like? The uncharitable might say it sounds like a maniac practising scales, but there is a lot more to it than that. We liked the simpler systems best, as we felt there was a tune trying to break out and occasionally succeeding; you could definitely hear the self-similarity coming through. Small changes in rules produced small changes in melodies, which is good for control, and we liked the multitimbral approach of having more than one track playing at the same time.

Taking it further

Like no other project, this is one you just have to tinker with. You can have a lot of fun making up rules and listening to the results. This just requires typing them in at the start of the program. There are lots of variations you can make to the production rules, like attaching a probability factor to some. For example, you can have two rules for one symbol, and attach a probability that one rule will be used over another simply by generating a number from one to ten, and if the number is above some value then use rule one, otherwise use rule two. The production rules for the state machine can be changed to include note duration or even note timbre. For serious music it is probably best to pick out the good bits in a fractal sequence and incorporate that into your own music.
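The probabilistic variation suggested above can be sketched like this. It is an illustration of the idea, not code from the article: each symbol carries a list of weighted substitutions, and one is picked at random on every rewrite:

```python
import random

# Invented rules for illustration: A usually becomes AB, sometimes BA.
rules = {"A": [(0.7, "AB"), (0.3, "BA")], "B": [(1.0, "A")]}

def rewrite(s, rules, rng):
    out = []
    for ch in s:
        choices = rules.get(ch)
        if choices is None:   # no rule: copy the symbol through
            out.append(ch)
            continue
        r = rng.random()
        acc = 0.0
        for weight, result in choices:
            acc += weight
            if r <= acc:      # pick this substitution
                out.append(result)
                break
    return "".join(out)

rng = random.Random(42)       # seeded, so runs are repeatable
s = "A"
for _ in range(5):
    s = rewrite(s, rules, rng)
print(s)
```

Seeding the generator keeps a favourite melody reproducible while still giving a different tune for every seed.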
All four listings below rely on small MIDI helper functions – init(), offMIDI(), and initMIDI() – which are not shown here.

Classic_Fractals.py:

```python
import time, copy
import rtmidi

midiout = rtmidi.MidiOut()
noteDuration = 0.3
axiom = "++F"  # Bush
rules = [("F->", "FF-[-F+F+F]+[+F-F-F]")]
newAxiom = axiom

def main():
    global newAxiom
    init()  # open MIDI port
    offMIDI()
    initKey()
    print("Rules :-")
    print(rules)
    print("Axiom :-")
    print(axiom)
    composition = [newAxiom]
    for r in range(0, 4):  # change for deeper levels
        newAxiom = applyRules(newAxiom)
        composition.append(newAxiom)
    sonify(composition)

def applyRulesOrginal(start):
    expand = ""
    for i in range(0, len(start)):
        rule = start[i:i+1] + "->"
        for j in range(0, len(rules)):
            if rule == rules[j][0]:
                expand += rules[j][1]
    return expand

def applyRules(start):
    expand = ""
    for i in range(0, len(start)):
        symbol = start[i:i+1]
        rule = symbol + "->"
        found = False
        for j in range(0, len(rules)):
            if rule == rules[j][0]:
                expand += rules[j][1]
                found = True
        if not found:
            expand += symbol
    return expand

def sonify(data):  # turn data into sound
    initMIDI(0, 65)  # set volume
    noteIncrement = 1
    notePlay = len(notes) / 2
    midiout.send_message([0xC0 | 0, 19])  # voice 19 Church organ
    lastNote = 1
    for j in range(0, len(data)):
        duration = noteDuration       # start with same note length
        notePlay = len(notes) / 2     # and same start note
        noteIncrement = 1             # and same note increment
        stack = []                    # clear stack
        print("")
        if j == 0:
            print("Axiom ", j, data[j])
        else:
            print("Recursion ", j, data[j])
        for i in range(0, len(data[j])):
            symbol = ord(data[j][i:i+1])
            if symbol >= ord('A') and symbol <= ord('F'):  # play current note
                note = notes[int(notePlay)]
                midiout.send_message([0x80 | 0, lastNote, 68])  # last note off
                midiout.send_message([0x90 | 0, note, 68])      # next note on
                lastNote = note
            if symbol >= ord('A') and symbol <= ord('L'):  # move note
                notePlay += noteIncrement
                if notePlay < 0:  # wrap round playing note
                    notePlay = len(notes) - 1
                elif notePlay >= len(notes):
                    notePlay = 0
                time.sleep(duration)
            if symbol == ord('+'):
                noteIncrement += 1
                if noteIncrement > 6:
                    noteIncrement = 1
            if symbol == ord('-'):
                noteIncrement -= 1
                if noteIncrement < -6:
                    noteIncrement = -1
            if symbol == ord('|'):  # turn back
                noteIncrement = -noteIncrement
            if symbol == ord('['):  # push state on stack
                stack.append((duration, notePlay, noteIncrement))
            if symbol == ord(']'):  # pull state from stack
                if len(stack) != 0:
                    recovered = stack.pop(int(len(stack) - 1))
                    duration = recovered[0]
                    notePlay = recovered[1]
                    noteIncrement = recovered[2]
                else:
                    print("stack empty")
        midiout.send_message([0x80 | 0, lastNote, 68])  # last note off
        time.sleep(2.0)

def initKey():
    global startNote, endNote, notes
    key = [2, 1, 2, 2, 1, 2]  # defines scale type - a Major scale
    notes = []                # lookup list: note number to MIDI note
    startNote = 24            # defines the key (this is C)
    endNote = 84
    i = startNote
    j = 0
    while i < endNote:
        notes.append(i)
        i += key[j]
        j += 1
        if j >= 6:
            j = 0
```

Simple.py:

```python
import time, random, copy
import rtmidi

midiout = rtmidi.MidiOut()
notes = [57, 59, 60, 62, 64, 65, 67]
noteDuration = 0.3
axiom = "AD"
rules = [("A->", "AB"), ("B->", "BC"), ("C->", "ED"), ("D->", "AF"),
         ("E->", "FG"), ("F->", "B"), ("G->", "D")]
newAxiom = axiom

def main():
    global newAxiom
    init()  # open MIDI port
    offMIDI()
    print("Rules :-")
    print(rules)
    print("Axiom :-")
    print(axiom)
    composition = [newAxiom]
    for r in range(0, 6):
        newAxiom = applyRules(newAxiom)
        composition.append(newAxiom)
    sonify(composition)

def applyRules(start):
    expand = ""
    for i in range(0, len(start)):
        rule = start[i:i+1] + "->"
        for j in range(0, len(rules)):
            if rule == rules[j][0]:
                expand += rules[j][1]
    return expand

def sonify(data):  # turn data into sound
    initMIDI(0, 65)  # set volume
    midiout.send_message([0xC0 | 0, 19])  # voice 19 Church organ
    lastNote = 1
    for j in range(0, len(data)):
        if j == 0:
            print("Axiom ", j, data[j])
        else:
            print("Recursion ", j, data[j])
        for i in range(0, len(data[j])):
            note = notes[ord(data[j][i:i+1]) - ord('A')]  # get note given by letter
            midiout.send_message([0x80 | 0, lastNote, 68])  # last note off
            midiout.send_message([0x90 | 0, note, 68])      # next note on
            lastNote = note
            time.sleep(noteDuration)
        midiout.send_message([0x80 | 0, lastNote, 68])  # last note off
        time.sleep(2.0)
```

NewFunctionsfor_Simple.py (replacement sonify plus two new helper functions for Simple.py):

```python
def sonify(data):
    melodyLines = 3  # change for more or less lines
    # for more melody lines add more elements to the next two lists
    instruments = [112, 0, 96]  # instruments for each line
    volume = [50, 60, 65]       # volume for each line
    lastNote = []
    index = []
    startTime = []
    interval = []
    lineLength = []
    for i in range(0, melodyLines):
        initMIDI(i, volume[i])  # set up MIDI channel
        midiout.send_message([0xC0 | i, instruments[i]])  # set voice
        startTime.append(time.time())  # set up lists
        index.append(0)
        lastNote.append(0)
        interval.append(noteDuration * len(data[len(data)-1]) / len(data[len(data)-1-i]))
        lineLength.append(len(data[len(data)-1-i]))
    print()
    print("Playing")
    for i in range(0, melodyLines):
        print("line", i, "voice", instruments[i], "length", lineLength[i],
              "notes of duration", interval[i], "seconds")
    while notFinished(melodyLines, lineLength, index):
        for i in range(0, melodyLines):
            if time.time() - startTime[i] > interval[i]:
                lastNote[i] = playNext(i, index[i], lastNote[i], data, len(data)-1)
                index[i] += 1
                startTime[i] = time.time()
        time.sleep(noteDuration)
    for i in range(0, melodyLines):
        midiout.send_message([0x80 | i, lastNote[i], 68])  # last note off

def notFinished(playingLines, length, point):
    notDone = True
    for i in range(0, playingLines):
        if point[i] >= length[i]:
            notDone = False
    return notDone

def playNext(midiChannel, i, lastNote, data, line):
    note = notes[ord(data[line][i:i+1]) - ord('A')]  # get note given by letter
    midiout.send_message([0x80 | midiChannel, lastNote, 68])  # last note off
    midiout.send_message([0x90 | midiChannel, note, 68])      # next note on
    return note
```

ExtraCodeforNoteDuration_Functions.py (new axiom, rules, and sonify replacement for Simple.py):

```python
axiom = "qAhD"
rules = [("A->", "ABc"), ("B->", "BCh"), ("C->", "EDq"), ("D->", "AFc"),
         ("E->", "FGh"), ("F->", "Bq"), ("G->", "Dc"),
         ("q->", "hA"), ("h->", "qF"), ("c->", "hF")]

def sonify(data):  # turn data into sound
    initMIDI(0, 65)  # set volume
    midiout.send_message([0xC0 | 0, 19])  # voice 19 Church organ
    lastNote = 1
    for j in range(0, len(data)):
        duration = noteDuration  # start with same note length
        if j == 0:
            print("Axiom ", j, data[j])
        else:
            print("Recursion ", j, data[j])
        for i in range(0, len(data[j])):
            symbol = ord(data[j][i:i+1])
            if symbol >= ord('A') and symbol <= ord('G'):  # it is a note
                note = notes[symbol - ord('A')]  # get note given by letter
                midiout.send_message([0x80 | 0, lastNote, 68])  # last note off
                midiout.send_message([0x90 | 0, note, 68])      # next note on
                lastNote = note
                time.sleep(duration)
            else:  # it is a note duration
                if symbol == ord('h'):
                    duration = noteDuration * 2
                if symbol == ord('c'):
                    duration = noteDuration
                if symbol == ord('q'):
                    duration = noteDuration / 2
        midiout.send_message([0x80 | 0, lastNote, 68])  # last note off
        time.sleep(2.0)
```
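One detail of the multichannel sonify function above worth checking is the interval arithmetic: each track's note duration is scaled by the length of the deepest recursion level. A quick standalone Python check (the level strings here are invented for illustration):

```python
noteDuration = 0.3
# Three levels of recursion, deepest (longest) level first.
levels = ["ABCBABAC", "BACAB", "ABC"]

# Each track's note length is scaled by the deepest level's length,
# so shorter (earlier) levels get proportionally longer notes...
intervals = [noteDuration * len(levels[0]) / len(lv) for lv in levels]

# ...and every track then takes the same total time to play.
totalTimes = [iv * len(lv) for iv, lv in zip(intervals, levels)]
print(intervals)
print(totalTimes)
```

The shortest string gets the longest notes, and all three total times come out equal – which is exactly why the tracks stay in step.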
https://magpi.raspberrypi.com/articles/fractal-music-explore-musical-patterns
Watching Miro's video on custom search indexes, around 40 minutes in he uses SearchHelper.CreateDocument and it returns a Lucene Document object.

I'm trying to re-create this. The key thing for me is gaining access to the boost properties of Lucene documents and fields. For example, as demonstrated here. This is important because I want the presence of a search term in certain fields to be deemed more relevant than in other fields. For example, if the search term is present in the page name, this is clearly a more important result than if the term is elsewhere on the page.

However, in my own tests, when I use SearchHelper.CreateDocument I get a CMS.DataEngine.ISearchDocument object and not a Lucene Document. I assume this is a change in Kentico 7 or 8. From what I can tell, this is a simplified abstraction of the Lucene Document and does not offer any means of boosting particular fields.

Is boosting simply no longer possible with later versions of Kentico? This concerns me because it leaves me with very little control. I've seen suggestions that I would have to use Lucene's query parser syntax (e.g. this post), but I would much prefer if this was configured as part of the index, as is traditional with Lucene. Advice much appreciated.

You can cast the ISearchDocument to LuceneSearchDocument, which comes from CMS.Search.Lucene or from CMS.Search.Lucene3, depending on the version you're using. In Lucene 3, the LuceneSearchDocument has a Boost property, which can be set:

```csharp
using CMS.Search.Lucene3;

ISearchDocument doc = SearchHelper.CreateDocument(documentParameters);
var ldoc = doc as LuceneSearchDocument;
ldoc.Document.Boost = 2.0f;
```

It may be due to this API update:

Class method Lucene.Net.Documents.Document CMS.SiteProvider.SearchHelper.CreateDocument(System.String, System.String, System.String, System.DateTime, System.String) was removed. You can use method CMS.Search.SearchHelper.CreateDocument(CMS.Search.SearchDocumentParameters) instead.

Thanks Rui. But as I mentioned, CMS.Search.SearchHelper.CreateDocument(CMS.Search.SearchDocumentParameters) returns a CMS.DataEngine.ISearchDocument object, which is not a Lucene type and doesn't, as far as I can tell, expose any Boost properties.

Rui, thanks again. This is really helpful. However, the assembly CMS.Search.Lucene3 in my Kentico 8.2 Lib directory does not contain a LuceneSearchDocument class. What am I missing here? I only see the 4 classes in the screenshot below...

Are you using v8.2 without any hotfixes? It was missing in 8.2 but was fixed in 8.2.1:

Search - LuceneSearchDocument class made public. The 'LuceneSearchDocument' class was made public to support advanced search customization scenarios.

You can apply the latest hotfix.

You're absolutely right. Thank you so much Rui, the hotfix has given me access to that class.
https://devnet.kentico.com/questions/using-lucene-boosting-in-custom-smart-search-index
This guide shows you how to use the BMP180 barometric sensor with the ESP32 to read pressure and temperature and to estimate altitude. We'll show you how to wire the sensor to the ESP32, install the needed library, and write the sketch in the Arduino IDE.

Introducing the BMP180 Barometric Sensor

The BMP180 is a digital pressure sensor that measures the absolute pressure of the air around it. It features a measuring range from 300 to 1100 hPa with an accuracy down to 0.02 hPa. Because temperature affects the pressure, the sensor comes with a temperature sensor to give temperature-compensated pressure readings. Additionally, because the pressure changes with altitude, you can also estimate the altitude based on the current pressure measurement.

Wiring the BMP180 Sensor to the ESP32

The BMP180 barometric sensor uses the I2C communication protocol, so you need to use the SDA and SCL pins of the ESP32. Wire the sensor's SDA and SCL lines to the ESP32's default I2C pins (GPIO 21 and GPIO 22), and its power pins to 3.3V and GND.

Reading Temperature, Pressure, and Altitude

In this section we'll show you how to read pressure and temperature from the BMP180 barometric sensor using the ESP32. We'll also show you how to estimate altitude.

Parts required

For this example, you need the following parts:

- ESP32 module (ESP32 DOIT DEVKIT V1 board) – read ESP32 development boards comparison
- BMP180 barometric sensor
- Jumper wires

You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!

Schematic

Wire the BMP180 barometric sensor to the ESP32 as shown in the following schematic diagram.

Installing the BMP_085 Library

One of the easiest ways to read pressure, temperature and altitude with the BMP180 sensor is to use the BMP_085 library by Adafruit. This library is compatible with both the BMP085 and the BMP180 sensors. Follow the next steps to install the library in your Arduino IDE:

Open your Arduino IDE and go to Sketch > Include Library > Manage Libraries. The Library Manager should open.
Search for "BMP085" in the Search box and install the BMP085 library from Adafruit. After installing, restart your Arduino IDE.

Code

The library provides an example showing how to get temperature, pressure, and altitude. Go to File > Examples > Adafruit BMP085 Library > BMP085test.

```cpp
/*
 * Rui Santos
 * Complete Project Details
*/

#include <Wire.h>
#include <Adafruit_BMP085.h>

Adafruit_BMP085 bmp;

void setup() {
  Serial.begin(9600);
  if (!bmp.begin()) {
    Serial.println("Could not find a valid BMP085/BMP180 sensor, check wiring!");
    while (1) {}
  }
}

void loop() {
  Serial.print("Temperature = ");
  Serial.print(bmp.readTemperature());
  Serial.println(" *C");

  Serial.print("Pressure = ");
  Serial.print(bmp.readPressure());
  Serial.println(" Pa");

  // Calculate altitude assuming 'standard' barometric
  // pressure at sea level
  Serial.print("Altitude = ");
  Serial.print(bmp.readAltitude());
  Serial.println(" meters");

  // More precise altitude, using the current sea-level pressure
  Serial.print("Real altitude = ");
  Serial.print(bmp.readAltitude(102000));
  Serial.println(" meters");

  Serial.println();
  delay(500);
}
```

The code starts by importing the needed libraries:

#include <Wire.h>
#include <Adafruit_BMP085.h>

You create an Adafruit_BMP085 object called bmp:

Adafruit_BMP085 bmp;

In the setup(), the sensor is initialized:

void setup() {
  Serial.begin(9600);
  if (!bmp.begin()) {
    Serial.println("Could not find a valid BMP085/BMP180 sensor, check wiring!");
    while (1) {}
  }
}

Reading Temperature

To read the temperature you just need to use the readTemperature() method on the bmp object:

bmp.readTemperature()

Reading Pressure

Reading the pressure is also straightforward. You use the readPressure() method:

bmp.readPressure()

The pressure readings are given in Pascal units.

Reading Altitude

Because the pressure changes with altitude, you can estimate your current altitude by comparing it with the pressure at sea level. The example gives you two different ways to estimate altitude.

1. The first assumes a standard barometric pressure of 101325 Pascal at sea level. You get the altitude as follows:

bmp.readAltitude()

2. The second method assumes the current pressure at sea level. For example, if at the moment the pressure at sea level is 101500 Pa, you just need to pass 101500 as an argument to the readAltitude() method as follows:

bmp.readAltitude(101500)

Demonstration

Upload the code to your ESP32. Make sure you have the right board and COM port selected. Then, open the Serial Monitor at a baud rate of 9600.
You should get the sensor readings, as shown in the following figure.

Wrapping Up

In this guide we've shown you how to use the BMP180 barometric sensor with the ESP32 to read pressure and temperature and estimate altitude. Now, you can take this project further and display the latest sensor readings on a web server. We have several examples you can modify to display the readings:

- ESP32 Web Server with BME280 – Mini Weather Station
- ESP32 DHT11/DHT22 Web Server – Temperature and Humidity using Arduino IDE

We hope you've found this guide useful. If you like the ESP32, make sure you take a look at the following resources:

- Learn ESP32 with Arduino IDE (course)
- MicroPython Programming with the ESP32 (eBook)
- Getting Started with ESP32
- ESP32 Pinout Reference: Which GPIO pins should you use?
- ESP32 Web Server – Arduino IDE

Thanks for reading.

6 thoughts on "ESP32 with BMP180 Barometric Sensor – Guide"

hello there.. may i know what software you are using to draw that esp32 schematic ?

Hi. We use fritzing 🙂

Hi, for the DOIT ESP32 V1 board it was necessary to put Wire.begin(21, 22); after the line Serial.begin(9600);. There are problems with these GPIOs on this board, so it is necessary.

Hi Rodrigo. For us, it worked just fine the way it is. But if someone has problems with the pins, that is a good tip. Thanks for letting us know.

Thank you for the tutorial. I used an ESP32-based M5StickC. I put Wire.begin(0, 26, 10000); just before bmp.begin(). It works very good!

Thanks for the suggestion!
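As a cross-check on the altitude estimates above, the pressure–altitude relationship can be evaluated directly. The sketch below uses the standard international barometric formula (the 44330 m scale and 1/5.255 exponent are the usual textbook constants – an assumption about what the library does internally, but the same well-known formula):

```python
def altitude_m(pressure_pa, sea_level_pa=101325.0):
    # International barometric formula: altitude from the ratio of the
    # measured pressure to the pressure at sea level.
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

print(altitude_m(101325.0))  # at sea-level pressure: 0 m
print(altitude_m(100000.0))  # about 110 m
```

Passing a measured local sea-level pressure as the second argument plays the same role as the 101500 argument to readAltitude() in the sketch.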
https://randomnerdtutorials.com/esp32-with-bmp180-barometric-sensor/
a random 12-digit mac address generator

randmac.py

a utility that generates 12-digit mac addresses; either the NIC portion or the full 12-digit MAC. the optional -f argument will return a random 12-digit MAC address that can be identified by the locally administered address (LAA) format. This means you will always see x2, x6, xA, or xE at the beginning of a MAC address generated by randmac.

installation

to install with pip:

```
pip install randmac
```

requirements

Python >3.2 required.

mac address formats

Supported MAC address formats:

- MM:MM:MM:SS:SS:SS
- MM-MM-MM-SS-SS-SS
- MM.MM.MM.SS.SS.SS
- MMMM.MMSS.SSSS
- MMMMMMSSSSSS

where M stands for the manufacturer or vendor, and S stands for the NIC-specific portion.

usage

you can `from randmac import RandMac` and use it like `RandMac()`. if you wish to change the mac address format, provide a sample mac so randmac knows what the output format should be: `from randmac import RandMac` and then use it like `RandMac("0000.0000.0000")`.

from a terminal (if the console scripts entry point randmac is in your path and executable) you can use `randmac` to generate a new 12-digit LAA address, or `randmac 00:00:00:00:00:00 -p` to generate a MAC with the same OUI, but a different NIC portion.

example usage

```
>>> from randmac import RandMac
>>> RandMac()
'a6:9b:6b:8e:b3:42'
>>> RandMac("00:00:00:00:00:00", True)
'00:00:00:3f:8a:06'
>>> RandMac("0000:0000:0000", True)
'0000007ce662'
>>> RandMac("0000:0000:0000")
'06eb4584d1e3'
```

or

```
> randmac
fa:bf:7c:5d:65:3e
> randmac 00-00-00-00-00-00 -p
00-00-00-dd-5f-16
```

license

license can be found here.
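The LAA format described above is simply a property of the first octet. Here is an independent Python sketch of generating such an address – an illustration of the bit pattern, not randmac's actual implementation:

```python
import random

def random_laa_mac(rng):
    # Locally administered + unicast: in the first octet, the
    # "locally administered" bit (0b10) is set and the multicast bit
    # (0b01) is clear -- so the second hex digit is always 2, 6, a, or e.
    first = (rng.randrange(256) & 0b11111100) | 0b00000010
    octets = [first] + [rng.randrange(256) for _ in range(5)]
    return ":".join("%02x" % o for o in octets)

mac = random_laa_mac(random.Random(0))
print(mac)
```

Because the low two bits of the first octet are forced to 10, the second hex digit can only be 2, 6, a, or e – the x2/x6/xA/xE pattern randmac promises.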
https://pypi.org/project/randmac/
A lexicographic order is an arrangement of characters, words, or numbers in alphabetical order, that is, with the letters sorted from A to Z. This is also known as dictionary order because it is similar to searching for a particular word in an actual dictionary. We start by searching for the section containing the words starting with the first letter of the desired word. From that section, we search for the words that begin with the second letter of the desired word, and so on.

Let's understand this with the help of the following illustration: in the above example, we initially had a list of prime numbers: {2, 3, 5, 7, 11, 13, 17, 19, 23, 29}. After lexicographically arranging these numbers, the list becomes: {11, 13, 17, 19, 2, 23, 29, 3, 5, 7}.

Let's have a look at another example to better understand this concept. We have a list containing two words, {educative, educated}. We want to sort this list in lexicographical order, so we compare the two words letter by letter: the first six letters match, and at the seventh position 'e' comes before 'i', so "educated" precedes "educative". As a result, our list becomes {educated, educative}.
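Both orderings above can be verified in a couple of lines of Python, where sorting strings is lexicographic by definition:

```python
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
lex = sorted(str(p) for p in primes)
print(lex)    # ['11', '13', '17', '19', '2', '23', '29', '3', '5', '7']

words = sorted(["educative", "educated"])
print(words)  # ['educated', 'educative']
```

Note how "19" sorts before "2": the comparison is character by character, and '1' comes before '2'.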
```cpp
#include <iostream>
using namespace std;

// Returns the smallest leading (first) digit among the numbers in arr
int find_min(int *arr, int size) {
    // size must be greater than zero
    int temp = arr[0];
    while (temp / 10 != 0) {
        temp = temp / 10;
    }
    int min = temp;
    for (int i = 1; i < size; i++) {
        temp = arr[i];
        while (temp / 10 != 0) {
            temp = temp / 10;
        }
        if (min > temp) {
            min = temp;
        }
    }
    return min;
}

// Returns the largest value in arr
int find_max(int *arr, int size) {
    // size must be greater than zero
    int max = arr[0];
    for (int i = 1; i < size; i++) {
        if (max < arr[i]) {
            max = arr[i];
        }
    }
    return max;
}

// Prints number if its leading digit equals toCompare
void should_print(int number, int toCompare) {
    int temp = number;
    while (temp / 10 != 0) {
        temp = temp / 10;
    }
    if (temp == toCompare) {
        cout << number << " ";
    }
}

// Prints the numbers grouped by leading digit; numbers that share a
// leading digit are printed in the order they appear in the input
void print_lexicographically(int *arr, int size) {
    int min = find_min(arr, size);
    int max = find_max(arr, size);
    while (min <= max) {
        for (int j = 0; j < size; j++) {
            should_print(arr[j], min);
        }
        min += 1;
    }
}

int main() {
    int arr[10] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29};
    print_lexicographically(arr, 10);
    return 0;
}
```

Note: The provided array or set must be totally ordered.

- We create an array of size 10 containing the following values: {2, 3, 5, 7, 11, 13, 17, 19, 23, 29}.
- We call the print_lexicographically() function and pass the array and its size as arguments.
- On each pass, we increment the min variable by 1 in the while loop.
- We call the should_print() function to print the numbers whose first digit matches the minimum value.
- In should_print(), we split the integer to get the first digit. We then check whether that digit matches the minimum value. If it does, then we print the number.
https://www.educative.io/answers/what-is-a-lexicographic-order
Introduction: Making LCD Thermometer With Arduino and LM35/36

Hello! It will take you less than 10 minutes to build, once you have all the parts of course. :)

Step 1: Gathering the Parts

These are the parts you need to build the thermometer: an Arduino board, a 16x2 LCD display, a 10k potentiometer, an LM35 or LM36 temperature sensor, a breadboard, and jumper wires. Also, you can buy them separately from the following stores: Adafruit, SparkFun, Aliexpress, Banggood, etc.

*If you don't have a 10k pot, you can use a 50k like me!

Step 2: Building the Thermometer

By following the Fritzing schematic above, plug the LCD into the breadboard and then connect it to the Arduino board with jumpers. After that, plug the potentiometer and the sensor into the breadboard, connect the left and the right pins of the pot to ground and +5V, and the middle one to the LCD display. Then connect the sensor to ground, to +5V and to the Arduino, but be very careful, because if you connect it wrong the sensor will heat up to 280+ °C (540 °F) and might get damaged. Once you have connected everything, move on to the next step.

Step 3: Programming the Arduino

To get it to work you have to use one of the two codes below. Upload it to your Arduino using the integrated development environment (IDE for short), which you can download from Arduino's official page, and you are done!!! If you don't see anything on the LCD, or you see rectangles, turn the pot clockwise/anti-clockwise until you see the letters clearly. Now you have a thermometer and you can measure the temperature of the air around you, inside your house or outside. The first code is from Gaige Kerns, and it can be used to read from an LM36 or LM35. Thanks Gaige!!! Also check out my new thermometer project here!!!
```cpp
// include the library code
#include <LiquidCrystal.h>

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

// initialize our variables
int sensorPin = 0;
int tempC, tempF;

void setup() {
  // set up the LCD's number of columns and rows:
  lcd.begin(16, 2);
}

void loop() {
  tempC = get_temperature(sensorPin);
  tempF = celsius_to_fahrenheit(tempC);
  lcd.setCursor(0, 0);
  lcd.print(tempF);
  lcd.print(" ");
  lcd.print((char)223);
  lcd.print("F");
  delay(200);
}

int get_temperature(int pin) {
  // We need to tell the function which pin the sensor is hooked up to.
  // We're using the variable pin for that above.
  // Read the value on that pin
  int temperature = analogRead(pin);
  // Calculate the temperature based on the reading and send that value back
  float voltage = temperature * 5.0;
  voltage = voltage / 1024.0;
  return ((voltage - 0.5) * 100);
}

int celsius_to_fahrenheit(int temp) {
  return (temp * 9 / 5) + 32;
}
```

```cpp
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2); // digital pins to which you connect the LCD

const int inPin = 0; // A0 is where you connect the sensor

void setup() {
  lcd.begin(16, 2);
}

void loop() {
  int value = analogRead(inPin); // read the value from the sensor
  float millivolts = (value / 1024.0) * 5000;
  float celsius = millivolts / 10;
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print(celsius);
  lcd.print("C");
  lcd.setCursor(0, 1);
  lcd.print((celsius * 9) / 5 + 32); // turning the Celsius into Fahrenheit
  lcd.print("F");
  delay(1000);
}
```

3 People Made This Project!

Comments

HI, I am making a thermometer as my project for my apprenticeship. I want to make it one that will measure body temperature and that you can program different temps into it. For example, if a person's body temperature falls between 36.5°C–37.2°C, a green LED will come on to say that it is normal. Any advice would be appreciated.
Thank you.

Hi, is it possible to add an esp8266 between the lcd and lm35? i wanna monitor the temperature via web browser too.

First of all, great project. But it doesn't work for me. By the way, I am a newbie. Everything works, but as you can see on the pic, I am in my living room and not in a sauna. So please, help... Thank you.

Hey there, I tried all the codes provided, yet I'm still getting a very non-reasonable pattern of readings like this: 7.73, 0.00, 41.57, 3.97, 0.00, 41.78, 3.97, 0.00, 40.82, 6.34, 0.00, 25.78... in Celsius. Any help? Thanks in advance.

Can you provide a photo of your hardware setup?

I don't have an LCD so I'm viewing the results on the serial monitor, and so I made these connections. The Arduino is powered by the laptop itself.

Followed instructions to the letter and pin. Problem!!! LCD backlight won't turn on!!! PLEASE HELP!!!!!

Can you possibly also make this without the screen, but instead have the Arduino measure the temperature of something periodically and have it gather data for you? thank you!
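The conversion arithmetic used in the two sketches above is easy to sanity-check off the board. A Python translation (the helper names here are mine, not from the sketches):

```python
def lm35_celsius(reading, vref_mv=5000.0, levels=1024.0):
    # LM35: output starts at 0 mV and rises 10 mV per degree C.
    millivolts = reading / levels * vref_mv
    return millivolts / 10.0

def lm36_celsius(reading, vref_mv=5000.0, levels=1024.0):
    # LM36/TMP36-style: 500 mV offset, then 10 mV per degree C,
    # matching the (voltage - 0.5) * 100 line in the first sketch.
    millivolts = reading / levels * vref_mv
    return (millivolts - 500.0) / 10.0

def to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0

print(lm35_celsius(61))  # ~29.8 C for an ADC reading of 61
print(to_fahrenheit(lm35_celsius(61)))
```

Checking a few readings this way makes it obvious when a wild on-screen value comes from wiring rather than from the maths.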
http://www.instructables.com/id/Electronic-Thermometer-with-Arduino-UNO/
CC-MAIN-2018-09
refinedweb
822
72.36
On 12/06/2019 14:27, Marc Gonzalez wrote:
> b20c5249aa6a ("backlight: Fix compile error if CONFIG_FB is unset")
> added 'default m' for BACKLIGHT_CLASS_DEVICE and LCD_CLASS_DEVICE.

It took me some little while until I realized this patch is from 2005, which
explains why I couldn't find it in the modern git repo!

> Let's go back to not building support by default.

At first glance, disabling this by default looks like it would cause some
existing defconfig files to disable useful drivers.

For backlight I think this isn't true (because both DRM and FB_BACKLIGHT
have a "select" on BACKLIGHT_CLASS_DEVICE).

However for LCD it is not nearly as clear cut. The commit message needs to
explain why this won't cause unacceptable problems for existing defconfig
files.

Daniel.

> Signed-off-by: Marc Gonzalez <marc.w.gonzalez@free.fr>
> ---
>  drivers/video/backlight/Kconfig | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/drivers/video/backlight/Kconfig b/drivers/video/backlight/Kconfig
> index 8b081d61773e..40676be2e46a 100644
> --- a/drivers/video/backlight/Kconfig
> +++ b/drivers/video/backlight/Kconfig
> @@ -10,7 +10,6 @@ menu "Backlight & LCD device support"
>  #
>  config LCD_CLASS_DEVICE
>  	tristate "Lowlevel LCD controls"
> -	default m
>  	help
>  	  This framework adds support for low-level control of LCD.
>  	  Some framebuffer devices connect to platform-specific LCD modules
> @@ -143,7 +142,6 @@ endif # LCD_CLASS_DEVICE
>  #
>  config BACKLIGHT_CLASS_DEVICE
>  	tristate "Lowlevel Backlight controls"
> -	default m
>  	help
>  	  This framework adds support for low-level control of the LCD
>  	  backlight. This includes support for brightness and power.
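To make the "select" point above concrete, here is a hypothetical Kconfig fragment (a sketch, not the actual in-tree entries): any defconfig that enables such a driver still gets BACKLIGHT_CLASS_DEVICE built, regardless of the class's default.

```kconfig
# Hypothetical driver option; the real DRM/FB_BACKLIGHT entries differ in detail.
config DRM_FOO_PANEL
	tristate "Foo panel driver"
	select BACKLIGHT_CLASS_DEVICE  # forced on whenever this driver is enabled

# With no 'default m' here, the class is still built whenever something selects it.
config BACKLIGHT_CLASS_DEVICE
	tristate "Lowlevel Backlight controls"
```

Nothing equivalent guards LCD_CLASS_DEVICE in every case, which is why the LCD half of the patch is the riskier one.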
https://lkml.org/lkml/2019/6/20/479
CC-MAIN-2019-30
refinedweb
244
54.63
docopt-go

An implementation of docopt in the Go programming language.

docopt helps you create beautiful command-line interfaces easily:

package main

import (
	"fmt"

	"github.com/docopt/docopt-go"
)

func main() {
	usage := `Naval Fate.

Usage:
  naval_fate ship new <name>...
  naval_fate ship <name> move <x> <y> [--speed=<kn>]
  naval_fate ship shoot <x> <y>
  naval_fate mine (set|remove) <x> <y> [--moored|--drifting]
  naval_fate -h | --help
  naval_fate --version

Options:
  -h --help     Show this screen.
  --version     Show version.
  --speed=<kn>  Speed in knots [default: 10].
  --moored      Moored (anchored) mine.
  --drifting    Drifting mine.`

	arguments, _ := docopt.ParseDoc(usage)
	fmt.Println(arguments)
}

docopt parses command-line arguments based on a help message. Don't write parser code: a good help message already has all the necessary information in it.

Installation

⚠ Use the alias "docopt-go". To use docopt in your Go code:

import "github.com/docopt/docopt-go"

To install docopt in your $GOPATH:

$ go get github.com/docopt/docopt-go

API

Given a conventional command-line help message, docopt processes the arguments. See for a description of the help message format.

This package exposes three different APIs, depending on the level of control required. The first, simplest way to parse your docopt usage is to just call:

docopt.ParseDoc(usage)

This will use os.Args[1:] as the argv slice, and use the default parser options.

If you want to provide your own version string and args, then use:

docopt.ParseArgs(usage, argv, "1.2.3")

If the last parameter (version) is a non-empty string, it will be printed when --version is given in the argv slice.

Finally, we can instantiate our own docopt.Parser which gives us control over how things like help messages are printed and whether to exit after displaying usage messages, etc.

parser := &docopt.Parser{
	HelpHandler:  docopt.PrintHelpOnly,
	OptionsFirst: true,
}
opts, err := parser.ParseArgs(usage, argv, "")

In particular, setting your own custom HelpHandler function makes unit testing your own docs with example command line invocations much more enjoyable.

All three of these return a map of option names to the values parsed from argv, and an error or nil.
You can get the values using the helpers, or just treat it as a regular map:

flag, _ := opts.Bool("--flag")
secs, _ := opts.Int("<seconds>")

Additionally, you can Bind these to a struct, assigning option values to the exported fields of that struct, all at once.

var config struct {
	Command string `docopt:"<cmd>"`
	Tries   int    `docopt:"-n"`
	Force   bool   // Gets the value of --force
}
opts.Bind(&config)

More documentation is available at godoc.org.

Unit Testing

Unit testing your own usage docs is recommended, so you can be sure that for a given command line invocation, the expected options are set. An example of how to do this is in the examples folder.

Tests

All tests from the Python version are implemented and passing at Travis CI. New language-agnostic tests have been added to test_golang.docopt.

To run tests for docopt-go, use go test.
https://go.ctolib.com/docopt-go.html
CC-MAIN-2019-04
refinedweb
442
59.7
On Wed, 29 Oct 2003, peter reilly <peter.reilly@corvil.com> wrote:

> On Wednesday 29 October 2003 10:42, Stefan Bodewig wrote:
>> On Fri, 24 Oct 2003, peter reilly <peter.reilly@corvil.com> wrote:
>>> <ac:if>
>>>   <equals arg1="a" arg2="${prop}"/>
>>>   <then>
>>>     blab ...
>>>   </then>
>>>   <else>
>>>     blab..
>>>   </else>
>>> </ac:if>
>>
>> Assuming Ant's core URI was associated to the default namespace,
>> you'd have to use ac:then and ac:else.
>
> This is not the current ant 1.6 behaviour. All elements belong to
> the core URI, except typedef'ed elements and whatever the user
> wants to pass to DynamicConfigurable#

Well, but isn't what I describe the expected behavior? Expected by XML
namespace aware tools, at least.

Stefan

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org
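To illustrate the namespace-aware reading of that build file (an invented fragment, not from the thread, and the URI is only a placeholder): if the antcontrib URI is bound only to the ac: prefix, then unprefixed then/else elements belong to whatever the default namespace is, so a strictly namespace-aware tool would expect them written as:

```xml
<!-- Hypothetical build fragment; the antlib URI is illustrative. -->
<project xmlns:ac="antlib:net.sf.antcontrib">
  <ac:if>
    <equals arg1="a" arg2="${prop}"/>
    <ac:then>
      <echo message="equal"/>
    </ac:then>
    <ac:else>
      <echo message="not equal"/>
    </ac:else>
  </ac:if>
</project>
```

Ant 1.6's actual behaviour, as Peter notes above, is looser than this: nested elements are resolved against the enclosing task rather than strictly by namespace.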
http://mail-archives.apache.org/mod_mbox/ant-dev/200310.mbox/%3Cm31xswuq6i.fsf@bodewig.bost.de%3E
CC-MAIN-2014-42
refinedweb
138
59.09
I'm using this package, which works quite well for face detection and tracking. However, the OpenCV feature I am MOST interested in is background subtraction, i.e. achieving green-screen-like effects without a green screen, so that someone walking in front of a static background would be masked out from the background. I emailed the developer and he replied saying it is supported, but there are no actual demos to get it working. :( Can any C# gurus help?

I've used OpenCV before, but I'm not familiar with that wrapper. Since it seems to be a commercial product (and not a cheap one!), I recommend you ask the developer for more assistance.

Thanks, already have. :(

I found this example on GitHub; since the C# package is based on OpenCV for Java, it should be mostly a matter of converting the logic in this Java class to C#.

Hello eco_bach, did you find a solution to your question meanwhile? Best, Andre

Answer by zohaibzaidi · Jul 03, 2019 at 06:34 AM
Came across this. Not sure, but the OpenCV package that I have for Unity3D has an example scene by the name of GreenScreenExample.

Answer by wuy420 · 2 days ago
Hey, I am also looking for background subtraction solutions. I tried the BackgroundSubtraction demo in OpenCV but the result is awful. It removes my face after a while if I don't move. I asked their technical support and they suggested I look at the green screen example. But again, it is still awful. Is there any way to improve the green screen example, or any other solution for background subtraction?
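For what it's worth (this is not the Unity wrapper's API, just the underlying idea sketched in plain Python): most simple background subtractors keep a running-average background model and flag pixels that differ from it. The sketch below also shows why a still face eventually disappears: the model slowly absorbs anything that stops moving.

```python
def update_background(bg, frame, alpha=0.05):
    """Running average: the background drifts toward each new frame.

    A larger alpha adapts faster, but also absorbs stationary
    foreground sooner (the 'my face disappears' effect)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=20):
    """Pixels differing from the background by more than thresh are foreground."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

# A 4-pixel "image": static background, one object pixel appears at index 2.
bg = [10.0, 10.0, 10.0, 10.0]
frame = [10.0, 10.0, 200.0, 10.0]
print(foreground_mask(bg, frame))  # [False, False, True, False]
bg = update_background(bg, frame)  # the model slowly learns the new scene
```

OpenCV's MOG2/KNN subtractors are more sophisticated per-pixel mixture models, but they have the same knobs: a learning rate (how fast the background adapts) and a threshold.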
https://answers.unity.com/questions/984052/background-subtraction-using-opencv.html
CC-MAIN-2021-17
refinedweb
302
75.4
Read / write file bit by bit

Tomita Militaru, Ranch Hand
Joined: Jan 16, 2009  Posts: 37
posted Nov 02, 2009 14:52:55

Hello,

I need to do this assignment in which I need to read / write from / to a file bit by bit using a buffer. When the buffer has 8 bits, I write a byte to the file; same for reading. Here is what I did so far. I'm using String for reading/writing and I'm sure this is not a good/efficient way to do it. If there is another way, can someone please suggest it? I don't need the solution directly, just some hints. Thanks.

import java.io.BufferedInputStream;
import java.io.BufferedWriter;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;

/**
 * Operations read / write by bit
 * @author Tomita Militaru
 */
public class BitOperations {

    /** Buffer to keep read stuff */
    static String readBuffer = "";

    /** Buffer to keep write stuff */
    static String writeBuffer = "";

    static int readCounter = 0;
    static int writeCounter = 8;

    private static int nextByte = 0;

    /**
     * My method to write a bit to a file
     * @param value
     */
    public static void writeBits() {
        try {
            FileOutputStream output = new FileOutputStream("output.txt");
            Byte value_Byte = Byte.valueOf(writeBuffer);
            output.write(value_Byte);
            output.close();
        } catch (Exception e) {
            System.err.println("Error: " + e.getMessage());
        }
    }

    public static void writeBit(int bit) {
        if (writeCounter != 0) {
            writeCounter--;
        } else {
            writeBits();
            writeBuffer = "";
        }
        writeBuffer = writeBuffer.concat(String.valueOf(bit));
    }

    /**
     * My method to return a bit from a file
     * @return a bit
     */
    public static void readBits() {
        File file = new File("input.txt");
        FileInputStream fis = null;
        BufferedInputStream bis = null;
        DataInputStream dis = null;
        byte continut = 0;
        int k = 0;
        readCounter = 7;
        try {
            fis = new FileInputStream(file);
            bis = new BufferedInputStream(fis);
            dis = new DataInputStream(bis);
            while (dis.available() != 0 && k <= nextByte) {
                continut = dis.readByte();
                Byte continut_Byte = new Byte("10");
                readBuffer = Integer.toBinaryString(continut_Byte.intValue());
                k++;
            }
            nextByte++;
            fis.close();
            bis.close();
            dis.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static int readBit() {
        int readBufferLenght = readBuffer.length();
        System.out.println("readbuffer: " + readBufferLenght);
        if (readCounter != 0) {
            int pos = readBuffer.length() - readCounter--;
            System.out.println("pos: " + pos);
            return Integer.parseInt(readBuffer.substring(pos, pos + 1));
        } else {
            readBits();
            return readBit();
        }
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        for (int i = 0; i < 64; i++) {
            writeBit(readBit()); // just for testing, basically I simulate a copy/paste from one file to another.
        }
    }
}

The read method works, but I would like to rewrite the class using another method to do it.

Poor is the man whose pleasures depend on the permission of another.
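As a hint toward the rewrite (my sketch, not an answer from the thread): replace the String buffers with an int used as a bit buffer. Shift a bit in on each call and emit a byte whenever eight have accumulated; the mirrored idea with a right shift handles reading.

```java
import java.io.ByteArrayOutputStream;

// Collects bits MSB-first in an int and emits a byte once 8 have arrived.
// The output sink is a ByteArrayOutputStream here to keep the sketch
// self-contained; swap in a FileOutputStream for real files.
class BitWriter {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    private int buffer = 0; // bits collected so far
    private int count = 0;  // how many bits are in the buffer

    void writeBit(int bit) {
        buffer = (buffer << 1) | (bit & 1); // shift left, append the new bit
        if (++count == 8) {
            out.write(buffer); // a full byte: write it out
            buffer = 0;
            count = 0;
        }
    }

    byte[] finish() { // pad the last partial byte with zero bits
        while (count != 0) writeBit(0);
        return out.toByteArray();
    }
}

public class BitWriterDemo {
    public static void main(String[] args) {
        BitWriter w = new BitWriter();
        int[] bits = {0, 1, 0, 0, 0, 0, 0, 1}; // 'A' = 0b01000001
        for (int b : bits) w.writeBit(b);
        byte[] bytes = w.finish();
        System.out.println((char) bytes[0]); // A
    }
}
```

No String parsing, no Integer.toBinaryString: just shifts and masks, which is both faster and closer to how the byte actually looks on disk.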
http://www.coderanch.com/t/469232/java/java/Read-write-file-bit-bit
CC-MAIN-2015-18
refinedweb
504
58.69
11 May 2010 23:09 [Source: ICIS news]

HOUSTON (ICIS news)--Chemical dispersant makers such as Nalco, whose products are used in Gulf oil spill recovery efforts, are unlikely to face market criticism for sales gained from the tragedy, chemical executives said on Tuesday.

“In all fairness to Nalco, analysts have said this won’t add much to earnings,” said Nick Kob, a business development manager for Huntsman.

“It’s not like they’re gouging,” he continued. “They’re selling it for what they sell it for, and just happen to have a huge demand right now. We don’t see them as gouging or pilfering the market.”

Kob spoke at the Informex Specialty Chemical Conference.

Jeff Gates, director of marketing for international fine and specialty chemical group Syrgis, referenced DuPont in the early 20th century as an example of a chemical company that lost a public relations battle in the aftermath of a disaster.

“Any time you play with a disaster like that, there is DuPont,” Gates said. “DuPont used to sell black powder in World War I, and they got a nasty name because they went to both parties and made money, and then had to exit the business.”

But Gates said he saw no activity in the modern marketplace mirroring that example. Moreover, given the complexity of product sales and the number of challengers, strategies built around such tragedies as the Gulf spill would be unsuccessful, he noted.

“Spills are not something you can put into a forecast,” Gates said. “Because of that, you can’t build a business around it.”

While agreeing the broader market impact would be small, a principal scientist with Netherlands-based producer DSM warned that the relatively new nature of deepwater drilling meant that such a situation would likely be repeated.

“Unfortunately, it’s going to happen again,” said DSM’s David Ager. “It’s just a matter of when.

“But I don’t think you should let tragedy get in the way of doing what you’re doing,” he continued.
“How big is that market, really?”

Ana Nielsen, the director of exploration and production (E&P) technology for BP, was present during the discussion. BP operated the Deepwater Horizon offshore rig in the Gulf of Mexico that exploded on 20 April and later sank, spilling an estimated 5,000 bbl/day of oil.

She said that BP was looking for all the help it could find. “I think this has been as open a process as any I can remember,” Nielsen said. “Whoever has suggestions or things that can help us, we’ll listen.”

For more on BP, DSM, DuPont or Huntsman
http://www.icis.com/Articles/2010/05/11/9358513/us-dispersant-makers-unlikely-to-see-spill-sales-backlash.html
CC-MAIN-2014-42
refinedweb
442
62.07
XML is becoming the standard method for storing and exchanging structured data. Here's the essential information you need.

A numeric character reference has the form &#nnnn;, where nnnn is the Unicode character number in decimal. If you prefer to use hexadecimal, then you can use the format &#xhhhh;. Using numeric character references, you can put a literal < character in your document like this:

<expr>3 &#60; 4</expr>

The common markup characters (<, >, &, ', and ") need to be escaped so frequently in XML documents that XML defines mnemonic references, called entities, for them, as summarized in Table One.

Table One: Predefined Character Entities

Character   Entity   Numeric
<           &lt;     &#60;
>           &gt;     &#62;
&           &amp;    &#38;
'           &apos;   &#39;
"           &quot;   &#34;

For the purpose of our sample address book, US ASCII character encoding is sufficient. If you were using an encoding that contained the accented character directly (ISO Latin 1, for example) then you could just put that character in the document.

The other thing to notice about Listing Five is that the <street> element occurs only once. In the comma-delimited format, it was important to put the blank second field in the file. In XML this is not necessary. There's no need to count commas any more and there's no reason to insert empty fields.

There's little new to learn in the third or fourth address, so it is time to pull all of the addresses together into an XML document. To do that, however, there are a few more concepts to investigate.

Suppose you want to insert notes to yourself in an XML file and don't want them processed. Or, maybe you need to provide hints directly to your XML processor. You can do so with comments and processing instructions. XML comments, like comments in most programming languages, are meant for annotations that aren't expected to have an influence on subsequent processing. Many XML processors discard comments when parsing. A comment looks like this:

<!-- this is a comment -->

Unfortunately, XML comments cannot be nested. The sequence -- is forbidden inside a comment.
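The predefined entities in Table One are exactly what standard escaping utilities produce. For example (an aside, not from the original article), Python's standard library escapes and round-trips them:

```python
from xml.sax.saxutils import escape, unescape

text = "Barney & Betty say 3 < 4"
print(escape(text))   # Barney &amp; Betty say 3 &lt; 4
assert unescape(escape(text)) == text  # escaping round-trips cleanly
```

Letting a library do the escaping avoids the classic mistake of escaping < but forgetting &.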
Processing Instructions

Sometimes you want to pass additional information to a specific processing application. For example, you may want to tell the application that is processing your data what filename to use for some piece of output, or where it's acceptable to insert a line break in a long title. Processing instructions are the way to pass this information to a processing application. Processing instructions have the form:

<?target any-data-you-want?>

All processing instructions must begin with a target. The target is simply a name. Everything after the name, up to the closing ?>, is part of the processing instruction. Although there is no requirement to do so, it has become traditional to use pseudo-attribute syntax in processing instructions, like this:

<?addressbook preferred-phone="work"?>

Target names beginning with xml (in any combination of upper and lower case) are reserved. Note that target names cannot be namespace qualified, so if you're making one up, try to make it globally unique in some other way.

Building XML Documents

Now it's time to pull the address records together into a whole document. There are only two things left to do — add a document element around the records and add an XML declaration to the top. For historical reasons related to XML 1.0 validity checking, an XML document must have a single, outer-most element called the document or root element. In this case, it makes sense to use <addressbook>. For now, just think of the XML declaration as identifying the file as XML (and specifically, the version of XML that we're using — there's only one, 1.0).

Listing Six (pg. 46) shows our entire address book. Note the comment, the processing instruction, and the escaped ampersand in the third address.
Listing Six: The Address Book in XML

<?xml version='1.0'?>
<!-- Converted from comma separated value form on 8 May 2001 -->
<?addressbook preferred-phone='work'?>
<addressbook>
<address>
<name>Jane Smith</name>
<street>15 Pine St</street>
<street>Suite 304</street>
<city>Springfield</city>
<state>OR</state>
<zip>92744</zip>
<phone type="work">555-1234</phone>
</address>
<address>
<name>John Valéry</name>
<street>123 Any Street</street>
<city>Anytown</city>
<state>MA</state>
<zip>01034</zip>
</address>
<address>
<name>Barney &amp; Betty Rubble</name>
<street>3 Rock Terrace, Apt 2b</street>
<city>Bedrock</city>
<state>MT</state>
<zip>88432-2433</zip>
</address>
<address>
<name>Tucker Case</name>
<street>533b Ridge Rd</street>
<city>Pine Grove</city>
<state>CA</state>
<zip>92315</zip>
<phone type="home">555-5309</phone>
</address>
</addressbook>

Well Formed Documents

In order for a document to truly be considered XML, it must be what is known as a "Well Formed Document." Being well formed means that a document satisfies certain minimum requirements. In brief, these include:

- The document has exactly one top-level (document) element.
- Every start-tag has a matching end-tag, and elements nest properly.
- Attribute values are always quoted.
- The characters < and & appear only as markup; as literal content they must be escaped.

This is not an exhaustive list, but it is sufficient for an introductory understanding. The bottom line, however, is that documents that aren't well formed simply aren't XML.

The Document Prolog

Most XML documents begin with a prologue. Occasionally, the prologue can be quite extensive, but it usually contains just two things — the XML Declaration and the Document Type Declaration (DTD).

The XML Declaration, which must be the first thing in an XML file if it is present at all, serves three purposes:

1. It identifies the XML version to which the document conforms. At this time, the version must be 1.0. The version number is required.

2. It may identify the character encoding used by the document. Although XML uses Unicode, there are many different ways to represent the characters of a document (ISO Latin 1, KOI8-R, Big5, etc.).
If the declaration is not present, XML systems attempt to "sniff" the encoding (from MIME headers or the file system, for example). XML Processors are only required to support the UTF-8 and UTF-16 encodings, but in practice, most support many more.

3. It may identify itself as a standalone document. Standalone documents assert that there are no external declarations that affect the information passed to the processing application. In practice, it is not often used.

The format of an XML Declaration is:

<?xml version='1.0' encoding="US-ASCII" standalone='no'?>

where the version is required and the other parts are optional. (The above is the declaration used on the source document for this article.)

The other common prologue element is the Document Type Declaration:

<!DOCTYPE element-name PUBLIC "-//Example//DTD Example Public Identifier//EN//XML" "example.dtd">

This declaration serves two purposes: it identifies the root element of the document (<element-name>) and it associates an external set of declarations, the DTD (Document Type Definition), with the document. The DTD is used to assist the parser in "validating" the document.

Validity

One of the important features that an XML parser can provide is the ability to validate a document. A DTD contains a set of declarations that identify additional rules and constraints that a document must satisfy in order to be considered a valid instance of the document type associated with that DTD. For example, a DTD might state that every <address> record is required to have <name>, <street>, and <phone> elements. It may go on to specify the types of data that those elements may contain. A document is valid if and only if it satisfies the constraints of the DTD with which it is associated. Examples include:

- Every element's content matches the content model declared for it (the right children, in the right order).
- Only declared attributes appear on an element, and all required attributes are present.
- Attribute values declared as type ID are unique within the document.

As with the well-formed constraints, this is not an exhaustive list. There are a number of additional validity constraints that are beyond the scope of this article.

Why Bother with Validity?
There are a great many applications that can be satisfied with nothing more rigorous than well-formedness. For these applications, it is sufficient to ignore data that wasn't expected or to process it in some default way. But there are also a lot of applications that need more control. If you wouldn't expect to find an expense report in the middle of your purchase order, or an HTML <div> in the middle of your MathML formula, you'd better check to make sure that the author didn't put one there. XML validity is an easy way to enforce the expected constraints while the document is being parsed.

The XML Recommendation describes just one kind of validity, based on XML DTDs. And while DTDs have a lot of power, they aren't always the best tool for the job. As a result, there are now several other languages that can be used (instead of DTDs) to specify the validity constraints for a given XML document. These languages are often referred to as schema languages. Some of the more popular schema languages are W3C XML Schema, RELAX, TREX, and Schematron. You'll find references for these schema languages in the Resources sidebar (pg. 48).

Namespaces

Many people are excited by the prospect of mixing "XML vocabularies" together. A vocabulary, in this case, means a specific set of XML elements and attributes; HTML is a vocabulary, as are DocBook, MathML, SVG (Scalable Vector Graphics), and every other defined set of tag names that you have seen. Historically, vocabularies were defined only by DTDs, but today you see vocabularies defined by other schema languages as well.

Mixing vocabularies together allows you to apply things you already know to solve new problems. For example, a purchase order document might allow HTML elements in the description of an item, or an SVG document might allow a MathML equation as part of its content. This raises an important question: how can elements from different vocabularies be distinguished?
For example, both HTML and SVG define a <title> element, but they are quite different. If you are mixing SVG graphics into a DocBook document, how do you distinguish between them?

A namespace gives elements and attributes globally unique names. It does this by associating names with Uniform Resource Identifiers (URIs, see for details). URIs are long and tedious to type, and they include characters that aren't legal in XML names, so the Namespaces Recommendation defines a shortcut. Using an attribute-like syntax, the declaration xmlns associates a namespace prefix with a URI:

<e:doc xmlns:e="http://example.com/ns">
<e:para>A paragraph.</e:para>
</e:doc>

The prefix e: is bound to the specified URI on the <doc> element and all of its children. Logically, what this says is, "this occurrence of the <doc> element is the one defined by the http://example.com/ns namespace." Similarly, the <para> element is the one defined by that namespace name as well. It is important to note that the prefix is irrelevant. This document is exactly the same as the preceding one:

<X:doc xmlns:X="http://example.com/ns">
<X:para>A paragraph.</X:para>
</X:doc>

If you're going to be using predominantly one namespace, you can make that the default by declaring it with xmlns="…". This next document is logically identical to the preceding two:

<doc xmlns="http://example.com/ns">
<para>A paragraph.</para>
</doc>

Note, however, that the following document is completely different from the ones above:

<doc>
<para>A paragraph.</para>
</doc>

Because it lacks a namespace declaration, its elements are different from those of all other namespaces. So if you want to mix two vocabularies, you can do it with multiple namespaces, as shown in Listing Seven (pg. 47).
Listing Seven: Mixing HTML and SVG

<html xmlns="http://www.w3.org/1999/xhtml" xmlns:svg="http://www.w3.org/2000/svg">
<head>
<title>The title of the HTML</title>
</head>
<body>
<p>Here's some SVG:</p>
<svg:svg>
<svg:title>The title of the SVG</svg:title>
<!-- ... -->
</svg:svg>
</body>
</html>

The default namespace is HTML and the svg: namespace is SVG, so the two title elements are entirely distinct. You can change namespaces on the fly, so the example in Listing Seven can even be written like Listing Eight.

Listing Eight: Redeclaring the Default Namespace

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>The title of the HTML</title>
</head>
<body>
<p>Here's some SVG:</p>
<svg xmlns="http://www.w3.org/2000/svg">
<title>The title of the SVG</title>
<!-- ... -->
</svg>
</body>
</html>

In Listing Eight, the two title elements are still distinct. Because the svg element redefines the default namespace, the namespace name of the SVG title element is http://www.w3.org/2000/svg, even though it has no explicit prefix. When the svg element ends, the previous declaration goes back into effect.

XML is the best way to store structured information. It is an open, accessible technology that is being actively developed and widely deployed. If you're interested in writing your own XML-aware applications, there are lots of XML libraries out there; there's probably one available for your favorite language. We've just barely scratched the surface of XML. Keep an eye out in future issues for an article about XML programming using Java, Perl, and other languages. In the meantime, whether you are a hardcore open source developer or just beginning to explore the Linux waters, this article has hopefully given you a basic understanding of what XML is made of.

Resources

- A great community site with news, tutorials, and discussion of virtually every XML related topic
- The official XML specification, links to XML tools, and information about related standards
- A list of software, written in various languages, which is useful in creating, parsing, and manipulating XML
- The authoritative site for Unicode and information about representing various characters in XML
- A magazine devoted to all things XML
- Information about XML Schema and related technologies

Norman Walsh, XML Standards Engineer in Sun's Technology Development Group, is chair of the OASIS DocBook Technical Committee. He can be reached at norman.walsh@sun.com.
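As a footnote to the namespaces discussion above (my illustration, not part of the article): namespace-aware parsers report each element's namespace name directly. Python's ElementTree, for instance, expands every tag into {URI}localname form, which makes the two title elements visibly distinct:

```python
import xml.etree.ElementTree as ET

doc = """<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>The title of the HTML</title></head>
<body>
<svg xmlns="http://www.w3.org/2000/svg">
<title>The title of the SVG</title>
</svg>
</body>
</html>"""

root = ET.fromstring(doc)
# Collect every element whose local name is 'title', in document order.
titles = [el.tag for el in root.iter() if el.tag.endswith("}title")]
print(titles)
# ['{http://www.w3.org/1999/xhtml}title', '{http://www.w3.org/2000/svg}title']
```

Even though the SVG title carries no prefix in the source, the parser resolves it against the in-scope default namespace declaration, exactly as the article describes.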
http://www.linux-mag.com/id/823
crawl-002
refinedweb
2,282
54.32
comp.text.xml Google Group The Extensible Markup Language (XML). 2009-07-05T18:15:25Z Google Groups shaz shoaib...@gmail.com 2009-07-05T18:15:25. C. M. Sperberg-McQueen cms...@acm.org 2009-07-04T21:37:48Z Re: Extend and element from another namespace Is it possible? Well, it may be possible. A good deal depends <br> on how B:someElement and its type are defined. Several cases are <br> worth distinguishing. <br> First, if the type of B:someElement has a wildcard allowing <br> elements in namespace A to appear, then all you need to do is <br> write a schema document for namespace A, which declares C. M. Sperberg-McQueen cms...@acm.org 2009-07-03T16:27:28Z Re: Help deciphering use of square brackets within translate function The second argument to translate() is interpreted as a set of <br> characters, not as an expression. The first argument is scanned <br> character by character, and each character is tested to see if it <br> appears anywhere in the second argument. If it does not appear in the <br> second argument, it's copied to the output string without change. If IRQ i...@nospam.wp.pl 2009-07-03T13:20:56Z Re: XSL_FO Large columns amounts Peter Flynn pisze: <br> Hi <br> Thank You Peter for reply. <br> I consider using LaTex format(script), but I want know: Can i make <br> document with large table with descripted behavior in xsl_fo format too? <br> Best Regards, <br> Ireneusz Michalak Joe Kesselman keshlam.cat.nos...@verizon.net 2009-07-03T13:03:14Z Re: How to *not* specify the order of occurrence? Hmmm. You're certainly an authority on the topic... but I'm confused by <br> why I didn't notice this before. I need to re-read that, clearly. Thanks. <br> > (If the order has no significance, <br> Agreed. But sometimes convincing the customer that laziness is not <br> sufficient justification for allowing reordering can be difficult. And C. M. Sperberg-McQueen cms...@acm.org 2009-07-03T02:29:37Z Re: How to *not* specify the order of occurrence? 
It's a bit late to be following up on this, probably too late <br> to help the OP. <br> But if the OP's description of the goal is correct (allow any <br> order, and mark elements required or optional) and the example <br> is characteristic (none of the elements in question appears more <br> than once), the xsd:all can do what appears to be required. C. M. Sperberg-McQueen cms...@acm.org 2009-07-03T02:12:19Z Re: How to write an XML schema that specifies an optional namespace in the XML docs? One way to reduce the risk would be to exploit what is sometimes <br> called 'chameleon include', to include the same schema document <br> twice, once to generate unqualified components and once to <br> generate components in the 'optional' target namespace. <br> To illustrate: <br> In document implicit_ns_1.xsd, declare no target namespace Martin Honnen mahotr...@yahoo.de 2009-07-02T11:14:43Z Re: client-side xslt with chunking? Note that Firefox 2.0 is quite old, it does not even support <br> exsl:node-set, you need at least 3.0 for that. <br> I am not familiar with docbook stylesheet and what exactly chunking is <br> in that environment but that an example for the solution mentioned in <br> the mailing list post is here: <br> <a target="_blank" rel=nofollow[link]</a> Tim Arnold tim.arn...@sas.com 2009-07-01T13:37:10Z client-side xslt with chunking? Hi, <br> I would have thought it was impossible to use client-side (browser) xslt <br> rendering with chunking, but this thread <br> <a target="_blank" rel=nofollow[link]</a> <br> makes me wonder. 
<br> I've got a docbook document that specifies <br> <?xml-stylesheet[link]</a> )Discount Nike Shox Classic <br> Shoes Suppliers <br> (paypal payment)( <a target="_blank" rel=nofollow[link]</a> )Discount Nike Shox Dendara <br> Trainer <br> (paypal payment)( <a target="_blank" rel=nofollow[link]</a> )Discount Nike Air Jordan 1 <br> Seller Paypal <br> Payment <br> (paypal payment)( <a target="_blank" rel=nofollow[link]</a> )Discount Nike Air Jordan 2 Peter Flynn peter.n...@m.silmaril.ie 2009-06-27T12:51:15Z Re: XSL_FO Large columns amounts IRQ wrote: <br> You may find this easier using LaTeX (use XSLT to create the LaTeX <br> code). You may get some help from GUST (<a target="_blank" rel=nofollow[link]</a>) <br> ///Peter IRQ i...@nospam.wp.pl 2009-06-25T06:37:21Z XSL_FO Large columns amounts Hi, <br> I want do xsl_fo documents exactly PDF document. <br> I have much columns and i need divide it between sheets of paper. But <br> best solution is dynamically move columns which not contain on sheet and <br> move to next sheet. <br> row on one paper sheet must be correlated with row on other paper sheet. <br> e.g. When current row on one columns have five rows of letter this row The Magpie use...@pigsinspace.co.uk 2009-06-24T14:26:12Z Re: CreativeCommons RDF Permission vs. Prohbition? From what you say, Andy, I'd say you got to the same point I did with <br> my own published XML schemas and submissions for Dublin Core and I <br> suspect your best answer will be the same as I opted for - hire a <br> lawyer to make sure it does what you need. Andy Dingley ding...@codesmiths.com 2009-06-23T15:36:12Z Re: CreativeCommons RDF Permission vs. Prohbition? Tried that <br> <a target="_blank" rel=nofollow[link]</a> <br> Still can't find the RDF / ccREL expression of GPL, but it would look <br> strangely at home on that page. Andy Dingley ding...@codesmiths.com 2009-06-23T15:20:24Z Re: CreativeCommons RDF Permission vs. Prohbition? 
It's about both, but mostly about the piece in the middle, the _schema_ that the CC project have put together and that they use to describe their own licences. My work is about software, so the CC licences aren't a good choice for it. Besides which, I have to deal with the 20 or 30 different licences that have already been applied to
http://groups.google.com/group/comp.text.xml/feed/msgs.xml
A number of test tools are supplied with the MSMQ-MQSeries bridge. These tools help validate the setup of the bridge and are run from the command line. You'll use these tools to first validate the MSMQ to WebSphere MQ bridging options, and then to validate the reverse. To test while still within a command prompt on the bridge server, run the following command:

MQSRRECV STQM LOCAL.STOCKPURCHASES

This tool actively listens to incoming queue messages on the WebSphere MQ server. The parameters specify the name of the queue manager and the local queue name. This tool and the others you'll use for testing in this section can be found in the C:\Program Files\Host Integration Server\System directory, in case this directory isn't on the System PATH. If successful, the following message should be displayed:

Use <CTL-C> to stop !

If you don't receive this message, check that the channel definitions and scripts have been configured as outlined in this chapter. Also, ensure that the WebSphere MQ server is up and running and that the queue manager has been started. If this is successful, open the MSMQ-MQSeries Bridge Manager. If not already running, start the service by right-clicking the Bridge Service and selecting Start. Do the same for the CN (WMQ_CN) site. Ensure that the two message pipes (MQS->MSMQ and MSMQ->MQS) are started. This is indicated with a green "play button" icon next to the service, as shown in Figure 10.21. Now that the test tool is listening for incoming messages on the queue, open another command prompt window. From there, type the following command:

MSMQSEND STQM\LOCAL.STOCKPURCHASES

This will send 10 test messages with the same name to the MSMQ queue. The following should be displayed in this window. (Note that the time information will be different.)
Test Message 0 20:18:02
Test Message 1 20:18:02
Test Message 2 20:18:02
Test Message 3 20:18:02
Test Message 4 20:18:02
Test Message 5 20:18:02
Test Message 6 20:18:02
Test Message 7 20:18:02
Test Message 8 20:18:02
Test Message 9 20:18:02

Now switch to the other command prompt window running the MQSRRECV command. If the bridge was able to successfully route these messages, this same text will appear in this window too. To reverse the test (sending from WebSphere MQ to MSMQ), perform the following steps. Stop the MQSRRECV command by pressing CTRL+C. In this window, type the following command, replacing MQBRIDGE1 with the name of your bridge server:

MSMQRECV MQBRIDGE1\LOCAL.STOCKSALES

This will enable your machine to listen to the incoming MSMQ queue. Now switch to the other command prompt window (used to send the messages earlier) and type:

MQSRSEND STQM STQM LOCAL.STOCKSALES

If all is successful, the test should now display messages that have been transferred the other way, such as these:

Test Message 0 20:23:15
Test Message 1 20:23:15
Test Message 2 20:23:15
Test Message 3 20:23:15
Test Message 4 20:23:15
Test Message 5 20:23:15
Test Message 6 20:23:15
Test Message 7 20:23:15
Test Message 8 20:23:15
Test Message 9 20:23:15

If the tests are unsuccessful, revisit the setup instructions to validate that you performed them correctly. The MSMQ-MQSeries bridge contains a tracing tool that can be used to debug messages that aren't correctly sent between the two queuing products and writes numerous events to the Event Log. Tracing can be enabled by using the Trace Initiator and Trace Viewer, two tools in the Applications And Tools folder of the Host Integration Server 2000 Programs Group. If these tests are successful, congratulations are in order! The bridge is now fully configured to route messages. Before proceeding, stop the MSMQRECV process by pressing CTRL+C in the appropriate command prompt window.
To further show how the MSMQ-MQSeries bridge works, you can take the code samples shown in Chapters 8 and 9 and apply them to this setup. This will show how a client using the Microsoft .NET Messaging namespace ( System.Messaging ) and a client using IBM's libraries for Java can be used to exchange messages. The sample code to show this can be found in the C:\Interoperability\Samples\Resource\MSMQBridge directory. This directory contains two subdirectories, dotNET and Java. In the .NET client, notice how the queues are configured by using the queues available in Active Directory. (Also, note that you'll have to modify the code to replace all instances of example server names with the names of the servers in your actual setup.) MessageQueue purchasesMQ = new MessageQueue(@"STQM\LOCAL.STOCKPURCHASES"); MessageQueue salesMQ = new MessageQueue(@"MQBRIDGE1\LOCAL.STOCKSALES"); Because this code references queues in Active Directory, the client machine that runs this sample should also reside within Active Directory. To facilitate this, these tests can be run on the bridge. For the Java client, a number of properties are configured to reference the queue on the WMQ1 server, as shown here. (Again, you'll need to change this name to the corresponding server name in your setup.) String hostName = "WMQ1"; int port = 1414; String channel = "SYSTEM.DEF.SVRCONN"; String qmName = "STQM"; String purchasesQName = "LOCAL.STOCKPURCHASES"; String salesQName = "LOCAL.STOCKSALES"; Hashtable props = new Hashtable(); props.put(MQC.HOST_NAME_PROPERTY,hostName); props.put(MQC.PORT_PROPERTY,new Integer(port)); props.put(MQC.CHANNEL_PROPERTY,channel); If you're accessing the queue remotely (and not running the sample code on the WebSphere MQ box), ensure that the WebSphere MQ client is installed on the client machine. Build and run the .NET client sample code. Enter nant at a command prompt within the dotNET subdirectory to build the sample, and enter client to run it. 
Upon running, the .NET client will display the following: Test message has been placed on the purchases queue. This should be bridged to Websphere MQ The .NET client places a message on the local purchases queue, which uses MSMQ, and will then pause. The MSMQ-MQSeries bridge delivers this message to the corresponding queue on the WebSphere MQ server. Now run the Java client in a separate command prompt window. Enter ant run at a command prompt within the Java subdirectory to build and run the Java client. If successful, you should notice a number of operations. First, the Java client places a new message on the local sales queue, which uses WebSphere MQ: Test message has been placed on the sales queue. This should be bridged to MSMQ The MSMQ-MQSeries bridge will take this message and deliver it to the corresponding MSMQ queue. Next, the Java client picks up the message that has been delivered by the bridge from the .NET client: Message delivered by MSMQ-MQSeries Bridge: <?xml version="1.0"?> <string> This is a test purchase, sent using a .NET System.Messaging client! </string> Notice how the message is encapsulated in an XML document. We'll get to the reasoning behind this shortly. Before we do, switch back to the command prompt window that's running the .NET client. You should observe that the message that was sent by the Java client has been picked up (by the bridge moving it from the WebSphere MQ queue to MSMQ): Message delivered by MSMQ-MQSeries Bridge: This is a test sale, sent using a Websphere MQ client! As shown, the test sale sent by the Java client was successfully received. So, why the XML? Recall that in Chapter 8 you saw how the System.Messaging namespace was used by a .NET client to send a message to an instance of MSMQ. You might remember how messages sent to an MSMQ queue can be formatted either with an XML formatter or a binary formatter, similar to the formatting options available in Microsoft .NET Remoting.
With this example, we also must choose one of these options to send the message across the MSMQ-MQSeries bridge. Because there's no binary formatter on the Java 2 platform that's compatible (see Chapter 3, "Exchanging Data Between .NET and Java," for more details), the XML formatter is used. By specifying this formatter, the text string message that the .NET client sends is serialized into an XML document, which is the result displayed when the Java client reads the message. For returning a message in an equivalent XML format, the Java client uses the following line: msg.writeString("<?xml version=\"1.0\"?><string>This is a test sale," +"sent using a Websphere MQ client!</string>"); As you can see, the string message is simply contained within an XML document. Although this is fine for our test string message, in a production environment you'll want to avoid constructing your own XML documents in strings. One way for the Java client to create messages that the .NET client can understand is to use the XML serialization techniques, as discussed in Chapter 3. Because the .NET client is using the XmlSerializer classes to construct messages, the XML classes within the GLUE toolkit allow the messages to be deserialized to objects that the Java platform can understand. When sending messages back to the .NET client, the same libraries can be used to construct an XML document in the correct format.
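As an aside (this is my own sketch, not the book's sample code): the hand-built XML string above could instead be produced with the JDK's built-in DOM and transformer APIs, which take care of escaping special characters automatically.

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class BridgeMessage {
    // Wrap a plain string payload in the <string> element that the .NET
    // XmlMessageFormatter produces for a serialized System.String.
    static String wrap(String payload) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("string");
            root.setTextContent(payload); // special characters are escaped for us
            doc.appendChild(root);
            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(wrap("This is a test sale, sent using a Websphere MQ client!"));
    }
}
```

Passing the payload through setTextContent means an ampersand or angle bracket in the message text cannot break the envelope, which is exactly the failure mode string concatenation invites.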
https://flylib.com/books/en/1.62.1.73/1/
The Spring Data project was coined at Spring One 2010 and originated from a hacking session of Rod Johnson (SpringSource) and Emil Eifrem (Neo Technologies) early that year. They were trying to integrate the Neo4j graph database with the Spring Framework and evaluated different approaches. The session created the foundation for what would eventually become the very first version of the Neo4j module of Spring Data, a new SpringSource project aimed at supporting the growing interest in NoSQL data stores, a trend that continues to this day. Spring has provided sophisticated support for traditional data access technologies from day one. It significantly simplified the implementation of data access layers, regardless of whether JDBC, Hibernate, TopLink, JDO, or iBatis was used as persistence technology. This support mainly consisted of simplified infrastructure setup and resource management as well as exception translation into Spring’s DataAccessExceptions. This support has matured over the years and the latest Spring versions contained decent upgrades to this layer of support. The traditional data access support in Spring has targeted relational databases only, as they were the predominant tool of choice when it came to data persistence. As NoSQL stores enter the stage to provide reasonable alternatives in the toolbox, there’s room to fill in terms of developer support. Beyond that, there are yet more opportunities for improvement even for the traditional relational stores. These two observations are the main drivers for the Spring Data project, which consists of dedicated modules for NoSQL stores as well as JPA and JDBC modules with additional support for relational databases. Although the term NoSQL is used to refer to a set of quite young data stores, all of the stores have very different characteristics and use cases. Ironically, it’s the nonfeature (the lack of support for running queries using SQL) that actually named this group of databases. 
As these stores have quite different traits, their Java drivers have completely different APIs to leverage the stores’ special traits and features. Trying to abstract away their differences would actually remove the benefits each NoSQL data store offers. A graph database should be chosen to store highly interconnected data. A document database should be used for tree and aggregate-like data structures. A key/value store should be chosen if you need cache-like functionality and access patterns. With the JPA, the Java EE (Enterprise Edition) space offers a persistence API that could have been a candidate to front implementations of NoSQL databases. Unfortunately, the first two sentences of the specification already indicate that this is probably not working out: This document is the specification of the Java API for the management of persistence and object/relational mapping with Java EE and Java SE. The technical objective of this work is to provide an object/relational mapping facility for the Java application developer using a Java domain model to manage a relational database. This theme is clearly reflected in the specification later on. It defines concepts and APIs that are deeply connected to the world of relational persistence. An @Table annotation would not make a lot of sense for NoSQL databases, nor would @Column or @JoinColumn. How should one implement the transaction API for stores like MongoDB, which essentially do not provide transactional semantics spread across multidocument manipulations? So implementing a JPA layer on top of a NoSQL store would result in a profile of the API at best. On the other hand, all the special features NoSQL stores provide (geospatial functionality, map-reduce operations, graph traversals) would have to be implemented in a proprietary fashion anyway, as JPA simply does not provide abstractions for them. 
So we would essentially end up in a worst-of-both-worlds scenario—the parts that can be implemented behind JPA plus additional proprietary features to reenable store-specific features. This context rules out JPA as a potential abstraction API for these stores. Still, we would like to see the programmer productivity and programming model consistency known from various Spring ecosystem projects to simplify working with NoSQL stores. This led the Spring Data team to declare the following mission statement: Spring Data provides a familiar and consistent Spring-based programming model for NoSQL and relational stores while retaining store-specific features and capabilities. So we decided to take a slightly different approach. Instead of trying to abstract all stores behind a single API, the Spring Data project provides a consistent programming model across the different store implementations using patterns and abstractions already known from within the Spring Framework. This allows for a consistent experience when you’re working with different stores. A core theme of the Spring Data project available for all of the stores is support for configuring resources to access the stores. This support is mainly implemented as XML namespace and support classes for Spring JavaConfig and allows us to set up access to the stores with very little configuration code. Most of the NoSQL Java APIs do not provide support to map domain objects onto the stores’ data abstractions (documents in MongoDB; nodes and relationships for Neo4j). So, when working with the native Java drivers, you would usually have to write a significant amount of code to map data onto the domain objects of your application when reading, and vice versa on writing. Thus, a very core part of the Spring Data modules is a mapping and conversion API that allows obtaining metadata about domain classes to be persistent and enables the actual conversion of arbitrary domain objects into store-specific data types.
On top of that, we’ll find opinionated APIs in the form of template pattern implementations already well known from Spring’s JdbcTemplate, JmsTemplate, etc. Thus, there is a RedisTemplate, a MongoTemplate, and so on. As you probably already know, these templates offer helper methods that allow us to execute commonly needed operations in a single line of code. These features already provide us with a toolbox to implement a data access layer like we’re used to with traditional databases. The upcoming chapters will guide you through this functionality. To ease that process even more, Spring Data provides a repository abstraction on top of the template implementation that will reduce the effort to implement data access objects to a plain interface definition for the most common scenarios like performing standard CRUD operations as well as executing queries in case the store supports that. This abstraction is actually the topmost layer and blends the APIs of the different stores as much as reasonably possible. Thus, the store-specific implementations of it share quite a lot of commonalities. This is why you’ll find a dedicated chapter (Chapter 2) introducing you to the basic programming model. Now let’s take a look at our sample code and the domain model that we will use to demonstrate the features of the particular store modules. To illustrate how to work with the various Spring Data modules, we will be using a sample domain from the ecommerce sector (see Figure 1-1). As NoSQL data stores usually have a dedicated sweet spot of functionality and applicability, the individual chapters might tweak the actual implementation of the domain or even only partially implement it. This is not to suggest that you have to model the domain in a certain way, but rather to emphasize which store might actually work better for a given application scenario. At the core of our model, we have a customer who has basic data like a first name, a last name, an email address, and a set of addresses in turn containing street, city, and country.
We also have products that consist of a name, a description, a price, and arbitrary attributes. These abstractions form the basis of a rudimentary CRM (customer relationship management) and inventory system. On top of that, we have orders a customer can place. An order contains the customer who placed it, shipping and billing addresses, the date the order was placed, an order status, and a set of line items. These line items in turn reference a particular product, the number of products to be ordered, and the price of the product. The sample code for this book can be found on GitHub. It is a Maven project containing a module per chapter. It requires either a Maven 3 installation on your machine or an IDE capable of importing Maven projects such as the Spring Tool Suite (STS). Getting the code is as simple as cloning the repository:

$ cd ~/dev
$ git clone
Cloning into 'spring-data-book'...
remote: Counting objects: 253, done.
remote: Compressing objects: 100% (137/137), done.
Receiving objects: 100% (253/253), 139.99 KiB | 199 KiB/s, done.
remote: Total 253 (delta 91), reused 219 (delta 57)
Resolving deltas: 100% (91/91), done.
$ cd spring-data-book

You can now build the code by executing Maven from the command line as follows:

$ mvn clean package

This will cause Maven to resolve dependencies, compile and test code, execute tests, and package the modules eventually. STS ships with the m2eclipse plug-in to easily work with Maven projects right inside your IDE. So, if you have it already downloaded and installed (have a look at Chapter 3 for details), you can choose the Import option of the File menu. Select the Existing Maven Projects option from the dialog box, shown in Figure 1-2. In the next window, select the folder in which you’ve just checked out the project using the Browse button. After you’ve done so, the pane right below should fill with the individual Maven modules listed and checked (Figure 1-3).
Proceed by clicking on Finish, and STS will import the selected Maven modules into your workspace. It will also resolve the necessary dependencies and source folder according to the pom.xml file in the module’s root directory. You should eventually end up with a Package or Project Explorer looking something like Figure 1-4. The projects should compile fine and contain no red error markers. The projects using Querydsl (see Chapter 5 for details) might still carry a red error marker. This is due to the m2eclipse plug-in needing additional information about when to execute the Querydsl-related Maven plug-ins in the IDE build life cycle. The integration for that can be installed from the m2e-querydsl extension update site; you’ll find the most recent version of it at the project home page. Copy the link to the latest version listed there (0.0.3, at the time of this writing) and add it to the list of available update sites, as shown in Figure 1-5. Installing the feature exposed through that update site, restarting Eclipse, and potentially updating the Maven project configuration (right-click on the project→Maven→Update Project) should let you end up with all the projects without Eclipse error markers and building just fine. IDEA is able to open Maven project files directly without any further setup needed. Select the Open Project menu entry to show the dialog box (see Figure 1-6). The IDE opens the project and fetches needed dependencies. In the next step (shown in Figure 1-7), it detects used frameworks (like the Spring Framework, JPA, WebApp); use the Configure link in the pop up or the Event Log to configure them. The project is then ready to be used. You will see the Project view and the Maven Projects view, as shown in Figure 1-8. Compile the project as usual. Next you must add JPA support in the Spring Data JPA module to enable finder method completion and error checking of repositories. Just right-click on the module and choose Add Framework. 
In the resulting dialog box, check JavaEE Persistence support and select Hibernate as the persistence provider (Figure 1-9). This will create a src/main/java/resources/META-INF/persistence.xml file with just a persistence-unit setup.
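To make the sample domain concrete, here is a plain-Java sketch of the classes described earlier in the chapter. It is illustrative only; the book's actual sample classes and their store-specific mapping annotations may differ.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the e-commerce domain: a customer with addresses,
// products with a price, and orders containing line items.
class Address {
    String street, city, country;
    Address(String street, String city, String country) {
        this.street = street; this.city = city; this.country = country;
    }
}

class Customer {
    String firstname, lastname, email;
    List<Address> addresses = new ArrayList<>();
}

class Product {
    String name, description;
    BigDecimal price;
}

class LineItem {
    Product product;
    int amount;
    BigDecimal price;
    // Line total = unit price times quantity.
    BigDecimal total() { return price.multiply(BigDecimal.valueOf(amount)); }
}

class Order {
    Customer customer;
    Address billingAddress, shippingAddress;
    List<LineItem> lineItems = new ArrayList<>();
    // Order total = sum of all line-item totals.
    BigDecimal total() {
        return lineItems.stream().map(LineItem::total)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}

class DomainDemo {
    public static void main(String[] args) {
        Customer c = new Customer();
        c.firstname = "Dave"; c.lastname = "Matthews";
        c.addresses.add(new Address("Broadway", "New York", "US"));

        LineItem item = new LineItem();
        item.amount = 2;
        item.price = new BigDecimal("9.99");

        Order order = new Order();
        order.customer = c;
        order.lineItems.add(item);
        System.out.println("Order total: " + order.total());
    }
}
```

Keeping the price on the line item rather than only on the product mirrors the description above: the price at order time is part of the order, so later product price changes do not alter historical orders.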
https://www.oreilly.com/library/view/spring-data/9781449331863/ch01.html
Build a fully functional (non-violent) Squid Games Doll that plays red-light-green-light with you.

Things used in this project

Hardware components
Software apps and online services
Hand tools and fabrication machines
3D Printer (generic)

Story

Built a fully functional Squid Games doll. She plays the red-light-green-light game with you. Complete with rotating head, colored eyes, and she talks! She uses ultra-sonic and motion detection to determine if you win or lose. But don't worry, if you lose she just asks if you want to play again. Watch the video and let me know what you think. I used every pin on the Arduino UNO! Which I've never done before, so this was an achievement for myself. This project took me 3 weeks to build, with 1 week dedicated entirely to printing! It took me 6 days to print this doll, 1 week for the build, and another week to edit the video. ELEGOO sent me a free UNO kit if I make them a video, so this is why I built the doll. It was either this or build an escape room. I'm happy they chose this project. I hope people enjoy it because it was a fun build that came out looking really nice and creeps out a bunch of people. But more importantly, it works. Here are all the parts I will use for this build.

1. Start Printing

Printing is going to take a long time. It took me 6 days to print the entire doll out. I also used different color filament so that I could reduce the amount of painting. I remixed a model I found on thingiverse.com, hollowed out the center, and added access holes for the electronics. I also modified the chest plate for the servo and ultra-sonic to be mounted.

2. Nobody likes painting

Time to paint. I used generic spray paint for this. I painted the inside of the doll's head (masked off the eyes) so that the LEDs for the eyes will not make the entire face glow. Although this might be the effect you are looking for, I wanted just the eyes to glow.

3.
Magnets attract but glue sticks

One way to attach all of the doll's limbs is to melt magnets into the plastic. This is if you want to be able to take her apart. If I were to do this project again I would probably just glue all the limbs on her. As I see it now, there is little advantage to using magnets aside from the fact that she can fit into a smaller box for storage if you want. The only thing you should not attach at this point is the head.

4. Easy on the eyes

Start with the easiest step, the eyes. I used tri-color LEDs for the eyes. As you know, you can mix and match RGB colors to get basically any color you'd like. I stuck with primary and secondary colors so I didn't have to PWM the signals. But you can if you are looking for that. The longest pin is the ground; that will be pin 2. Connect the LED as pictured using 220ohm resistors for each lead aside from the ground. For mounting, I simply hot glued the LEDs as close to the center of the eyes as I could get, but on the reverse side. Be sure to leave long enough wire to pass down the neck and into the lower part of her body.

5. LCD Menu

The next easiest component is the 16x2 LCD screen. You should use the LCD screen with an I2C adapter. It will make your life much easier and reduces the IO count from 6 to 2. Once this is connected, the LCD should start up with "Welcome to the squid games!" on the display. For mounting, I printed out a 1mm thick circle. I made this thin so that I could mold it to the doll's back with a heat gun. This is much easier than figuring out the contours of her back (at least for me). I installed threaded inserts for the display with nuts on the reverse side to secure the display and the display mount to the body.

6. Only owl heads rotate 180 degrees

The servo was difficult for one main reason: I don't use the servo library. I know that sounds weird, but I had to use timer1 for the 4 digit display update and the servo library also uses this.
Luckily, the servo is either at 0 degrees or 180 degrees and there is no in between, making this a lot easier. Timer1 is set up for 0.5ms intervals (2000Hz). The servo period is 20ms. At 0 degrees the pin only needs to be high for 2 counts and low for the rest of the period. For 180 degrees the pin needs to be high for 4 counts and low the rest of the time. There is a nice mount on the chest plate for the servo. You can screw it into place or glue it into place. I used epoxy to secure the servo to the chest plate because it also adds strength to the chest plate and hopefully prevents it from damage.

7. Sounds like a bat

Next we will install the ultra sonic distance module. I have this updating every 250ms. It also has a nice mounting location on the chest plate. There are only 2 wires for this module. I used epoxy to mount the ultra sonic to the chest plate.

8. No strings attached

The IR sensor for the remote is only needed if you want to control the game play. I thought this would be fun but don't really use this mode; automatic game play is fun enough. I chose to mount the IR sensor inside a clip on the doll's hair. You can obviously choose to place it somewhere else. I was trying to hide it, but maybe there is a better place because the IR doesn't always see the remote when she turns her head and the sensor is on the other side.

9. Time to Time

Next we will set up the timer display. This is a lot of work for a 4 digit display. I will include the connection diagram from ELEGOO. The game play is only up to 5 minutes, so I also removed the use of the most significant digit. But you can decide to keep it if you have the IO pin available. To update the display you have to cycle the LEDs very quickly because you can only have one digit active at a time. This is why they seem to flicker when watched through a camera. I used a 2ms refresh rate, which is fast enough that you cannot see the flicker.
At 5ms I can start to see it flicker when looking at the display in my peripheral vision. In addition, you will need the shift register 74HC595. Mounting the display was not fun. I decided it was best to integrate the display into her belt. The original doll in Squid Games does not have a belt of course, but sacrifices had to be made to get this display on her. If you choose this route too, mask off a square the same size as the display, then cut it out with a Dremel. I then used epoxy putty to add a gradual transition to the display. But this was not needed; I just thought it looked better this way. I mounted the 74HC595 on the prototype shield, otherwise you will have wires going all over the place. An alternative solution is to use a different timer display that has a more convenient communication interface with fewer pins.

10. I saw you move

The motion detector is a weird little guy. This thing uses infrared to detect movement. One thing I learned is that this sensor needs time to warm up. On startup it needs 1 minute to warm up. That is why there is a 1 minute startup time for the doll. Another annoyance with this module is that the fastest it can update a movement detection is about 5 seconds. The last annoyance is how sensitive this sensor is. Even with the sensitivity turned all the way down, it still can see the smallest of movements and sometimes movement that I don't even know the source of. To help prevent these "false positives" I mounted the sensor inside a horse blinder box. The box has a small hole (7mm) for the motion detector to look out. As a bonus, this prevents you from having to mount this giant sensor on the outside of the doll. The motion sensor only has one binary wire for feedback: motion or not. To mount the sensor, I printed out the horse blinder and glued it to the inside of the doll. I then drilled a hole through the body. I used threaded inserts on the blinder box to secure the motion sensor.

11.
Don't push my buttons

Finally, we are at the buttons. If you have the extra I/O pins, it is easier to connect each of these to a digital input. But I did not have this luxury on the UNO. Instead I had to use an analog input to read the resistor values to determine which button was being pressed. The values I used were 1K, 2K, and 5K. Then I had a 220 Ohm resistor to pull the analog input low. Otherwise it will float and you'll get random button presses. I mounted the buttons on the same mounting plate as the LCD. This was not easy but I didn't have a better way. Soldering the wires onto these buttons and then getting them to pass through little holes drilled in the plastic will test your patience.

12. Can you hear me now?

The last step and probably the most important is the sound module. This will use the serial port on the UNO, so you must add 1K Ohm resistors to the Tx and Rx pins; otherwise, you will get blocked from programming the UNO after this connection is made. In addition, you will need to use the "busy" pin so that the UNO knows that a sound is already playing. This is very important if you have MP3s play back-to-back. I mounted the MP3 player module on the prototype shield. This shield makes mounting components like this very convenient because it then just plugs into the UNO. This module will need an 8ohm speaker and has an output of 3W. The speaker was just glued down to the base of the doll. I drilled small holes under the speaker for the sound to come out better.

13. Mount the UNO

Install the UNO onto the platform and plug the prototype shield onto the UNO. Be sure that you have labeled all of the wires; if not, you probably don't know where any of them go by now. With a little bit of negotiation, you can get the mounted UNO inside the doll with all the wires connected. I used threaded inserts to mount the platform to the bottom of the doll.

14. Test Fix Test

This is when you get to put your debugging hat on.
I can tell you the software on GitHub is working, so at least that is one less thing to debug. But go ahead anyway if you have doubts, and send me any updates you find.

15. Let's play

Time to test her out and play a game. Here is how the game is programmed. On startup she turns her head forward. The motion sensor takes a full minute to start up, so there is a timer when it starts. Halfway through she giggles and turns her head around. Then she announces when she is ready. Depending on whether you have the game set to remote, she says different things. In Auto mode she asks you to press the play button. In my case, this is the far right button. In remote mode she will ask you to press the power button when you are ready. Then press the play button to toggle between red light and green light. So when you are ready, press the go button and she will give you 10 seconds to get in place. Usually someone else nearby will press this button. Then the game begins. She will start with green light. For green light you have to get within 50cm to trigger a win. If you are within 100cm she will indicate that you are getting closer. Green light only uses the sonar. For red light the motion sensor and the distance sensor are both used. If you move enough for the motion sensor to trip, or if you move more than 10cm forward, you will lose the game. You will also lose the game if time runs out. She will remind you that time is almost out at 5 seconds left. The last cool feature is that she will also speak in the Korean voice for the red light. This is a menu feature. Press the far left button to toggle the menu item, and the center button to toggle the item options.

16. Watch Video

This video took me a long time to edit. I have probably 30 hours in just editing. But it was fun making it. I think it came out good and is funny, but I want you to see for yourself. Please let me know what you think and if you have any questions. Thank You!
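For reference, the win/lose rules from step 15 can be distilled into a pair of small pure functions. This is my own restatement for clarity, not the sketch's actual code; the 50cm, 100cm, and 10cm thresholds come from the description above.

```java
public class GameRules {
    enum Outcome { WIN, CLOSER, LOSE, CONTINUE }

    // Green light: only the sonar matters. Within 50 cm wins, within
    // 100 cm she announces you are getting closer, otherwise keep playing.
    static Outcome greenLight(int distanceCm) {
        if (distanceCm <= 50) return Outcome.WIN;
        if (distanceCm <= 100) return Outcome.CLOSER;
        return Outcome.CONTINUE;
    }

    // Red light: tripping the motion sensor, creeping forward more than
    // 10 cm, or running out of time all lose the game.
    static Outcome redLight(boolean motion, int movedCm, int secondsLeft) {
        if (motion || movedCm > 10 || secondsLeft <= 0) return Outcome.LOSE;
        return Outcome.CONTINUE;
    }

    public static void main(String[] args) {
        System.out.println(greenLight(40));         // WIN
        System.out.println(greenLight(80));         // CLOSER
        System.out.println(redLight(true, 0, 30));  // LOSE
        System.out.println(redLight(false, 5, 30)); // CONTINUE
    }
}
```

Keeping the decision logic in side-effect-free functions like these makes the doll's state machine easy to test without any hardware attached.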
Schematics

Wire diagram
This is how I connected all of the components to the UNO.

The project repo
All of the files for this build are stored here.

Code

Squid Game Doll Sketch (C/C++)
This will control all of the sensors and the game logic.

/// CodeMakesItGo Dec 2021
#include <DFPlayerMini_Fast.h>
#include <FireTimer.h>
#include <IRremote.h>
#include <LiquidCrystal_I2C.h>
#include <SoftwareSerial.h>
#include <SR04.h>
#include <Wire.h>

/*-----( Analog Pins )-----*/
#define BUTTONS_IN A0
#define SONAR_TRIG_PIN A1
#define SONAR_ECHO_PIN A2
#define MOTION_IN A3

/*-----( Digital Pins )-----*/
#define LED_BLUE 13
#define LED_GREEN 12
#define LED_RED 11
#define SEGMENT_DATA 10  // DS
#define SEGMENT_CLOCK 9  // SHCP
#define SEGMENT_LATCH 8  // STCP
#define SEGMENT_1_OUT 7
#define SEGMENT_2_OUT 6
#define SEGMENT_3_OUT 5
#define IR_DIGITAL_IN 4  // IR Remote
#define SERVO_OUT 3
#define DFPLAYER_BUSY_IN 2

/*-----( Configuration )-----*/
#define TIMER_FREQUENCY 2000
#define TIMER_MATCH (int)(((16E+6) / (TIMER_FREQUENCY * 64.0)) - 1)
#define TIMER_2MS ((TIMER_FREQUENCY / 1000) * 2)
#define VOLUME 30                    // 0-30
#define BETTER_HURRY_S 5             // play clip at 5 seconds left
#define WIN_PROXIMITY_CM 50          // cm distance for winner
#define CLOSE_PROXIMITY_CM 100       // cm distance for close to winning
#define GREEN_LIGHT_MS 3000          // 3 seconds on for green light
#define RED_LIGHT_MS 5000            // 5 seconds on for red light
#define WAIT_FOR_STOP_MOTION_MS 5000 // 5 seconds to wait for motion detection to stop

/*-----( Global Variables )-----*/
static unsigned int timer_1000ms = 0;
static unsigned int timer_2ms = 0;
static unsigned char digit = 0;      // digit for 4 segment display
static int countDown = 60;           // Start 1 minute countdown on startup
static const int sonarVariance = 10; // detect movement if greater than this
static bool gameInPlay = false;
static bool faceTree = false;
static bool remotePlay = false;

// 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, NULL
// (standard common-cathode segment patterns assumed; verify against your wiring)
const unsigned char numbers[] = {0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07,
                                 0x7F, 0x6F, 0x77, 0x7C, 0x39, 0x5E, 0x79, 0x71,
                                 0x00};

const char *MenuItems[] = {"Language", "Play Time", "Play Type"};
typedef enum
{
  LANGUAGE,
  PLAYTIME,
  PLAYTYPE,
  MENUITEM_COUNT
} MenuItemTypes;

const char *Languages[] = {"English", "Korean"};
typedef enum
{
  ENGLISH,
  KOREAN,
  LANUAGE_COUNT
} LanguageTypes;
static int language = 0;

const char *PlayTime[] = {"300", "240", "180", "120", "60", "30", "15"};
typedef enum
{
  PT300,
  PT240,
  PT180,
  PT120,
  PT60,
  PT30,
  PT15,
  PLAYTIME_COUNT
} PlayTimeTypes;
const int playTimes[] = {300, 240, 180, 120, 60, 30, 15};
static int playTime = 0;

const char *PlayType[] = {"Auto", "Remote"};
typedef enum
{
  AUTO,
  REMOTE,
  PLAYTYPE_COUNT
} PlayTypeTypes;
static int playType = 0;

typedef enum
{
  BLACK,
  RED,
  GREEN,
  BLUE,
  WHITE,
  YELLOW,
  PURPLE
} EyeColors;
EyeColors eyeColor = BLACK;

typedef enum
{
  WARMUP,
  WAIT,
  READY,
  GREENLIGHT,
  REDLIGHT,
  WIN,
  LOSE
} GameStates;
static GameStates gameState = WARMUP;

/*-----( Class Objects )-----*/
FireTimer task_50ms;
FireTimer task_250ms;
DFPlayerMini_Fast dfPlayer;
SR04 sonar = SR04(SONAR_ECHO_PIN, SONAR_TRIG_PIN);
IRrecv irRecv(IR_DIGITAL_IN);
decode_results irResults;
LiquidCrystal_I2C lcdDisplay(0x27, 16, 2); // 16x2 LCD display

/*-----( Functions )-----*/
void translateIR() // takes action based on IR code received
{
  switch (irResults.value)
  {
    case 0xFFA25D:
      Serial.println("POWER");
      if (gameState == WAIT)
      {
        gameInPlay = true;
      }
      break;
    case 0xFFE21D: Serial.println("FUNC/STOP"); break;
    case 0xFF629D: Serial.println("VOL+"); break;
    case 0xFF22DD: Serial.println("FAST BACK"); break;
    case 0xFF02FD:
      Serial.println("PAUSE");
      remotePlay = !remotePlay;
      break;
    case 0xFFC23D: Serial.println("FAST FORWARD"); break;
    case 0xFFE01F: Serial.println("DOWN"); break;
    case 0xFFA857: Serial.println("VOL-"); break;
    case 0xFF906F: Serial.println("UP"); break;
    case 0xFF9867: Serial.println("EQ"); break;
    case 0xFFB04F: Serial.println("ST/REPT"); break;
    case 0xFF6897: Serial.println("0"); break;
    case 0xFFFFFFFF: Serial.println(" REPEAT"); break;
    default:
      Serial.println(" other button ");
  }
}
bool isPlayingSound()
{
  return (digitalRead(DFPLAYER_BUSY_IN) == LOW);
}

void updateTimeDisplay(unsigned char digit, unsigned char num)
{
  digitalWrite(SEGMENT_LATCH, LOW);
  shiftOut(SEGMENT_DATA, SEGMENT_CLOCK, MSBFIRST, numbers[num]);

  // Active LOW
  digitalWrite(SEGMENT_1_OUT, digit == 1 ? LOW : HIGH);
  digitalWrite(SEGMENT_2_OUT, digit == 2 ? LOW : HIGH);
  digitalWrite(SEGMENT_3_OUT, digit == 3 ? LOW : HIGH);

  digitalWrite(SEGMENT_LATCH, HIGH);
}

void updateServoPosition()
{
  static int servoPulseCount = 0;
  static bool lastPosition = false;

  // Only get new value at start of period
  if (servoPulseCount == 0)
    lastPosition = faceTree;

  if (!lastPosition) // 180 degrees
  {
    digitalWrite(SERVO_OUT, servoPulseCount < 5 ? HIGH : LOW);
  }
  else // 0 degrees
  {
    digitalWrite(SERVO_OUT, servoPulseCount < 1 ? HIGH : LOW);
  }

  servoPulseCount = (servoPulseCount + 1) % 40; // 20ms period
}

void updateMenuDisplay(const int button)
{
  static int menuItem = 0;
  static int menuOption = 0;

  switch (button)
  {
    case 1:
      menuItem = (menuItem + 1) % MENUITEM_COUNT;
      if (menuItem == LANGUAGE)
      {
        menuOption = language;
      }
      else if (menuItem == PLAYTIME)
      {
        menuOption = playTime;
      }
      else if (menuItem == PLAYTYPE)
      {
        menuOption = playType;
      }
      else
      {
        menuOption = 0;
      }
      break;
    case 2:
      if (menuItem == LANGUAGE)
      {
        menuOption = (menuOption + 1) % LANUAGE_COUNT;
        language = menuOption;
      }
      else if (menuItem == PLAYTIME)
      {
        menuOption = (menuOption + 1) % PLAYTIME_COUNT;
        playTime = menuOption;
      }
      else if (menuItem == PLAYTYPE)
      {
        menuOption = (menuOption + 1) % PLAYTYPE_COUNT;
        playType = menuOption;
      }
      else
      {
        menuOption = 0;
      }
      break;
    case 3:
      if (gameState == WAIT)
      {
        gameInPlay = true;
      }
      if (gameState == GREENLIGHT || gameState == REDLIGHT)
      {
        gameInPlay = false;
      }
    default:
      break;
  }

  if (menuOption != -1)
  {
    lcdDisplay.clear();
    lcdDisplay.setCursor(0, 0);
    lcdDisplay.print(MenuItems[menuItem]);
    lcdDisplay.setCursor(0, 1);
    if (menuItem == LANGUAGE)
    {
      lcdDisplay.print(Languages[menuOption]);
    }
    else if (menuItem == PLAYTIME)
    {
      lcdDisplay.print(PlayTime[menuOption]);
    }
    else if (menuItem == PLAYTYPE)
    {
      lcdDisplay.print(PlayType[menuOption]);
    }
    else
    {
      lcdDisplay.print("unknown option");
    }
  }
  else
  {
    menuItem = 0;
    menuOption = 0;
  }
}

void handleButtons()
{
  static int buttonPressed = 0;
  int value = analogRead(BUTTONS_IN);

  if (value < 600) // buttons released
  {
    if (buttonPressed != 0)
      updateMenuDisplay(buttonPressed);
    buttonPressed = 0;
    return;
  }
  else if (value < 700)
  {
    Serial.println("button 1");
    buttonPressed = 1;
  }
  else if (value < 900)
  {
    Serial.println("button 2");
    buttonPressed = 2;
  }
  else if (value < 1000)
  {
    Serial.println("button 3");
    buttonPressed = 3;
  }
  else
  {
    Serial.println(value);
    buttonPressed = 0;
  }
}

static int lastSonarValue = 0;
void handleSonar()
{
  int value = sonar.Distance();
  if (value > lastSonarValue + sonarVariance ||
      value < lastSonarValue - sonarVariance)
  {
    Serial.println(value);
    lastSonarValue = value;
  }
}

static int lastMotion = 0;
void handleMotion()
{
  int value = digitalRead(MOTION_IN);
  if (value != lastMotion)
  {
    lastMotion = value;
  }
  if (lastMotion)
    Serial.println("Motion Detected");
}

void handleLeds()
{
  digitalWrite(LED_RED, eyeColor == RED || eyeColor == WHITE ||
                        eyeColor == PURPLE || eyeColor == YELLOW ? HIGH : LOW);
  digitalWrite(LED_GREEN, eyeColor == GREEN || eyeColor == WHITE ||
                          eyeColor == YELLOW ? HIGH : LOW);
  digitalWrite(LED_BLUE, eyeColor == BLUE || eyeColor == WHITE ||
                         eyeColor == PURPLE ? HIGH : LOW);
}

void handleRemote()
{
  // have we received an IR signal?
  if (irRecv.decode(&irResults))
  {
    translateIR();
    irRecv.resume(); // receive the next value
  }
}

// Timer 1 ISR
ISR(TIMER1_COMPA_vect)
{
  // Allow this ISR to be interrupted
  sei();

  updateServoPosition();

  if (timer_1000ms++ == TIMER_FREQUENCY)
  {
    timer_1000ms = 0;
    countDown--;
    if (countDown < 0)
    {
      countDown = 0;
    }
  }

  if (timer_2ms++ == TIMER_2MS)
  {
    timer_2ms = 0;
    if (digit == 0) updateTimeDisplay(1, countDown % 10);
    if (digit == 1) updateTimeDisplay(2, (countDown / 10) % 10);
    if (digit == 2) updateTimeDisplay(3, (countDown / 100) % 10);
    if (digit == 3) updateTimeDisplay(4, 16);
    digit = ((digit + 1) % 4);
  }
}

void playGame()
{
  static int sequence = 0;
  static long internalTimer = millis();
  static bool closerClipPlayed = false;
  static bool hurryUpClipPlayed = false;
  static int captureDistance = 0;
  long currentTimer = internalTimer;

  if (isPlayingSound())
    return;

  if (gameState == WARMUP)
  {
    // power up sound
    if (sequence == 0)
    {
      Serial.println("Warming Up");
      dfPlayer.playFolder(1, 1);
      faceTree = false;
      eyeColor = YELLOW;
      sequence++;
    }
    // laugh at 30
    else if (sequence == 1 && countDown <= 30)
    {
      Serial.println("Laughing");
      dfPlayer.playFolder(1, 2);
      faceTree = true;
      sequence++;
    }
    else if (sequence == 2 && countDown <= 10)
    {
      Serial.println("Almost ready");
      dfPlayer.playFolder(1, 3);
      sequence++;
    }
    else if (sequence == 3 && countDown == 0)
    {
      Serial.println("All ready, lets play");
      dfPlayer.playFolder(1, 4);
      faceTree = false;
      sequence = 0;
      gameState = WAIT;
      gameInPlay = false;
    }
  }
  else if (gameState == WAIT)
  {
    currentTimer = millis();
    if (gameInPlay)
    {
      gameState = READY;
      remotePlay = false;
      sequence = 0;
    }
    // Every 30 seconds
    else if (currentTimer - internalTimer > 30000 || sequence == 0)
    {
      internalTimer = millis();
      if (playType == AUTO)
      {
        // press the go button when you are ready
        Serial.println("Press the go button when you are ready");
        dfPlayer.playFolder(1, 5);
      }
      else
      {
        Serial.println("Press the power button on the remote when you are ready");
        dfPlayer.playFolder(1, 6);
      }
      // eyes are blue
      eyeColor = BLUE;
      // facing players
      faceTree = false;
      gameInPlay = false;
      sequence++;
    }
  }
  else if (gameState == READY)
  {
    currentTimer = millis();
    if (sequence == 0)
    {
      // get in position, game will start in 10 seconds
      Serial.println("Get in position.");
      dfPlayer.playFolder(1, 7);
      countDown = 10;
      // eyes are white
      eyeColor = WHITE;
      // facing players
      faceTree = false;
      sequence++;
      internalTimer = millis();
    }
    else if (sequence == 1)
    {
      if (playType == REMOTE)
      {
        if (remotePlay)
          sequence++;
      }
      else
        sequence++;
    }
    else if (sequence == 2)
    {
      // at 0 seconds, here we go!
      if (countDown == 0)
      {
        countDown = playTimes[playTime];
        Serial.print("play time set to ");
        Serial.println(countDown);
        Serial.println("Here we go!");
        dfPlayer.playFolder(1, 8);
        gameState = GREENLIGHT;
        sequence = 0;
      }
    }
  }
  else if (gameState == GREENLIGHT)
  {
    currentTimer = millis();
    if (sequence == 0)
    {
      // eyes are green
      eyeColor = GREEN;
      // play green light
      Serial.println("Green Light!");
      dfPlayer.playFolder(1, 9);
      sequence++;
    }
    else if (sequence == 1)
    {
      // play motor sound
      dfPlayer.playFolder(1, 19);
      // facing tree
      faceTree = true;
      sequence++;
      internalTimer = millis();
    }
    else if (sequence == 2)
    {
      // wait 3 seconds or until remote
      // switch to red light
      if (playType == AUTO && currentTimer - internalTimer > GREEN_LIGHT_MS)
      {
        sequence = 0;
        gameState = REDLIGHT;
      }
      else if (playType == REMOTE && remotePlay == false)
      {
        sequence = 0;
        gameState = REDLIGHT;
      }
      else
      {
        // look for winner button or distance
        if (gameInPlay == false || lastSonarValue < WIN_PROXIMITY_CM)
        {
          sequence = 0;
          gameState = WIN;
        }
        else if (countDown <= 0)
        {
          Serial.println("Out of Time");
          dfPlayer.playFolder(1, 16);
          sequence = 0;
          gameState = LOSE;
        }
        // at 2 meters play "your getting closer"
        else if (lastSonarValue < CLOSE_PROXIMITY_CM && closerClipPlayed == false)
        {
          Serial.println("Getting closer!");
          dfPlayer.playFolder(1, 11);
          closerClipPlayed = true;
        }
        // if less than 5 seconds play better hurry
        else if (countDown <= BETTER_HURRY_S &&
                 hurryUpClipPlayed == false)
        {
          Serial.println("Better Hurry");
          dfPlayer.playFolder(1, 12);
          hurryUpClipPlayed = true;
        }
      }
    }
  }
  else if (gameState == REDLIGHT)
  {
    currentTimer = millis();
    if (sequence == 0)
    {
      // eyes are red
      eyeColor = RED;
      Serial.println("Red Light!");
      if (language == ENGLISH)
      {
        // play red light English
        dfPlayer.playFolder(1, 10);
      }
      else
      {
        // play red light Korean
        dfPlayer.playFolder(1, 18);
      }
      sequence++;
    }
    else if (sequence == 1)
    {
      // play motor sound
      dfPlayer.playFolder(1, 19);
      // facing players
      faceTree = false;
      // Save current distance
      captureDistance = lastSonarValue;
      sequence++;
      internalTimer = millis();
    }
    else if (sequence == 2)
    {
      // wait for motion to settle
      if (lastMotion == 0 || (currentTimer - internalTimer) > WAIT_FOR_STOP_MOTION_MS)
      {
        internalTimer = millis();
        sequence++;
        Serial.println("Done settling");
      }
      Serial.println("Waiting to settle");
    }
    else if (sequence == 3)
    {
      // back to green after 5 seconds
      if (playType == AUTO && currentTimer - internalTimer > RED_LIGHT_MS)
      {
        sequence = 0;
        gameState = GREENLIGHT;
      }
      else if (playType == REMOTE && remotePlay == true)
      {
        sequence = 0;
        gameState = GREENLIGHT;
      }
      else
      {
        // can't push the button while red light
        // detect movement
        // detect distance change
        if (gameInPlay == false || lastMotion == 1 || lastSonarValue < captureDistance)
        {
          Serial.println("Movement detected!");
          dfPlayer.playFolder(1, 15);
          sequence = 0;
          gameState = LOSE;
        }
        if (countDown == 0)
        {
          Serial.println("Out of time");
          dfPlayer.playFolder(1, 16);
          sequence = 0;
          gameState = LOSE;
        }
      }
    }
  }
  else if (gameState == WIN)
  {
    if (sequence == 0)
    {
      // play winner sound
      Serial.println("You Won!");
      dfPlayer.playFolder(1, 13);
      // eyes are white
      eyeColor = WHITE;
      // facing players
      faceTree = false;
      sequence++;
    }
    else if (sequence == 1)
    {
      // wanna play again?
      Serial.println("Play Again?");
      dfPlayer.playFolder(1, 17);
      gameInPlay = false;
      countDown = 0;
      // go to wait
      gameState = WAIT;
      sequence = 0;
    }
  }
  else if (gameState == LOSE)
  {
    if (sequence == 0)
    {
      // sorry, better luck next time
      Serial.println("Sorry, you lost");
      dfPlayer.playFolder(1, 14);
      // eyes are purple
      eyeColor = PURPLE;
      // face players
      faceTree = false;
      sequence++;
    }
    else if (sequence == 1)
    {
      // wanna play again?
      Serial.println("Play Again?");
      dfPlayer.playFolder(1, 17);
      gameInPlay = false;
      countDown = 0;
      // go to wait
      gameState = WAIT;
      sequence = 0;
    }
  }
  else
  {
    // Shouldn't ever get here
    gameState = WARMUP;
  }
}

void loop() /*----( LOOP: RUNS CONSTANTLY )----*/
{
  if (task_50ms.fire())
  {
    handleRemote();
    handleButtons();
  }

  if (task_250ms.fire())
  {
    handleSonar();
    handleMotion();
    handleLeds();
    playGame();
    Serial.println(isPlayingSound());
  }
}

// Setup Timer 1 for 2000Hz
void setupTimer()
{
  cli();      // stop interrupts
  TCCR1A = 0; // set entire TCCR1A register to 0
  TCCR1B = 0; // same for TCCR1B
  TCNT1 = 0;  // initialize counter value to 0
  // set compare match register
  OCR1A = TIMER_MATCH; // = (16*10^6) / (2000*64) - 1 (must be <65536), 2000Hz
  // turn on CTC mode
  TCCR1B |= (1 << WGM12);
  // Set CS11 and CS10 bits for 64 prescaler
  TCCR1B |= (1 << CS11) | (1 << CS10);
  // enable timer compare interrupt
  TIMSK1 |= (1 << OCIE1A);
  sei(); // allow interrupts
}

void setup()
{
  Serial.begin(9600);

  pinMode(MOTION_IN, INPUT);
  pinMode(BUTTONS_IN, INPUT);
  pinMode(DFPLAYER_BUSY_IN, INPUT);
  pinMode(SERVO_OUT, OUTPUT);
  pinMode(LED_RED, OUTPUT);
  pinMode(LED_GREEN, OUTPUT);
  pinMode(LED_BLUE, OUTPUT);
  pinMode(SEGMENT_LATCH, OUTPUT);
  pinMode(SEGMENT_CLOCK, OUTPUT);
  pinMode(SEGMENT_DATA, OUTPUT);
  pinMode(SEGMENT_1_OUT, OUTPUT);
  pinMode(SEGMENT_2_OUT, OUTPUT);
  pinMode(SEGMENT_3_OUT, OUTPUT);

  irRecv.enableIRIn();     // Start the receiver
  dfPlayer.begin(Serial);  // Use the standard serial stream for DfPlayer
  dfPlayer.volume(VOLUME); // Set the DfPlayer volume
  lcdDisplay.init();       // initialize the lcd
  lcdDisplay.backlight(); // Turn on backlight
  setupTimer();           // Start the high resolution timer ISR

  // Display welcome message
  lcdDisplay.setCursor(0, 0);
  lcdDisplay.print("Welcome to the");
  lcdDisplay.setCursor(0, 1);
  lcdDisplay.print("Squid Games!");

  // short delay to display welcome screen
  delay(1000);

  task_50ms.begin(50);   // Start the 50ms timer task
  task_250ms.begin(250); // Start the 250ms timer task
}

The article was first published on Hackster, December 7, 2021. Author: codemakesitgo.
https://community.dfrobot.com/makelog-312148.html
I have this number: 666872700, and I want to round it to the nearest 100000 so that it becomes 666900000. Math.Round(666872700, 4) does not do this.

The documentation for Math.Round clearly states:

    Rounds a double-precision floating-point value to a specified number of fractional digits.

So it rounds the digits behind the decimal separator, but not the integral part. I know of no other way than dividing, rounding and then multiplying again. If you know a little C#, you can use the following extension method Jason Larke wrote in his answer to this question. I don't know whether it works, but you should be able to translate it to VB.NET and try it:

public static class MathExtensions
{
    public static int Round(this int i, int nearest)
    {
        if (nearest <= 0 || nearest % 10 != 0)
            throw new ArgumentOutOfRangeException("nearest", "Must round to a positive multiple of 10");

        return (i + 5 * nearest / 10) / nearest * nearest;
    }
}
https://codedump.io/share/5hgv2MlUwefw/1/mathround-in-vbnet-not-working-as-expected
This is the fourth in a series of articles that hopefully will be educational to novices and helpful to more advanced developers alike. I hope to learn a lot from doing this. I am taking the approach of maintaining a journal of the day-to-day process of software development, starting from a concept and going to a product. I hope you enjoy this series of articles.

We have been handed a project to create an issue tracking system (no longer just an application) targeted at software developers, standalone programmers to be exact. We have gathered a list of requirements and we have a vision to drive our project. Yesterday we created a simple preliminary specification for the application as a whole (not the individual components). We also created our UWEs, and that brings us to what we need to do today.

At this stage in the design process, we have done our research, racked our brains and formulated what we believe to be a solid working model for how we are going to fulfill this project. We have even put ourselves in the users' shoes (so to speak) and challenged our model. We still don't have a design document, but we're not quite ready for that just yet. Our application model seems to be perfect for our target audience, but you can never be too sure. We must not rely on our intuition; it is time to get back with our users and find out what they think about our model.

Today, we are going to create prototype UIs to illustrate the UWEs. We will use these prototypes when meeting with our users to get buy-in, validate our ideas and flush out any potential problems.

Most of the time when we think of prototypes, we think of real applications that present a UI simulating the appearance of our proposed final application. Often we will use VB, or for web apps, FrontPage, to quickly produce a simulated UI, planning to throw it away when we are done. We usually try to spend just enough time on these to make them look "good enough".
We don't invest any time into error handling or making sure things are spelled right. Then we happily take our simulated applications to our users and present our ideas.

Users, however, typically don't have a clue about how software is developed. When they hear the word "prototype", they probably think about something like a prototype car from Detroit or something like that. In fact, I can't think of an instance when I use the word "prototype" outside technical fields. Obviously, users just don't have the same meaning assigned to the word that developers do. Developers often fall into the trap of failing to recognize this gap and jump feet first into their own graves. The next stop on this train occurs the day you show the prototype to your manager, boss or other decision maker, and their response is "Wow! I knew you were fast, but not that fast. I guess this will be ready by Friday then." All that effort put into designing a perfect application is thrown away by a mistake.

What this means, of course, is that we have identified a risk to project success. Just like the technical risks we identified on day 2, these risks have the potential to derail and destroy our project. We must take steps to understand these risks and be prepared to prevent the causes of this risk, or at least limit our exposure to it.

What is the best way to approach the problem that presenting prototypes to users can cause? Well, as with anything, there's more than one way to skin a ... um ... I mean, more than one way to accomplish something. Here are some of the strategies we can choose from:

- Wait until the application is nearly finished and present the real thing
- Present "semi-useful" prototypes
- Use "tracer bullets" and frame in the real application
- Present interactive mock-up prototypes
- Present paper-based screenshots
- Present a computer-based slideshow of screenshots
- Don't present any prototypes at all

Which strategy is best? Well, it just depends on the project, the makeup of the users, the target audience, schedules, etc. Part of our job here is to choose an appropriate strategy. Let's examine each possible strategy a little.

Waiting until the application is nearly finished? I can't think of a situation where this is a good idea. I just can't.
Not only would this increase our risk (we could have wasted much of that design work if the users puke on our ideas), but I can't think of a reason why the prototype would be better then than now. Well, we might learn something new about this project, have an idea, or come across a new problem, but that is another risk we must weigh. Even considering the potential risk of us uncovering a new idea or problem, we must counter that with the cost of user-mandated changes.

Just what exactly does "semi-useful" mean? I think this is one of those terms of convenience that only has meaning to developers but means nothing to users. As far as users are concerned, semi-useful is the same as useless. Maybe that's a little harsh, but I suspect that most of the time, it is not.

There is a concept called "tracer bullets" which I found in a book called "The Pragmatic Programmer". The general concept here is that, if we must present users with some form of interactive experience, we must be practical and realize that there is a high risk of having our project schedule smashed. Instead of a mock-up, we go ahead and "frame in" our applications, not leaving out error handling and other real-code stuff. I really don't care much for this concept, but if the situation demands it, it's better than no prototypes at all.

The interactive mock-up is the most common approach used to present prototypes to users, and the one which many of us have seen fail over and over again. I don't think this method is necessarily a bad way to go about presenting our prototype, but it needs structure, control and a little luck. More importantly, when using this method, the developer needs to use his intuition to size up the audience and weigh the risk. Is my audience likely to jump on this as a close-to-final product? Are they going to understand that I'm nowhere near finished? Will they be able to fill in the missing gaps (such as no implemented menu points, etc.)?
The answer to these questions is largely dependent on how close to the technical side of development our audience is. It is also dependent on whether or not the developer has evangelized his audience on the nature of software development, and whether or not it was well received. Be careful when taking this approach, and realize that we developers see things that aren't there (like print outputs, help pages, etc.). Users typically don't possess this ability and are likely to look for these types of features, because they are natural outcrops of how they use software and what they expect. If you are going to take this approach, carefully consider how you are going to deal with missing features (and be realistic, don't overlook anything). Are you going to mock up every possible screen and output? Sanity says that this is almost always a terrible idea, because if that's your approach, you might as well wait till the project is finished; making all that stuff work is going to take a while.

The paper screenshot approach is an often overlooked method that deserves much more attention. With this approach, you have the benefit of writing a prototype without the risk of overlooking features, or exposing the user to crashes or other unexpected behavior. You have total control over what you present, and you can storyboard the screenshots for effect. This method is not always ideal, though. Be careful with this approach, because you will most likely leave the screenshots with the audience. With an application, you will typically control where and when the program can be executed. You don't want to put so much on the page that users will be asking questions for days, nor do you want to show too little. Bear this in mind when taking those screenshots. Also, be highly selective in what you show.

P.S.
Another advantage to this method is that, if we need to (for whatever reason), we can use Photoshop to cut and paste together GUI elements without having to make any special GUI components we don't already have.

As for the computer-based slideshow: I have never used this method, but have seen it used. On one hand, it is nice to be able to see the application in action on screen. The programmer still controls what the user sees and how the screens are presented, which is a good thing. On the other hand, one of the problems I have with computer-based prototypes is that users often have a hard time separating a computerized demo from the real thing. The advantage of paper is that it is totally disconnected from the computer, and users seem to be better able to see this separation. One other concern with this method is that I suspect it will take longer to put this type of prototype together. I don't like spending much time on prototypes; just enough to make them look good and useable, but not more.

At the other extreme of these options is the option of not presenting any prototypes at all. I don't usually like this method, but the risks of presenting a prototype may just be enough to warrant its use. There are some situations where this method might be appropriate. Even if you decide that presenting prototypes to the users is not a good idea, it might still be a good idea to put together some internal prototypes to test your ideas.

Time to pick a prototype strategy. Let's see: our target audience is software developers, so that should help offset the risk of interactive prototypes. Also, this is a user-mandated project, which means they have a great deal of influence over this project (the spec even demands it). We feel good about our model at this point. This may just be one of the rare situations where an interactive prototype is acceptable. I normally lean heavily to the screenshot method, though, so be careful in your selection.
After some more thought, though, I can't think of a reason why an interactive prototype would benefit us. We want to get our ideas in front of the users, not our applications. Therefore, we are going to take the paper screenshot approach.

To put it simply, a prototype is some medium presented to users with the intent of communicating our vision and model of the application through UWEs. The prototype may include forms, dialogs, reports, web pages, help files, or any other article that the user will see or interact with. An important thing to remember when creating these articles is that they need to be presented in a way that makes sense to our users. For instance, don't take screenshots at 1600 x 1280 resolution if our target user typically uses 1024 x 768. Keep an eye out for "out-of-place" stuff in your prototypes so that you don't give the user or ourselves a false impression.

We have forgotten a critical part of this project. How can we move on without solving this problem? ... We've got to give our project a name. We need something cool, possibly a codename (like Longhorn or Yukon), maybe a user-friendly name, maybe an acronym. We've just got to have a name. No really, we do. The GUI elements we are going to present need a title like every other application, so let's come up with a name for this project. (Besides, who wants to work on the "Issue Tracking project" when they can work on "The Prometheus project"?)

After some thought, I've decided that I like the idea of having a codename for this project, something cool. What are some cool code names for this project? Here are a few of my ideas. I really like cameleon (damn! misspelled it already), but I know I'll get it wrong over and over again. I really can't think of another good project code name right now. Please suggest some good names for this project. (I'd love to hear from readers if they have any cool project code names to offer.)
We're well on the way to having a name for this project, so let's dive into the prototyping. Here is a list of the GUI elements we need to mock up for our users. We could include a sample report, since that is part of UWE #5, but I am uncomfortable with this because we don't really know what fields the user wants, or if our model is going to work at all. We are most interested in validating our model, not getting too deep into details. We want to get a good feel from our users, not a list of questions a mile long.

What should these applications look like? Well, I have given this a great deal of thought, and it's an important question to answer. All of the applications listed above (except the namespace extension and tray icon) are pretty small applications intended to make getting information in and out of the system efficient. We want to make the user feel at home with these applications, since they will be the most commonly used visual aspects of the system. Therefore, we need to take special care when laying out how they will work. Their appearance must feel natural to our users and must not overwhelm them.

Many times, I like to look at what the competition is doing and use their best ideas in my design. In this situation what I would be looking for is what not to do, so I am going to forgo that process on this project. Instead, I want to look at other data entry tools our developers are already using. One name keeps coming up: MS Outlook. Our users are currently using it regularly, and they will be using it as part of the data entry mechanism (sending e-mails to our special accounts), so modeling our UI after it at least seems natural. I know someone out there is thinking that not all developers use MS Outlook, and they would be right. For that reason, let's take a look at Eudora, Pegasus Mail, and The Bat!. (Note: I could not find any decent screenshots of Pegasus Mail, and I did not want to install it on my PC, so I am basing my observations on the limited screenshots I could find.)
Composing a new e-mail in each of these consists of popping up a new window (which appears to always be sizeable) with a menu, toolbar, controls area (fields such as To, CC, Subject) and finally a large area for text. When sizing, the notes area expands horizontally and vertically, and the fields size horizontally. This is pretty much the same as the way MS Outlook works, so I think we are safe taking this approach. I really think that users will find this to be a natural-feeling interface. Another benefit is that we can use SDI-based apps, which makes it easy for the user to quickly hide, maximize, minimize and resize the windows. One potential downside to this approach is that there will be one taskbar entry for each instance of these windows. I don't think this will be too great of an issue, though, since ideally they will not have many of these open at any time. We could look into using a web interface, but I have yet to see a truly rich web interface. If you know of any, please let me know.

I have prototyped the first 3 UWEs and I want to give them a good solid review before proceeding. Here is the source code to our prototype demo application. It takes command line parameters to determine which form to display, IE. "Sample1.exe -idea" or "Sample1.exe -bug".

Here are the screenshots of these initial prototypes:

Recording a bug (screenshots A, B and C)
Recording an idea
Fixing a bug

I feel pretty good about these screen layouts, but I don't want to do any more until I have reviewed them and thought through any potential usability problems, glaring omissions, spelling mistakes, etc. Here are some of the issues I see with these screenshots.

I used icons from known sources, so they look good. I used MS Outlook as a guideline (i.e. words on the right instead of under, similar spacing, similar icons). One thing that is missing is the little etched line between sections that MS Outlook has. I know this is nit-picky, but I would prefer the little lines to be there.
Each of the main panels has a date and time entry field on it. The date field has a corresponding drop-down button with a popup calendar, but the time does not. Entering time is typically a structured type of thing (i.e., a masked edit, dropdown or something). Just the edit box does not convey much information to the user on how to enter the time.

I added this list here because many times when someone notifies me of a bug, I have an immediate idea of where to go and fix it. I would like to be able to record that and look it up when I get ready to fix the bug. After some thought, I am not sure why I need a list control here. I think I'll just make that an edit box.

I know how I work, and there are times when I need to record structured data without hassling with redesigning forms and all that crap. I just want to create a new field and drop in the value. Maybe later I'll redesign the form to accommodate the new field. I went with a simple list control with two open-ended fields (name and value). Seems simple enough, but what about recording dates and times, long text, images, etc.? I think this simple method is just too simple. I have a control I created for a project a few years ago that presented the user with a very simple auto-generated form, based on fields previously created. This control presented a real edit box, date picker, file picker, etc. instead of just simple edit boxes. There is a web-like link labeled "Customize" in the upper right-hand corner that allows for quickly adding new fields. I think I will use this control here.

We want to make it incredibly simple for developers to use this tool, and recording that a bug was fixed is a critical link in the chain. The "Bugs fixed" list shows the user a list of active bugs for the group/groups selected above, from which he/she can select bugs as being fixed.
The "Projects / Files involved" list shows the user a list of recently detected file changes and allows the user to check off which ones were involved in the fix. A change snapshot for those selected files will be automatically incorporated into the bug fix record.

A potential problem with these lists is that they are too small. I designed these forms for 800 x 600 and I wanted to make sure there was plenty of room left for comments; sizing the forms will increase the size of these lists, so that helps. I could shrink the comments area, but this would only buy me a small amount of space. I could add tabs to the property sheet for these individual lists, and this would make for plenty of room. I don't like this, though, because I can definitely see those panels never being used. These features are far more likely to be used when they are on the same page as the other critical information.

After some thought, I have what I think is an innovative idea. What if I added a gripper to the bottom right of each list (like is typically found in a status bar for non-maximized windows)? This gripper would be drawn on top of the list contents, but should be out of the way enough to be non-intrusive. The way I see this working is that a user could click on the gripper and resize the list to whatever size they want. When they are finished with the list, they click outside of the list and it snaps back to the original size.

Did I spell everything right? I know what you're thinking...who cares!? Well, I think it is important to remember that first impressions are critical to many aspects of life, and these prototypes will be the first impression our users get of the application we are building. Simple problems like spelling mistakes can be a nick in that impression.

On the "Fix a bug" panel, the label "Projects / files involved in fix" is really more information than is needed.
Let's just change that to "Projects / Files involved". The list controls on these panels need to have their columns sized appropriately. We need to fill in the various fields for our presentation. Leaving the fields blank does not communicate as much information as having them filled can. We want to fill the fields with relevant values that make sense to our users, not "dfjhadfjasdfjhsdafhj".

The code for these prototypes is a worthless piece of crap that I wouldn't put my name on without a gun pointed at my head. (In case you're wondering, I used Visual Studio .NET, MFC, some of my own classes and some fellow CPians' classes.) Is this the way prototypes should be written? Good question. Should prototypes be thrown together in bouts of uncontrolled typing without proper coding guidelines, or any concern for reuse? The answer depends entirely on how you plan to present the prototypes and how confident you are that the prototype will not become the end product. Ideally, prototypes will always be thrown away and dropped into the deepest pits of hell, never to be seen again. As we discussed earlier, this is often not what happens.

If you take the paper approach, it's probably fine to just throw the code together without finesse. If you take the DemoShield approach, you're going to have to refine things a little further (like filling out menus) and pick better icons, etc. If you're going to show the users a live application, walk with care. You're just going to have to judge your audience and determine the proper mix of code quality/schedule/risk that fits the bill. I do not like spending a tremendous amount of time on creating prototypes because of their volatility and the fact that they are to be thrown away.

We need to take our self-criticism and adjust our prototypes accordingly. We also need to go ahead and put together the other 3 prototype components. I am going to do this behind the scenes, though. Tomorrow, we are going to move on to our architectural design.
In the meantime, please suggest better names for this project and critique my prototypes and ideas.

We are about to have an important meeting with our users. We need to prepare for it by putting together an agenda and gathering any necessary materials we will need. We want to come off as well as possible so that we instill confidence in our abilities to get the job done, and done well. There is quite a bit of information on the net on meeting with users, and I have posted an article titled "The Standalone Programmer: Communicating with users" that you can read.
http://www.codeproject.com/Articles/4338/The-Life-of-a-Project-Issue-Tracking-Day-4?fid=15806&df=10000&mpp=50&sort=Position&spc=Relaxed&tid=524873
Created on 2018-08-20 16:34 by doerwalter, last changed 2018-09-11 16:34 by ethan.furman.

The __repr__ output of an enum class should use __qualname__ instead of __name__. The following example shows the problem:

    import enum

    class X:
        class I:
            pass

    class Y:
        class I(enum.Enum):
            pass

    print(X.I)
    print(Y.I)

This prints:

    <class '__main__.X.I'>
    <enum 'I'>

I would have expected it to print:

    <class '__main__.X.I'>
    <enum 'Y.I'>

or even, for maximum consistency:

    <class '__main__.X.I'>
    <enum '__main__.Y.I'>

__qualname__ should be used only together with __module__.

I agree that the repr of enum should be more consistent with the repr of class.

Hi, I would like to work on this issue. I can submit a PR by tomorrow (or maybe even later today). Serhiy, would it be ok to put '__module__' + '.' + __qualname__ here?

Serhiy, why should __qualname__ always be used together with __module__? I can't seem to find a valid reason; I've been through the PEP.

<made a PR due to inactivity>

I've created a separate PR that also changes the __str__s. After some thinking I've come to the conclusion that making the __str__s use the fully qualified name was a bad idea. I've closed my PR.

> Serhiy, why should __qualname__ always be used together with __module__?

Because what is the purpose of using __qualname__?

I think using more longer name in repr and/or str for *instances* of enum classes is not good idea. They are already verbose, and this will make them more verbose.

Serhiy said:
-----------
> I think using more longer name in repr and/or str for *instances* of
> enum classes is not good idea. They are already verbose, and this
> will make them more verbose.

I'm okay with verbose reprs, as debugging is the primary feature for those (at least for me). I'm not okay with __str__ being other than what it is now (but see below).

Serhiy also.
Since AddressFamily, and other stdlib converted constants, are created using `_convert`, I have no problem with that method also changing the __str__ to be `module.member` instead.

Okay, I might be changing my mind. In most cases I suspect the difference would be minimal, but when it isn't, it really isn't. Take this example from a pydoc test:

    class Color(enum.Enum)
     |  Color(value, names=None, *, module=None, qualname=None, type=None, start=1)
     |
     |  An enumeration.
     |
     |  Method resolution order:
     |      Color
     |      enum.Enum
     |      builtins.object
     |
     |  Data and other attributes defined here:
     |
    -|  blue = <test.test_enum.TestStdLib.Color.blue: 3>
    +|  blue = <Color.blue: 3>
     |
    -|  green = <test.test_enum.TestStdLib.Color.green: 2>
    +|  green = <Color.green: 2>
     |
    -|  red = <test.test_enum.TestStdLib.Color.red: 1>
    +|  red = <Color.red: 1>

It feels like the important information is completely lost in the noise.

Okay, I'm rejecting the __repr__ changes. Besides the potential verbosity, there should usually only be one of any particular Enum, __module__ and __qualname__ are both readily available when there are more than one (either on accident or by design), and users can modify their own __repr__s if they like.

I'm still thinking about the change in _convert_ to modify __str__ to use the module name instead of the class name... Here are my questions about that:

- Modify just __str__ or __repr__ as well?

      socket.AF_UNIX instead of AddressFamily.AF_UNIX
      <socket.AddressFamily.AF_UNIX: 1> instead of <AddressFamily.AF_UNIX: 1>

- Potential confusion that actual instances of Enum in the stdlib appear differently than "regular" Enums? Or perhaps call out those differences in the documentation as examples of customization?
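As an aside on that last point: a user who wants qualname-based reprs can already opt in with a small member-less base class. This is just a sketch of user-level customization (reusing the Y.I names from the example in this issue), not current stdlib behavior:

```python
import enum

class QualnameReprEnum(enum.Enum):
    """Member-less base class: subclasses get a __qualname__-based repr."""
    def __repr__(self):
        return "<enum {}.{}: {!r}>".format(
            type(self).__qualname__, self.name, self.value)

class Y:
    class I(QualnameReprEnum):
        A = 1

print(repr(Y.I.A))  # <enum Y.I.A: 1>
```

Because an Enum class without members may be subclassed, the base class acts like a mixin and each subclass picks up the repr automatically.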
https://bugs.python.org/issue34443
Ready to start learning how to develop Flash Facebook applications? You will be in a few pages. In this chapter, we will:

- Learn what the big deal is about Facebook, and why you should be interested in developing an application for it
- Get you set up with a web host, which you'll need for developing any online Facebook application
- Establish how much AS3 you need to know already, and what to do if you don't
- Take a quick look at the project that you'll be building throughout most of this book
- Find out how to deal with the debugging complications that arise when developing a "browser-only" application like this

So let's get on with it...

Seems like everyone's on Facebook these days. People are on it to socialize; businesses are on it to try to attract those people's attention. But the same is true for other older social networks such as LinkedIn, Friendster, and MySpace. Facebook's reach goes far beyond these; my small town's high street car park proudly displays a "Like Us On Facebook" sign. More and more Flash games and Rich Internet Applications (RIAs) are allowing users to log in using their Facebook account; it's a safe assumption that most users will have one. Companies are asking freelancers for deeper Facebook integration in their projects. It's practically a buzzword. But why the big fuss? Facebook benefits from the snowball effect: it's big, so it gets bigger. People sign up because most of their friends are already on it, which is generally not the case for, say, Twitter. Businesses sign up because they can reach so many people. It's a virtuous circle. There's a low barrier to entry, too; it's not just for techies, or even people who are "pretty good with computers;" even old people and luddites use Facebook. In February 2010, the technology blog ReadWriteWeb published an article called "Facebook Wants to Be Your One True Login," about Facebook's attempts to become the de facto login system throughout the Web.
Within minutes, the comments filled up with posts from confused Facebook users: Evidently, the ReadWriteWeb article had temporarily become the top search result for Facebook Login, leading hundreds of Facebook users, equating Google or Bing with the Internet, to believe that this blog post was actually a redesigned Facebook.com. The comment form, fittingly, had a Sign in with Facebook button that could be used instead of manually typing in a name and e-mail address to sign a comment, and of course, the Facebook users misinterpreted this as the new Log in button.

And yet... all of those people manage to use Facebook, keenly enough to throw a fit when it apparently became impossible to use. It's not just a site for geeks and students; it has serious mass market appeal. Even "The Social Network" (a movie based on the creation of Facebook) held this level of appeal: it opened at #1 and remained there for its second weekend.
As the aforementioned ReadWriteWeb article explained, Facebook has become a standard login across many websites. Why add yet another username/password combination to your browser's list (or your memory) if you can replace them all with one Facebook login? This isn't restricted to posting blog comments. UK TV broadcaster, Channel 4, allows viewers to access their entire TV lineup on demand, with no need to sign up for a specific Channel 4 account: Again, Facebook benefits from that snowball effect: as more sites enable a Facebook login, it becomes more of a standard, and yet more sites decide to add a Facebook login in order to keep up with everyone else. Besides login capabilities, many sites also allow users to share their content via Facebook. Another UK TV broadcaster, the BBC, lets users post links for their recommended TV programs straight to Facebook: Blogsâor, indeed, many websites with articlesâallow readers to Like a post, publishing this fact on Facebook and on the site itself: So half a billion people use the Facebook website every month, and at the same time, Facebook spreads further and further across the Internetâand even beyond. "Facebook Messages" stores user's entire conversational histories, across e-mail, SMS, chat, and Facebook itself; "Facebook Places" lets users check into a physical location, letting friends know that they're there. No other network has this reach. With all this expansion, it's difficult for a developer to keep up with the Facebook platform. And sometimes there are bugs, and undocumented areas, and periods of downtime, all of which can make development harder still. But the underlying systemâthe Graph API, introduced in April 2010âis fascinating. The previous API had become bloated and cumbersome over its four years; the Graph API feels well-designed with plenty of room for expansion. This book mainly focuses on the Graph API, as it is the foundation of modern Facebook development. 
You'll be introduced to it properly in Chapter 2, Welcome to the Graph. If you're not on Facebook already, sign up now (for free) at. You'll need an account in order to develop applications that use it. Spend some time getting used to it:

- Set up a personal profile.
- Post messages to your friends on their Walls.
- See what all the FarmVille fuss is about at.
- Check in to a location using Facebook Places.
- Log in to some blogs using your Facebook account.
- Share some YouTube videos on your own Wall from the YouTube website.
- "Like" something.

Go native!

If you've already got a publicly accessible web server or are signed up to a web host to which you can upload SWFs and HTML pages via FTP, skip to the How much AS3 knowledge is required? section.

I'll assume that you roughly know how the Internet works: when you type a URL into a web browser on your computer and hit Go, it retrieves all the pages and images it needs from another computer, the web server, and displays them. The exact methods it uses to find the web server and the protocols for how the information gets back to your computer aren't relevant here.

You could go out and buy a computer, install some server software, and hook it up to your Internet connection, and you'd have a functional web server. But you'd have to maintain it and keep it secure, and your ISP probably wouldn't be very happy about you sending all those pages and images to other people's browsers. A better option is to pay another company to take care of all of that for you: a web host.

In order to build an online SWF-based application or game that allows users to log in with their Facebook account (with the SWF being able to access their profile, list of friends, Wall, and so on), you will require control over a web page. Technically, you could probably come up with some hack that would allow you to get around this (perhaps by hosting everything on Google sites and MegaSWF), but in the long run it's not going to be worth it.
Splash out on a web host for the sake of learning; you will definitely need access to one if you do professional Facebook application development in the future. There are a huge number of web hosts to choose from, and an even bigger number of configurable options between them. How much disk space do you need? How much bandwidth per month? How much processing power? Some hosts will give you a server all to yourself, while others will put your files on the same computer as other customers. And of course, you have to wonder how good the customer service is and how reliable the company is at keeping their servers online. Throw in a few terms such as "cloud hosting" and it's enough to make your head spin.

All you need is a host that allows you to upload HTML files and SWFs; this book also assumes that you'll be able to use FTP to transfer files from your computer to the host, though this isn't strictly necessary. Want to just get started without wasting time comparing hosts? Go with Media Temple. The code in this book was all tested using a Media Temple Grid Service account, available at. It provides much more than what you'll need for completing the projects in this book, granted, and at $20/month it's not the cheapest option available, but the extra service and features will definitely come in handy as you build your own Facebook applications and games.

You'll need an HTML editor for editing web pages. FlashDevelop and Flash Builder both do good jobs at this; otherwise, try:

- Notepad++ for Windows (free):
- Text Mate for Mac:
- Komodo Edit for Mac and Windows (free):

And in order to transfer your files from your computer to your web host, you'll probably need an FTP client. Check out FileZilla (it's free and available for both Windows and Mac) at.
Documentation for this is available at, and your web host will almost certainly provide instructions on connecting to it via FTP (Media Temple's instructions can be found at).

Web hosts will generally assign you a very generic address, such as or. If you want to have a more condensed personal address such as, you'll need to pay for it. This is called a domain name; in this specific example, michaeljw.com is the domain name. Media Temple allows you to buy a domain name for $5/year at the point where you sign up to their web hosting package. If you go with another host, you may need to buy a domain name elsewhere; for this, you can use. You don't need to own a domain name to use this book, though. The generic addresses that your web host assigns you will be fine. Throughout the book, it'll be assumed that your website address (either generic or domain name) is.

Pick a web host, get your credit card out, and sign up for one of their packages.

Create a new directory called /test/ in the public path of your web host.

Create a new plain text file on your hard drive called index.html. (It's a good idea to create a new folder on your computer to store all your work, too.)

Open this file in your HTML editor.

Copy the HTML below into the file:

<html>
  <head>
    <title>Test</title>
  </head>
  <body>
    <h2>Hello!</h2>
  </body>
</html>

Hopefully, you know enough HTML to understand that this just writes Hello! in big letters.

Transfer index.html to the /test/ directory on your host. Again, you'll probably need to use an FTP client for this.

Open a web browser and type into the URL bar. Of course, you should replace with the path to your public directory, as given to you by your web host.

You should see Hello! appear in a glorious default font:

If not, check the documentation and support for your host.
All of the code in this book is written using classes inside AS files; there's no timeline code at all. You don't have to be an OOP guru to follow it, but you must be familiar with class-based coding. If you aren't, check out these two resources: How To Use A Document Class In FlashâA short tutorial to get you up to speed with using document classes in Flash CS3 and above:. AS3 101âA series of tutorials to walk you through the basics of AS3 development. In particular, read from Part 8 onwards, as these deal with OOP in AS3:. You should also know how to create and compile a SWF project, and be familiar enough with HTML to be able to embed a SWF in it. We'll use SWFObject for this purpose (this is the default embed method used by Flash CS5); if you're not sure what this means, familiarize yourself here:. All important AS3 classes and keywords used in this book will be briefly explained as they become relevant, so don't worry if you haven't memorized the LiveDocs yet. Speaking of LiveDocs, remember that you can always use them to look up unfamiliar code:. At the start of Chapter 2, Welcome to the Graph, you'll be given a Flash project that's just an empty user interfaceâit'll be up to you to build the backend using the lessons you learn from Chapters 2 through 6. This project is called Visualizer, and contains the class structure and all the UI for an application that can be used to represent all of the information stored on Facebook. You'll go far beyond simply allowing people to log in to the application and grabbing their username; there is so much more that can be achieved with AS3 and the Graph API, and you'll learn about all of it. Although the project is complex, the classes have been arranged in such a way that you need to modify only a small number of them, and these have little or no code in them to begin with. This means that you don't have to dive into mountains of code that you didn't write! 
You can focus entirely on learning about the Facebook side of Flash development.

Each of the Chapters from 2 to 6 has two associated ZIP files: one for the start of the project at the start of the chapter, and one for the end. This means you could skip through those chapters in any order, but you'll find it much easier to learn if you go through them in sequence. All project files are available in forms that are compatible with Flash CS3 and above, Flash Builder, and FlashDevelop; if you use a different Flash editor, you should find it easy to convert the project.

When you first compile the project, it'll look like this:

Nothing much to see. But before long, you'll have added features so that it can be used to explore Facebook, rendering different Pages and Photos:

By the end of Chapter 6, you'll be happily adding code to search for users by name, exploring their personal profiles, and posting images and links to their Wall:
From Chapter 3 onwards your SWF will need to be run from your server, through a web browser, in order to work. (Find out why in that chapter.) This makes debugging trickyâthere's no Output panel in the browser, so trace statements aren't automatically visible. The Visualizer contains a dialog feature which you can use to work around this. It can be created from any class that is in the display list. To do so, first import the DialogEvent class: import events.DialogEvent; Then, dispatch a DialogEvent of type DIALOG with an argument containing the text you wish to see output: dispatchEvent(new DialogEvent(DialogEvent.DIALOG, "Example")); It will look like this: Of course, that's useful only for the Visualizer project. What can you do when you build your own? There are a few tools that will help: De MonsterDebugger: Excellent tool for general AS3 debugging:. Flash Tracer for Firebug: This Firefox tool lets you see tracestatements from any SWF, as long as you have the debug version of Flash Player installed in your browser:. Vizzy Flash Tracer: Similar to Flash Tracer for Firebug, but also works for Internet Explorer and Chrome:. SOS max: Creates a socket server on your computer to which an AS3 project can send data; this data will then be logged and can then be viewed:. In Chapter 3, you'll learn how to run a JavaScript function in your web page from the AS3 in your SWF. One JavaScript function, alert(), creates a little window containing any String passed to it, like so: This is a quick and simple way to display one-off messages without using trace. When you run a SWF using Flash Player on your desktop, it loads and runs the SWF. Well, of course, why wouldn't it? When you run a SWF in a browser, this isn't always the case, though. Sometimes, browsers cache SWFs, meaning that they save a copy locally and then load that copyârather than the online versionâthe next time you request it. 
In normal browsing, this is a great ideaâit saves bandwidth and reduces loading times. You can lose huge amounts of time trying to figure out why your new code isn't working, only to finally realize that the new code isn't being run at all because you were seeing only a cached copy of your SWF. Different browsers require different solutions. It's usually possible to disable caching for one browsing session, and it's always possible to delete some or all of the cache. In Google Chrome, you can do this by clicking on [Spanner] | Tools | Clear Browsing Dataâ¦, selecting Empty the cache, and choosing an appropriate time period: You should easily be able to find the equivalent option for your browser by searching Google for «browser name» delete cache. Facebook's developers are always tweaking the platform. This can make it exciting to develop on because new features are being added all the time, but it can also make it very frustrating to develop on because old features can be removed, or their implementations changed; anything could be altered at any time. The new Platform API (the Graph API) is a strong foundation, and looks likely to be around for a whileâremember, the previous Platform API lasted four years. But it's modular, and individual pieces might change, or even be removed. It's possible then that parts of this book may be out-of-date by the time you read it, and some of the instructions might not give the same results with the current version of Facebook platform as they did when this book was written. If you're concerned about this, you can find out how to keep up-to-date with any platform changes in the last section of Chapter 8, Keeping Up With The Zuckerbergs. But for now, dive into Chapter 2, Welcome to the Graph and start developing!
https://www.packtpub.com/product/facebook-graph-api-development-with-flash/9781849690744
CC-MAIN-2020-40
refinedweb
3,436
67.18
Could be a race condition where the find element is executing before it is present on the page. Take a look at the wait timeout documentation.
CC-MAIN-2020-29
refinedweb
140
62.95
12.7: Function Pointer in C
Page ID: 29110

In C, like normal data pointers (int *, char *, etc.), we can have pointers to functions. Following is a simple example that shows declaration and function call using a function pointer:

#include <iostream>
using namespace std;

// A normal function with an int parameter and void return type
void fun(int a)
{
    cout << "Value of a is " << a << endl;
}

int main()
{
    // fun_ptr is a pointer to the function fun()
    void (*fun_ptr)(int) = &fun;

    // Invoking fun() using fun_ptr
    (*fun_ptr)(10);
    return 0;
}

This is the same as what we have already learned about pointers...just that we are now pointing at functions. The output would be what we expect:

Value of a is 10

Why do we need an extra bracket around function pointers like fun_ptr in the above example? If we remove the bracket, then the expression "void (*fun_ptr)(int)" becomes "void *fun_ptr(int)", which is a declaration of a function that returns a void pointer.

Following are some interesting facts about function pointers.

1) Unlike normal pointers, a function pointer points to code, not data. Typically a function pointer stores the start of executable code.

2) Unlike normal pointers, we do not allocate or de-allocate memory using function pointers.

3) A function's name can also be used to get the function's address. For example, in the below program, we have removed the address operator '&' in the assignment. We have also changed the function call by removing *; the program still works.

#include <iostream>
using namespace std;

void fun(int a)
{
    cout << "Value of a is " << a << endl;
}

int main()
{
    // & removed: a function's name by itself also yields its address
    void (*fun_ptr)(int) = fun;

    // * removed from the call
    fun_ptr(10);
    return 0;
}

The program works, and the output is the same:

Value of a is 10

4) Like normal pointers, we can have an array of function pointers. The example in point 5 shows the syntax for an array of function pointers.

5) A function pointer can be used in place of a switch case. For example, in the below program, each choice between 0 and 2 calls a different function to do a different task.
#include <iostream>
using namespace std;

void add(int a, int b)
{
    cout << "Addition is " << a+b << endl;
}

void subtract(int a, int b)
{
    cout << "Subtraction is " << a-b << endl;
}

void multiply(int a, int b)
{
    cout << "Multiplication is " << a*b << endl;
}

int main()
{
    // fun_ptr_arr is an array of function pointers
    void (*fun_ptr_arr[])(int, int) = {add, subtract, multiply};
    unsigned int ch, a = 15, b = 10;

    for (int count = 0; count < 3; count++)
    {
        ch = unsigned(count);
        (*fun_ptr_arr[ch])(a, b);
    }

    return 0;
}

This is a bit more complex; the key is that fun_ptr_arr is an array of function pointers, pointing at the 3 functions. In the for loop the value of ch is 0, 1 and 2, thereby calling the addition function, fun_ptr_arr[0], the subtraction function, fun_ptr_arr[1], and the multiplication function, fun_ptr_arr[2].

Also, note the assignment ch = unsigned(count): this takes the integer count and forces it to be an unsigned integer, then assigns it to ch. The variables a and b are also unsigned integers, but when these values are copied (call by value - remember) to the functions, notice that the functions believe the arguments are integers.

Addition is 25
Subtraction is 5
Multiplication is 150
https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/12%3A_Pointers/12.07%3A_Function_Pointer_in_C
CC-MAIN-2022-40
refinedweb
469
59.13
Have you ever solved a real-life maze? The approach that most of us take while solving a maze is that we follow a path until we reach a dead end, and then backtrack and retrace our steps to find another possible path.

This is exactly the analogy of Depth First Search (DFS). It's a popular graph traversal algorithm that starts at the root node, and travels as far as it can down a given branch, then backtracks until it finds another unexplored path to explore. This approach is continued until all the nodes of the graph have been visited.

In today’s tutorial, we are going to discover a DFS pattern that will be used to solve some of the important tree and graph questions for your next Tech Giant Interview! We will solve some Medium and Hard Leetcode problems using the same common technique. So, let’s get started, shall we?

Implementation

Since DFS has a recursive nature, it can be implemented using a stack.

DFS Magic Spell:
- Push a node to the stack
- Pop the node
- Retrieve unvisited neighbors of the removed node, push them to stack
- Repeat steps 1, 2, and 3 as long as the stack is not empty

Graph Traversals

In general, there are 3 basic DFS traversals for binary trees:
- Pre Order: Root, Left, Right OR Root, Right, Left
- Post Order: Left, Right, Root OR Right, Left, Root
- In order: Left, Root, Right OR Right, Root, Left

144. Binary Tree Preorder Traversal (Difficulty: Medium)

To solve this question all we need to do is simply recall our magic spell. Let's understand the simulation really well since this is the basic template we will be using to solve the rest of the problems.

At first, we push the root node into the stack. While the stack is not empty, we pop it, and push its right and left child into the stack. As we pop the root node, we immediately put it into our result list. Thus, the first element in the result list is the root (hence the name, Pre-order).
The next element to be popped from the stack will be the top element of the stack right now: the left child of the root node. The process is continued in a similar manner until the whole graph has been traversed and all the node values of the binary tree enter into the resulting list.

145. Binary Tree Postorder Traversal (Difficulty: Hard)

Pre-order traversal is root-left-right, and post-order is right-left-root. This means post order traversal is exactly the reverse of pre-order traversal.

So one solution that might come to mind right now is simply reversing the resulting array of pre-order traversal. But think about it - that would cost O(n) time complexity to reverse it.

A smarter solution is to copy and paste the exact code of the pre-order traversal, but put the result at the top of the linked list (index 0) at each iteration. It takes constant time to add an element to the head of a linked list. Cool, right?

94. Binary Tree Inorder Traversal (Difficulty: Medium)

Our approach to solve this problem is similar to the previous problems. But here, we will visit everything on the left side of a node, print the node, and then visit everything on the right side of the node.

323. Number of Connected Components in an Undirected Graph (Difficulty: Medium)

Our approach here is to create a variable called ans that stores the number of connected components. First, we will initialize all vertices as unvisited. We will start from a node, and while carrying out DFS on that node (of course, using our magic spell), it will mark all the nodes connected to it as visited. The value of ans will be incremented by 1.
import java.util.ArrayList;
import java.util.List;
import java.util.Stack;

public class NumberOfConnectedComponents {
    public static void main(String[] args) {
        int[][] edge = {{0,1}, {1,2}, {3,4}};
        int n = 5;
        System.out.println(connectedcount(n, edge));
    }

    public static int connectedcount(int n, int[][] edges) {
        boolean[] visited = new boolean[n];
        List[] adj = new List[n];
        for (int i = 0; i < adj.length; i++) {
            adj[i] = new ArrayList<Integer>();
        }

        // create the adjacency list
        for (int[] e : edges) {
            int from = e[0];
            int to = e[1];
            adj[from].add(to);
            adj[to].add(from);
        }

        Stack<Integer> stack = new Stack<>();
        int ans = 0; // ans = count of how many times DFS is carried out

        // this for loop goes through the entire graph
        for (int i = 0; i < n; i++) {
            // if a node is not visited
            if (!visited[i]) {
                ans++;
                // push it in the stack
                stack.push(i);
                while (!stack.empty()) {
                    int current = stack.peek();
                    stack.pop(); // pop the node
                    visited[current] = true; // mark the node as visited
                    List<Integer> list1 = adj[current];
                    // push the unvisited neighbors of the current node into the stack
                    for (int neighbours : list1) {
                        if (!visited[neighbours]) {
                            stack.push(neighbours);
                        }
                    }
                }
            }
        }
        return ans;
    }
}

200. Number of Islands (Difficulty: Medium)

This falls under a general category of problems where we have to find the number of connected components, but the details are a bit tweaked. Instinctually, you might think that once we find a "1" we initiate a new component. We do a DFS from that cell in all 4 directions (up, down, right, left) and reach all 1's connected to that cell. All these 1's connected to each other belong to the same group, and thus, our value of count is incremented by 1. We mark these cells of 1's as visited and move on to count other connected components.

547. Friend Circles (Difficulty: Medium)

This also follows the same concept as finding the number of connected components. In this question, we have an NxN matrix but only N friends in total.
Edges are directly given via the cells so we have to traverse a row to get the neighbors for a specific "friend". Notice that here, we use the same stack pattern as our previous problems. That's all for today! I hope this has helped you understand DFS better and that you have enjoyed the tutorial. Please recommend this post if you think it may be useful for someone else!
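Since the Number of Islands approach above is described only in prose, here is a sketch of that grid DFS using the same explicit-stack "magic spell". It is written in Python for brevity rather than the Java used above, and the helper name is my own:

```python
def num_islands(grid):
    """Count connected groups of '1' cells (4-directional) with an explicit DFS stack."""
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    visited = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == "1" and not visited[r][c]:
                count += 1                # a new island starts at this cell
                stack = [(r, c)]
                while stack:
                    cr, cc = stack.pop()
                    if visited[cr][cc]:
                        continue
                    visited[cr][cc] = True
                    # push the four neighbours: up, down, left, right
                    for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == "1" and not visited[nr][nc]):
                            stack.append((nr, nc))
    return count

print(num_islands(["11000",
                   "11000",
                   "00100",
                   "00011"]))  # 3
```

The outer double loop plays the same role as the loop over vertices in the connected-components code; the grid cells are the vertices, and adjacency is implicit in the four neighbour offsets.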
https://www.freecodecamp.org/news/dfs-for-your-next-tech-giant-interview/
How to choose a phone number

Following code shows a minimal example to let you choose from your contacts in the phonebook. It uses the contacts module of Python.

Code

# import modules
import appuifw, contacts

# Open contacts database
db = contacts.open()

# Create two empty lists for names and corresponding numbers
names = []
numbers = []

# Append contact names and numbers to lists
for i in db:
    names.append(db[i].title)          # Get contact title
    num = db[i].find('mobile_number')  # Find number for the contact
    if num:
        numbers.append(num[0].value)   # first mobile
    else:
        numbers.append(None)

# Select the contact from selection list
i = appuifw.selection_list(names)
t = numbers[i]
appuifw.note(u"Selected Phone Number is:" + t, 'conf')  # gives the number as output

Postconditions

Below are the screenshots of the above script.
http://developer.nokia.com/community/wiki/How_to_choose_a_phone_number
An SEO-friendly URL supplies request parameters as a part of the URI, such as /edit-game/zombi-2014 rather than /edit-game?gameId=zombi-2014. This is because Google does not index any text that comes after “?”. By default, Struts 2 has SEO-friendly URLs disabled, but this can be enabled rather easily. In this tutorial, I will show you how to do that in an application using the convention plugin and annotation.

Enable Parameter in URI

Open your struts.xml and add these constants:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE struts PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 2.3//EN"
    "">
<struts>
    <constant name="struts.mapper.alwaysSelectFullNamespace" value="false"/>
    <constant name="struts.enable.SlashesInActionNames" value="true"/>
    <constant name="struts.patternMatcher" value="namedVariable"/>
</struts>

Save changes.

Develop the Action Class

In your action class, define a property where the parameter will be saved. For example, gameId below.

public class GameAction {
    private String gameId;
    //getGameId, setGameId etc.
    //...
}
https://mobiarch.wordpress.com/2014/08/13/seo-friendly-url-in-struts-2/
CC-MAIN-2018-13
refinedweb
261
53.98
The following HP C++ examples demonstrate how symbols are resolved when you link with compiler-generated UNIX-style weak and group symbols. The examples apply a user-written function template called myswap . Note that you can also use class templates, which are implemented in a similar manner. If you are an experienced C++ programmer, you will also recognize that there is a "swap" function in the HP C++ standard library, which you should use instead of writing your own function. In the examples, the compiler combines code sections (and other required data) into a group, giving it a unique group name derived from the template instantiation. The linker includes the first occurrence of this group in the image. All UNIX-style weak definitions obtained from that group are now defined by the module providing this group. All subsequent groups with the same name do not contribute code or data; that is, the linker ignores all subsequent sections. The UNIX-style weak definitions from these ignored sections become references, which are resolved by the definition from the designated instance (that is, first-encountered instance) of the group. In this manner, code (and data) from templates are included only once for the image. Example 2-3 shows UNIX-Style weak symbols and group symbols. 
// file: my_asc.cxx
template <typename T>                  (1)
void myswap (T &v1, T &v2) {           (2)
    T tmp;
    tmp = v1;
    v1 = v2;
    v2 = tmp;
}

void ascending (int &v1, int &v2) {
    if (v2<v1)
        myswap (v1,v2);                (3)
}

// file: my_desc.cxx
template <typename T>                  (1)
void myswap (T &v1, T &v2) {           (2)
    T tmp;
    tmp = v1;
    v1 = v2;
    v2 = tmp;
}

void descending (int &v1, int &v2) {
    if (v1<v2)
        myswap (v1,v2);                (3)
}

// file: my_main.cxx
#include <cstdlib>
#include <iostream>
using namespace std;

static int m = 47;
static int n = 11;

template <typename T>
void myswap (T &v1, T &v2);

extern void ascending (int &v1, int &v2);
extern void descending (int &v1, int &v2);

int main (void) {
    cout << "original: " << m << " " << n << endl;
    myswap (m,n);                      (4)
    cout << "swapped: " << m << " " << n << endl;
    ascending (m,n);
    cout << "ascending: " << m << " " << n << endl;
    descending (m,n);
    cout << "descending: " << m << " " << n << endl;
    return EXIT_SUCCESS;
}

Example 2-4 shows the compile and link commands.

$ CXX/OPTIMIZE=NOINLINE/STANDARD=STRICT_ANSI MY_MAIN    (5)
$ CXX/OPTIMIZE=NOINLINE/STANDARD=STRICT_ANSI MY_ASC     (6)
$ CXX/OPTIMIZE=NOINLINE/STANDARD=STRICT_ANSI MY_DESC    (6)
$ CXXLINK MY_MAIN, MY_ASC, MY_DESC                      (7)

The linker includes the first occurrence of this group in the image. All UNIX-style weak definitions obtained from that group are now defined by the module providing this group. All subsequent groups with the same name do not contribute code or data; that is, the subsequent sections are ignored. The UNIX-style weak definitions from these ignored sections become references, which are resolved by the definition from the designated instance (first-encountered) of the group. In this manner, code (and data) from templates are included only once for the image.

To create a VMS shareable image, you must define the interface in a symbol vector at link time with a SYMBOL_VECTOR option. HP C++ generated objects contain mangled symbols and may contain compiler-generated data, which belongs to a public interface.
In the SYMBOL_VECTOR option, the interface is described with the names from the object modules. Because they contain mangled names, such a relationship may not be obvious from the source code and the symbols as seen in an object module.

If you do not export all parts of an interface, code that is intended to update one data cell may be duplicated in the executable and the shareable image along with the data cell. That is, data can become inconsistent at run-time, producing a severe error condition. This error condition cannot be detected at link time nor at image activation time.

Conversely, if you export all symbols from an object module, you may export the same symbol which is already public from other shareable images. A conflict arises when an application is linked with two shareable images that export the same symbol name. In this case, the linker flags the multiple definitions with a MULDEF warning that should not be ignored. This type of error most often results when using templates defined in the C++ standard library but instantiated by the user with common data types.

Therefore, HP recommends that you only create a shareable image when you know exactly what belongs to the public interface. In all other cases, use object libraries and let applications link against these libraries.

The HP C++ run-time library contains pre-instantiated templates. The public interfaces for these are known and therefore, the HP C++ run-time library ships as a shareable image. The universal symbols from the HP C++ run-time library and the group symbols take precedence over user instantiated templates with the same data types. As with other shareable images, this design is upwardly compatible and does not require you to recompile or relink to make use of the improved HP C++ run-time library.
2.7 Understanding and Fixing DIFTYPE and RELODIFTYPE Linker Conditions

On OpenVMS I64 systems, if a module defines a variable as data (OBJECT), it must be referenced as data by all other modules. If a module defines a variable as a procedure (FUNC), it must be referenced as a procedure by all other modules.

When data is referenced as a procedure, the linker displays the following informational message:

%ILINK-I-DIFTYPE, symbol symbol-name of type OBJECT cannot be referenced as type FUNC

When a procedure is referenced as data, the following informational message is displayed:

%ILINK-I-DIFTYPE, symbol symbol-name of type FUNC cannot be referenced as type OBJECT

Type checking is performed by the linker on OpenVMS I64 because the linker must create function descriptors. The equivalent procedure descriptor was created by the compiler on OpenVMS Alpha, so this informational message is new for the linker on OpenVMS I64. This message is informational only and does not require user action. However, if the linker detects data referenced as a procedure, it might issue the following warning message in addition to the DIFTYPE message:

%ILINK-W-RELODIFTYPE, relocation requests the linker to build a function descriptor for a non-function type of symbol

The following example of two modules demonstrates how to fix these conditions:

TYPE1.C

#include <stdio>

int status;          // Defines status as data.
extern int sub();

main ()
{
    printf ("Hello World\n");
    sub();
}

TYPE2.C

extern int status (int x);  // Refers to status as a procedure.
sub ()
{
    int x;
    x = (int)status;
    return status (x);
}

When these modules are linked, you get an informational message and a warning message, as follows:

$ CC/EXTERN_MODEL=STRICT_REFDEF TYPE1
$ CC/EXTERN_MODEL=STRICT_REFDEF TYPE2
$ LINK TYPE1,TYPE2
%ILINK-I-DIFTYPE, symbol STATUS of type OBJECT cannot be referenced as type FUNC
        module: TYPE2
        file: NODE1$:[SMITH]TYPE2.OBJ;6
%ILINK-W-RELODIFTYPE, relocation requests the linker to build a function descriptor for a non-function type of symbol
        symbol: STATUS
        relocation section: .rela$CODE$ (section header entry: 18)
        relocation type: RELA$K_R_IA_64_LTOFF_FPTR22
        relocation entry: 0
        module: TYPE2
        file: NODE1$:[SMITH]TYPE2.OBJ;6

To correct the problem and avoid the informational and warning messages, correct TYPE1.C to define status as a procedure:

TYPE1.C

#include <stdio>

int status (int x);  // Defines status as a procedure.
extern int sub();

main ()
{
    printf ("Hello World\n");
    sub();
}

int status (int x)
{
    return 1;
}

$ CC/EXTERN_MODEL=STRICT_REFDEF TYPE1
$ CC/EXTERN_MODEL=STRICT_REFDEF TYPE2
$ LINK TYPE1,TYPE2

This chapter describes how the linker creates an image on OpenVMS I64 systems. The linker creates images from the input files you specify in a link operation. After creating segments and filling them with binary code and data, the linker writes the image to an image file. Section 3.4.2 describes this process.

3.2 Creating Sections

Language processors create sections and define their attributes. The number of sections created by a language processor and the attributes of these sections are dependent upon language semantics. For example, some programming languages implement global variables as separate sections with a particular set of attributes. Programmers working in high-level languages typically have little direct control over the sections created by the language processor. Medium- and low-level languages provide programmers with more control over section creation.
For more information about the section creation features of a particular programming language, see the language processor documentation. The I64 linker also creates sections that are combined with the compiler sections to create segments (see Section 3.2.1).

Section Attributes

The language processors define the attributes of the sections they create and communicate these attributes to the linker in the section header table. Section attributes define various characteristics of the area of memory described by the section, such as the following:

Section attributes are Boolean values, that is, they are either on or off. Table 3-2 lists all section attributes with the keyword you can use to set or clear the attribute, using the PSECT_ATTR= option. (For more information about using the PSECT_ATTR= option, see Section 3.3.7.)

For example, to specify that a section should have write access, specify the writability attribute as WRT. To turn off an attribute, specify the negative keyword. Some attributes have separate keywords that express the negation of the attribute. For example, to turn off the global attribute (GBL), you must specify the local attribute (LCL).

Note that the alignment of a section is not strictly considered an attribute of the section. However, because you can set it using the PSECT_ATTR= option, it is included in the table.

To be compatible with Alpha and VAX linkers, the I64 linker retains the user interfaces as much as possible. This information includes the traditional OpenVMS section attribute names (WRT, EXE, and so on) that are used in the PSECT_ATTR= option. However, on I64, the underlying object conforms to the ELF standard. When processing the object module, the linker maps the ELF terms to the OpenVMS terms. For compatibility, only OpenVMS terms are written to the map file.
In contrast, other tools, such as the ANALYZE/OBJECT utility, do not use OpenVMS terms; they simply format the contents of the object file and therefore display the ELF terms. Table 3-1 maps the traditional OpenVMS section attribute names to the ELF names and vice versa. Table 3-2 lists all section attributes with the keyword you can use to set or clear the attribute, using the PSECT_ATTR= option.
http://h71000.www7.hp.com/doc/83final/4548/4548pro_006.html
Just sharing stuff… Thanks to a good friend and colleague**, last week, I learned about the existence of the Metro Manila Development Authority Live Traffic Monitoring System. Below is a video clip of the traffic data generated by the MMDA platform. Each of the circle represents an MMDA metrobase or station where traffic along a specific road segment is assessed (every 15 minutes). Traffic flow is only classified into three states: Light, Moderate, or Heavy. The data cover the period when there was a (INC) mass gathering along Megamall and Shaw Boulevard area—the visualization has actually captured this. In the viz, SM Megamall and Ortigas Avenue are mostly on a “heavy” traffic situation—for both the north and south bounds. For the southbound traffic, road segments before Shaw Boulevard sustained heavy traffic flows; after which, it was medium to light all the way. For the northbound traffic, on the other hand, everything before SM Megamall was jammed and all bases after were in either medium or light traffic condition. This clearly illustrates where the bottleneck was. Details behind the visualization are discussed in the next section. The Metro Manila Development Authority Live Traffic Monitoring System gives traffic flow reports at intersections around Metro Manila, and is updated every 10 to 15 minutes by the MMDA MetroBase. As I understand, MMDA officers from post manually key in the status. The data do not directly provide traffic velocity, but instead, they indicate whether the traffic flows along road segments are: Light, Moderate, or Heavy (LMH). Aside from the LMH statuses, the platform also gives alerts—whether there’s an accident, a road construction, or a rally happening, among others. When I saw the site, I had the urge to revisit web scraping (again). I thought that this was a good personal project to “refresh” ones web scraping skills, and data viz skills as well. For web scraping, I used the python packages urllib2 and BeautifulSoup. 
The latter helps to find tags in whatever html page one is looking at, allowing the extraction of only the necessary information on a page. For page downloading, I used urllib2 – a library for getting and opening URLs. If you want to mine data from a website with URL URL, just execute:

import urllib2
mypage = urllib2.Request(URL)
thepage = urllib2.urlopen(mypage).read()

Then use the BeautifulSoup package to “read” the page.

from bs4 import BeautifulSoup
soup = BeautifulSoup(thepage)

Since the update is done every fifteen minutes, it is necessary to put a pause in the system at every iteration:

import time

def sleep():
    print "Sleeping for 15 minutes"
    time.sleep(60*15)
    return

All scraped (“streaming”) information are stored in a database (sqlite). For each area (k, i.e. Edsa, C5, etc.), a tower (towerName) is assigned (this is probably the intersection) that gives both the northbound (nstat) and southbound (sstat) statuses along a specific road segment.

entry = MMDATraffic()
entry.LINE = k
entry.TOWER_ID = towerID
entry.TOWER_NAME = towerName
entry.NB_STATUS = nstat
entry.SB_STATUS = sstat
entry.UPDATE_INFO = dt
entry.DATETIME = mytime
dbsession.add(entry)

The final code is less than 100 lines!

My colleague Ed is currently playing with SVG scripting to dynamically visualize the scraped data. While he is finishing the SVG rendering, I used a visualization software to have a peek at the data.

Status: 1 – Light, 2 – Moderate, 3 – Heavy.

Visualization Snapshot. The orange plots are for the north-bound traffic; the blue plots, on the other hand, are for the south-bound. The snapshot is at 5PM of August 29, 2015. Note that there is a rally going on at this hour along EDSA. Thus, one may consider this dynamics an anomaly (compared to regular Fridays). The subplots on top show the number of Light (1), Moderate (2), and Heavy (3) traffic flows along the given lines (EDSA, Ortigas Ave., C5, etc.). Each of the circles at the bottom plots are the “sensing towers”.
The color (light to dark) and the size (small to big) indicate the status along the specific junction. Thanks to Ed David for letting me use his code on databasing, particularly, creating the sql engine using sqlalchemy. ** Follow Reina Reyes’s blog for updates on their work on the MMDA data and Manila traffic.
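For readers who want to see the pieces wired together, here is a minimal sketch of one scrape-and-store cycle. It is an illustration under stated assumptions, not the author's actual code: the markup, tower names, and table schema are made up, it parses with a regex instead of BeautifulSoup so it stays dependency-free, and it uses an in-memory sqlite table in place of the sqlalchemy session.

```python
import re
import sqlite3

def parse_statuses(html):
    """Pull (tower, northbound, southbound) tuples out of a page snapshot.

    The <tower .../> markup here is hypothetical; the real MMDA pages
    would need BeautifulSoup and the site's actual element names.
    """
    pattern = re.compile(r'<tower name="([^"]+)" nb="([LMH])" sb="([LMH])"/>')
    return pattern.findall(html)

def store(conn, rows):
    """Persist one batch of scraped rows, mirroring the dbsession.add() step."""
    conn.executemany("INSERT INTO traffic(tower, nb, sb) VALUES (?, ?, ?)", rows)
    conn.commit()

# One polling cycle against a canned snapshot; a real run would fetch the
# page over HTTP, then call time.sleep(60 * 15) before the next iteration.
snapshot = ('<tower name="Ortigas Ave." nb="H" sb="M"/>'
            '<tower name="Shaw Blvd." nb="M" sb="L"/>')
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE traffic(tower TEXT, nb TEXT, sb TEXT)")
store(conn, parse_statuses(snapshot))
print(conn.execute("SELECT COUNT(*) FROM traffic").fetchone()[0])  # 2
```

The point of the design is the same as in the post: each 15-minute cycle appends rows rather than overwriting, so the database accumulates a time series that can be visualized later.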
https://erikafille.ph/2015/09/01/web-scraping-with-urllib2-and-beautifulsoup/
Activates paging or swapping to a designated block device.

Standard C Library (libc.a)

#include <sys/vminfo.h>

int swapon (PathName)
char *PathName;

The swapon subroutine makes the designated block device available to the system for allocation for paging and swapping. The specified block device must be a logical volume on a disk device. The paging space size is determined from the current size of the logical volume.

If an error occurs, the errno global variable is set to indicate the error. Other errors are from calls to the device driver's open subroutine or ioctl subroutine.

This subroutine is part of Base Operating System (BOS) Runtime.

The swapoff subroutine, swapqry subroutine.
The swapoff command, swapon command.
The Subroutines Overview in AIX 5L Version 5.1 General Programming Concepts: Writing and Debugging Programs.
http://ps-2.kev009.com/wisclibrary/aix51/usr/share/man/info/en_US/a_doc_lib/libs/basetrf2/swapon.htm
Scott Weinstein on .Net, Linq, PowerShell, WPF, and WCF

I have an idea that it may be possible to predict build success/failure based on commit data.

Why Scala? It’s a JVM language, has lots of powerful type features, and it has a linear algebra library which I’ll need later.

Neither maven nor the scala build tool (sbt) is completely satisfactory. This maven archetype (what .Net folks would call a VS project template)

mvn archetype:generate `
  -DarchetypeGroupId=org.scala-tools.archetypes `
  -DarchetypeArtifactId=scala-archetype-simple `
  -DremoteRepositories= `
  -DgroupId=org.SW -DartifactId=BuildBreakPredictor

gets you started right away with “hello world” code, unit tests demonstrating a number of different testing approaches, and even a ready made .gitignore file - nice!

But the Scala version is behind at v2.8, and more seriously, compiling and testing was painfully slow. So much that a rapid edit – test – edit cycle was not practical. So Lab49 colleague Steve Levine tells me that I can either adjust my pom to use fsc – the fast scala compiler, or use sbt.

Sbt has some nice features, and some real limitations.

Side note: If a language has a fast compiler, why keep the slow compiler around? Even worse, why make it the default?

I chose sbt, for the faster development speed it offers.

Scala APIs really like to use punctuation – sometimes this works well, as in the following

map1 |+| map2

The |+| defines a merge operator which does addition on the values of the maps. It’s less useful here:

http(baseUrl / url >- parseJson[BuildStatus])

Sure, you can probably guess what >- does from the context, but how about >~ or >+?

I’m still learning, so not much to say just yet. However, case classes are quite useful, implicits scare me, and type constructors have lots of power.

A number of projects, such as and are split between github and google code – github for the src, and google code for the docs. Not sure I understand the motivation here.
:-) it is big in all sorts of ways:

The series covers

<int>>>();
var oneBeat = Observable.Return<int>(1, _testSched).Timestamp(_testSched);
var sWindow = oneBeat.ToSlidingWindow(_oneSecond, _oneSecond, _testSched);
sWindow.Subscribe(slw => actual.Add(slw));
_testSched.RunTo(_testSched.FromTimeSpan(TimeSpan.FromSeconds(3)));

Assert.Equal(2, actual.Count);
Assert.Equal(1, actual[0].Added.Count());
Assert.Equal(1, actual[0].Current.Count());
Assert.Equal(0, actual[0].Removed.Count());
Assert.Equal(0, actual[1].Added.Count());
Assert.Equal(0, actual[1].Current.Count());
Assert.Equal(1, actual[1].Removed.Count());
}

Code samples updated at

Also - Jeffrey van Gogh promises more to come on #c9

I gave a presentation at today’s SQL Saturday in NY on replacing SSIS with PowerShell. You can view the presentation, or see below for the two second summary:

The code is hosted at

Currently it has the following capabilities

Contact me if you’d like to contribute or collaborate on this.

The goal – a log of internet speeds, taken say every 15 min. Recording ping time, upload speed, download speed, and local LAN usage.

The solution

The code samples and PowerPoint deck from my presentation on the RX to the New York ALT.NET group are available (and updated) on MSDN.

Code samples:

And the slide deck...
The Reactive Extensions for .Net offers plenty of ways to create IObservables.

Some primitives

IObservable<int> obs = Observable.Empty<int>();
IObservable<int> obs = Observable.Return(0);
IObservable<int> obs = Observable.Throw<int>(new Exception());

Simple streams

IObservable<long> obs = Observable.Interval(new TimeSpan(0, 0, 1));
IObservable<long> obs = Observable.Timer(DateTimeOffset.Now.AddHours(1)); // Plus 7 more overloads
IObservable<int> obs = Observable.Repeat(1); // Plus 7 more overloads
IObservable<int> obs = Observable.Range(0, 1);

From async data

// From an Action or Func
Observable.Start(() => 1);

// From Task
Task.Factory.StartNew(...).ToObservable();

// From AsyncPattern
// typical use case is IO or Web service calls
Func<int,int,double> sampleFunc = (a,b) => 1d;
Func<int, int, IObservable<double>> funcObs =
    Observable.FromAsyncPattern<int, int, double>(sampleFunc.BeginInvoke, sampleFunc.EndInvoke);
IObservable<double> obs = funcObs(1, 0);

From Events

public event EventHandler<EventArgs> AnEvent;

IObservable<IEvent<EventArgs>> fromEventObs =
    Observable.FromEvent<EventArgs>(h => this.AnEvent += h, h => this.AnEvent -= h);

IEnumerable<int> ie = new int[] {};
observable = ie.ToObservable();

By Generate()

There are 20 overloads to Generate. See some prior examples here.

By Create()

This creates a cold stream.

IObservable<int> observable = Observable.Create<int>(o =>
{
    o.OnNext(1);
    o.OnNext(2);
    o.OnCompleted();
    return () => { };
});

To make a hot stream via Create():

List<IObserver<int>> _subscribed = new List<IObserver<int>>();

private void CreateHot()
{
    observable = Observable.Create<int>(o =>
    {
        _subscribed.Add(o);
        return () => _subscribed.Remove(o);
    });
}

private void onNext(int val)
{
    foreach (var o in _subscribed)
    {
        o.OnNext(val);
    }
}

But rather than using Create, a subject provides a cleaner (thread safe and tested) way of doing the above:

var subj = new Subject<int>();
observable = subj.Hide();
subj.OnNext(1);
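The subject recommended at the end is, at its core, an observer list with fan-out. A bare-bones analogue, sketched in Python and deliberately ignoring Rx's scheduling, thread safety, and completion grammar, might look like:

```python
class Subject:
    """Minimal hot observable: pushes each value to current subscribers."""

    def __init__(self):
        self._observers = []

    def subscribe(self, on_next):
        self._observers.append(on_next)
        # Return an unsubscribe callable, playing the role of IDisposable.
        return lambda: self._observers.remove(on_next)

    def on_next(self, value):
        # Iterate over a copy so unsubscribing mid-notification is safe.
        for obs in list(self._observers):
            obs(value)

received = []
subj = Subject()
unsubscribe = subj.subscribe(received.append)
subj.on_next(1)
subj.on_next(2)
unsubscribe()
subj.on_next(3)   # no longer delivered
print(received)   # [1, 2]
```

This is essentially the hand-rolled CreateHot/onNext pair from the C# snippet above, which is exactly why the post suggests using a real Subject instead: the library version already handles the thread-safety and grammar details this sketch skips.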
http://weblogs.asp.net/sweinstein/
Having promises and asynchronous code. Enter PromiseKit. While PromiseKit isn’t as intuitive as $q, it is every bit as capable and powerful, but with its own quirks.

The Basics

Promises intend to solve the issues that arise when invoking asynchronous code, such as calling an API on a server. The http URL is invoked and, sometime later, a response is returned, whether success or failure, or the call hangs. When invoking asynchronous code without using promises, a callback function is passed along that will eventually be invoked with the response. This is fairly straightforward until several asynchronous calls need to be made sequentially or in parallel. Clean separation of success and error code is difficult, as are other common tasks such as timing out and waiting for multiple calls to complete before proceeding. Promises solve these common issues and provide useful shortcuts for others.

The basics of handling a promise are providing success and error handlers, which are invoked if the asynchronous code succeeds, i.e. is fulfilled, or errors, i.e. is rejected. A promise gets resolved when it is either fulfilled or rejected. A promise that has not resolved is pending. Optionally, a finally block can be provided which executes regardless of whether the promise was fulfilled or rejected.

Wrapping Asynchronous APIs

Libraries that don’t utilize promises can be made to do so by wrapping the asynchronous code with a promise. Simply create a new promise that is in the pending state with the correct return type, and resolve it based on the outcome of the asynchronous code. Allocating a new promise returns a tuple containing both a pending promise and a resolver. The new promise should be returned to the code invoking the asynchronous code, and the resolver should be used to eventually resolve it. A promise can only be resolved once and, once resolved, remains so.
    import UIKit
    import Alamofire
    import PromiseKit

    struct User: Codable {
        let id: Int
        let name: String
        let username: String
        let email: String
    }

    struct Post: Codable {
        let id: Int
        let userId: Int
        let title: String
        let body: String
    }

    enum ApplicationError: Error {
        case noUsers
        case usersCouldNotBeParsed
        case postsCouldNotBeParsed
    }

    func getAllUsers() -> Promise<[User]> {
        let (promise, resolver) = Promise<[User]>.pending()
        Alamofire.request(usersUrlString).responseJSON { response in
            if let error = response.error {
                resolver.reject(error)
            }
            if let data = response.data {
                do {
                    let users = try self.decoder.decode([User].self, from: data)
                    resolver.fulfill(users)
                } catch {
                    resolver.reject(ApplicationError.usersCouldNotBeParsed)
                }
            } else {
                resolver.reject(ApplicationError.noUsers)
            }
        }
        return promise
    }

Then vs. Done

The code receiving the newly created promise provides the handlers for the fulfilled and rejected cases. The first stumbling block for me was whether to use then or done as my fulfilled promise handler. Using the incorrect function results in some non-intuitive compiler errors about type mismatches. then is to be used when your promise handler returns a promise itself, and done when it doesn't.
Essentially, use then when chaining promises and done when the chain ends.

Example of using done:

    getAllUsers()
        .done { users -> Void in
            print("Promise users: \(users)")
        }.catch { error in
            print("Something went wrong: \(error)")
        }

Example of using then and done:

    func getPosts(for userId: Int) -> Promise<[Post]> {
        let (promise, resolver) = Promise<[Post]>.pending()
        let parameters = ["userId": userId]
        Alamofire.request(postsUrlString, parameters: parameters).responseJSON { response in
            if let error = response.error {
                resolver.reject(error)
            }
            if let data = response.data {
                do {
                    let posts = try self.decoder.decode([Post].self, from: data)
                    resolver.fulfill(posts)
                } catch {
                    resolver.reject(ApplicationError.postsCouldNotBeParsed)
                }
            } else {
                resolver.fulfill([Post]())
            }
        }
        return promise
    }

    getAllUsers()
        .then { users -> Promise<[Post]> in
            print("Promise users: \(users)")
            return self.getPosts(for: users[4].id)
        }.done { posts -> Void in
            print("Posts for user: \(posts)")
        }.catch { error in
            print("Something went wrong: \(error)")
        }

Map

What to do when the promise returns type X but you need it to return type Y? JavaScript, not being typesafe, allows a mix and match of promise return types, and it's up to the developer to know what to expect. This is impossible with Swift's strict type checking. Use the map extension function instead. Return a non-promise type from the promise handler and PromiseKit automatically wraps it in an appropriately typed promise.

Example:

    func getUserName(for userId: Int) -> Promise<String> {
        return getAllUsers()
            .map { users -> String in
                let user = users.first(where: { $0.id == userId })
                if let user = user {
                    return user.username
                } else {
                    return "Not found"
                }
            }
    }

Delay

What to do when the server you're communicating with isn't keeping up with a chain of computationally intensive calls, or when a pause is needed when changing an element in the UI? Use the after convenience function.
Example:

    after(seconds: 1).done {
        print("1 second has passed")
    }

Timeout

What to do when that pesky server call hangs and there's no hope of getting a reply? Use a delay with the race convenience function. The race convenience function invokes the promise handler as soon as any one of the provided promises either returns or rejects.

Example:

    let timeout = after(seconds: 10)
    let getAllUsersPromise = getAllUsers()

    race(getAllUsersPromise.asVoid(), timeout.asVoid())
        .done {
            if timeout.isResolved {
                print("getAllUsers() timed out")
            } else if let users = getAllUsersPromise.value {
                print("Promise users: \(users)")
            }
        }.catch { error in
            print("Something went wrong: \(error)")
        }

There are a few extras here that warrant explanation. First, there is asVoid(). All promises passed to race must return the same type, so, when this isn't possible normally, the asVoid() method changes the return type to Promise<Void>. Then, in the promise handler, check the isResolved property of the timeout promise to see if the time specified has passed before the getAllUsersPromise could complete. Otherwise, retrieve the result of the getAllUsersPromise via its value property.

When

What to do when you need to make several calls in parallel? Use the when convenience function. PromiseKit's when behaves differently than its $q cousin, which returns an immediately fulfilled promise from a non-promise. PromiseKit's when waits for all provided promises to fulfill or for the first one to reject. Please note that, if one promise rejects, the other promises may still fulfill or reject, but these will be ignored.
Example:

    let getAllUsersPromise = getAllUsers()
    let getAllPostsPromise = getAllPosts()

    when(fulfilled: [getAllPostsPromise.asVoid(), getAllUsersPromise.asVoid()])
        .done { _ in
            if let users = getAllUsersPromise.value,
               let posts = getAllPostsPromise.value {
                print("Promise users: \(users)")
                print("Promise posts: \(posts)")
            }
        }.catch { error in
            if getAllPostsPromise.isRejected {
                print("getAllPosts() errored: \(error)")
            } else {
                print("getAllUsers() errored: \(error)")
            }
        }

Check the isRejected property of each promise to find the one that failed, if necessary.

What to do when you need to wait for all of the promises to resolve, regardless of whether any have rejected? In that case, when also has you covered. This variation waits for all provided promises to either reject or fulfill and then invokes the promise handler.

Example:

    let getAllUsersPromise = getAllUsers()
    let getAllPostsPromise = getAllPosts()

    when(resolved: [getAllPostsPromise.asVoid(), getAllUsersPromise.asVoid()])
        .done { results in
            var anyPromiseFailed = false
            results.forEach { result in
                switch result {
                case .rejected(let error):
                    print("Action partially failed: \(error)")
                    anyPromiseFailed = true
                default:
                    break
                }
            }
            if anyPromiseFailed {
                // handle the error
            } else if let users = getAllUsersPromise.value,
                      let posts = getAllPostsPromise.value {
                print("Promise users: \(users)")
                print("Promise posts: \(posts)")
            }
        }

The return value of when is an array of each promise's results in the order in which they were passed. when(resolved:) never errors, since the result of each promise, whether fulfilled or rejected, is passed to the promise handler.

Conclusion

I hope this helps you delve deeper into the PromiseKit toolbox and elucidates the most common use cases you'll encounter. Please leave any questions, comments, or quemments below. To try out PromiseKit yourself with a generic JSON server, visit JSONPlaceholder.
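As an addendum: the race and when patterns above map directly onto Python's asyncio, which can make the semantics easier to experiment with. This is a sketch with made-up stand-in coroutines, not PromiseKit code:

```python
import asyncio

async def get_all_users():
    await asyncio.sleep(0.01)            # stand-in for a network call
    return ["alice", "bob"]

async def get_all_posts():
    raise RuntimeError("posts service is down")

async def race_with_timeout(seconds):
    # Like race(promise, after(seconds:)): resume when either finishes.
    fetch = asyncio.ensure_future(get_all_users())
    timer = asyncio.ensure_future(asyncio.sleep(seconds))
    await asyncio.wait({fetch, timer}, return_when=asyncio.FIRST_COMPLETED)
    if fetch.done():
        timer.cancel()
        return fetch.result()
    fetch.cancel()                       # the real call lost the race
    return None

async def when_resolved():
    # Like when(resolved:): collect every result, rejections included.
    return await asyncio.gather(
        get_all_users(), get_all_posts(), return_exceptions=True
    )

users = asyncio.run(race_with_timeout(1))
print(users)                             # ['alice', 'bob']

results = asyncio.run(when_resolved())
print(results)    # [['alice', 'bob'], RuntimeError('posts service is down')]
```

`gather` without `return_exceptions=True` fails fast like `when(fulfilled:)`; with it, every result — exception instances included — is delivered in order, like `when(resolved:)`.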
Introduction

Entity Framework is the Microsoft preferred method of data access for .NET applications. It supports strongly-typed access through LINQ. Entity Framework also allows developers to program against a conceptual model that reflects application logic rather than a relational model that reflects the database structure. Entity Framework can be used from an ASP.NET application (using Entity Data Source), in ASP.NET MVC, and so on. In this article we will be creating this using an MVC app, and we'll be using Visual Studio 2012 for the demo.

What is the Code First Approach?

The Code First approach provides an alternative to the Database First and Model First approaches to the Entity Data Model, and creates a database for us based on the classes that we will be creating in this article.

Demo MVC Application

Create a new ASP.NET MVC project via New > Project > ASP.NET MVC 4 Web Application > Empty Template; you will get the following structure. Use the following steps for the remainder of this demo.

Step 1: Adding the NuGet Package (if not available in References)

Right-click on the References folder and select "Manage NuGet Packages". Alternatively you can install it from Tools > Library Package Manager > Package Manager Console and type "Install-Package EntityFramework". Okay, let's do this from the NuGet Package window; type "Entity Framework" into the search box and click to install it. After installation you will see a new library file in the References folder, "EntityFramework".

Step 2: Adding Classes

Right-click on the Models folder to add a new class file named "Student.cs" and type the following code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;

    namespace Code_First_Approach_in_Entity_Framework.Models
    {
        public class Student
        {
            public int StudentID { get; set; }
            public string Name { get; set; }
            public string Address { get; set; }
            public string Mobile { get; set; }
        }
    }

So, I have declared four properties: StudentID, Name, Address and Mobile.
Now go ahead and create a class to manage the link between the database and the above properties by inheriting DbContext. Note: use the namespace System.Data.Entity.

Step 3: Adding a DbContext

Right-click on the Models folder to add a new class file named "StudentContext.cs" and enter the following code:

    using System.Data.Entity;

    public class StudentContext : DbContext
    {
        public DbSet<Student> Students { get; set; }
    }

Now, this DbContext will do many things for us, such as creating databases and tables. You can check your newly added database file.

Step 4: Adding the Controller and Views

Build the solution here via the menu Build > Build Solution, because I want the model class and data context class to be available for me in the Add Controller window. Go ahead and add a controller; to do that, right-click on the Controllers folder in the Solution Explorer, select Add > Controller, use the name "StudentController" and make the selections shown in the image below.

Now, everything is set up and ready for us, including the controller and its various views. Let's run the project. You will get an error page as given below; this is because the application expects a "Home" controller by default and we don't have this controller — instead we have a "Student" controller. You will see how to fix this in the next step. For now, request the "student" controller directly by its URL and try to perform CRUD operations. Everything is functioning well so far.

Step 5: Routing the Application

When you run the application, the very first thing you will get is an error page, because the application expects "Home" as a controller by default, but we don't have a "Home" controller in our application — instead we have "Student" — so we need to modify the default hit by the application.
For this, find the file "RouteConfig.cs" in the "App_Start" folder and make a single change:

    routes.MapRoute(
        name: "Default",
        url: "{controller}/{action}/{id}",
        defaults: new { controller = "student", action = "Index", id = UrlParameter.Optional }
    );

The only change is to replace the controller value with "student". Now run the application again; you will not see that error page anymore. If you are interested in learning more about routing, go ahead and learn it from ScottGu's blog post on the topic.

Step 6: Data Validation Using Data Annotations

Assume you want to validate your data before sending it to the database, or you want to define a validation expression or make a field "Required". For this we can take advantage of the System.ComponentModel.DataAnnotations.dll assembly shipped with the MVC package; you can find it in the References folder. Now, to enable the validation, open the "Student.cs" model and add some validation attributes.

If you run the application here, you will get an error message, because this class file directly represents the database table's structure, and making any changes here is subject to "Code First Migrations" to update the database and its tables — that is why you got the above error. Remember, you will lose all your records if you migrate your database here. Microsoft is still working on avoiding such data loss; hopefully we'll see that functionality in coming releases.
Let's move to the next step to do the database migration.

Step 7: Database Migration

Now, don't worry about data loss here; go ahead and update the database table to test the preceding validation rules. For this, open the Global.asax file and make the following changes. You will see that the above-defined validations are working now; the image is given below.

Step 8: Looking at the Database

So, if you want to see where the database has been created for this app, open the directory where you created your app and open the App_Data folder; you will find your database files. Now the question is how to change the database instance name — it is very long here. Let's learn that in the next step.

Step 9: Changing the Database Instance Name

In the above image you will see our default database name is "Code_First_Approach_in_Entity_Framework.Models.StudentContext". Let's change it. Remember, you will again lose your records. Open the "StudentContext.cs" file and make the following changes. Now, again check the App_Data folder; you will notice new database files have been created.

Find a very nice video on this title by Julie Lerman. I have done a couple of video posts on MVC basics; if you are interested, go and watch them at my YouTube channel. I hope you like this article. Thanks.
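The core "code first" idea — classes are the source of truth and the storage schema is derived from them — can be sketched language-agnostically. Here is a toy Python illustration of the concept (not Entity Framework, and the type mapping is my own simplification):

```python
# Toy "code first": derive a CREATE TABLE statement from a class's
# type annotations, the way EF derives tables from model classes.
TYPE_MAP = {int: "INT", str: "NVARCHAR(MAX)"}

class Student:
    StudentID: int
    Name: str
    Address: str
    Mobile: str

def create_table_sql(model):
    cols = ", ".join(
        f"{name} {TYPE_MAP[tp]}" for name, tp in model.__annotations__.items()
    )
    return f"CREATE TABLE {model.__name__} ({cols})"

sql = create_table_sql(Student)
print(sql)
# CREATE TABLE Student (StudentID INT, Name NVARCHAR(MAX), Address NVARCHAR(MAX), Mobile NVARCHAR(MAX))
```

Changing the class and regenerating the schema is, in miniature, what a Code First migration does — which is also why schema changes risk touching existing data.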
    int printf(const char *format, ...);
    int fprintf(FILE *stream, const char *format, ...);
    int sprintf(char *str, const char *format, ...);

    #include <stdarg.h>

    int vprintf(const char *format, va_list ap);
    int vfprintf(FILE *stream, const char *format, va_list ap);
    int vsprintf(char *str, char *format, va_list ap);

In no case does a non-existent or small field width cause truncation of a field; if the result of a conversion is wider than the field width, the field is expanded to contain the conversion result.

The conversion formats %D, %O, and %U are not standard and are provided only for backward compatibility. These may not be provided under Linux.

Because sprintf and vsprintf assume an infinitely long string, callers must be careful not to overflow the actual space; this is often impossible to assure.
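Python's % operator borrows these conversion rules almost verbatim, which makes the field-width behavior easy to demonstrate (illustrative only, not part of the man page):

```python
# Field width pads but never truncates -- mirroring the printf rule above.
print("%5d" % 42)          # '   42'  (padded to width 5)
print("%2d" % 123456)      # '123456' (field expands; no truncation)
print("%-5d|" % 42)        # '42   |' (left-justified)
print("%05.2f" % 3.14159)  # '03.14'  (zero-padded, 2 decimal places)
```

The second line is the key point: a width smaller than the converted value is simply ignored, which is why width alone can never be used to bound output length.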
So I've been working a bit with an Else If statement and some boolean operators. Even though I've managed to get it 100% working, I'm not sure if I did anything sloppy, or if there is a more effective way of doing it. If anyone can take a look over my code and let me know, I'd really appreciate it.

Code:
#include <cstdlib>
#include <iostream>

using namespace std;

int main(int argc, char *argv[])
{
    int age;   // declaring our integer
    char test; // variable containing ONE character

    /* Comparisons evaluate to a 0 or a 1 depending on whether the
       expression is true or false. 0 = false, and 1 = true.
       Example: cout << ( 2 == 1 ); evaluates to 0. */

    /* Not operator: accepts one input; if it's true it returns false,
       if it's false it returns true. !(any nonzero value) evaluates to 0.
       Written as (!). */

    /* And operator: returns true (1) if both inputs are true, otherwise
       returns false (0). Written as &&. Evaluated before the Or operator. */

    /* Or operator: if either or both of the values are true, it returns
       true (1), otherwise false (0). Written as ||. Evaluated after And. */

    cout << "Please input your age: "; // << for output
    cin >> age;                        // >> for input
    cin.ignore();                      // ignore the enter key

    if (!(age == 100))
    {
        // If age is not 100, run this code
        cout << "You're not 100 years old!!\n";
    }
    else if (age == 100)
    {
        cout << "You're 100 years old!\n";
    }

    cin.get(); // pause the program until enter is pressed
}
XSD, RELAX NG and Why We Didn't Ship System.Xml.IXmlType

Tim Bray has a post entitled More Relax where he writes

I often caution people against relying too heavily on schema validation. "After all," I say, "there is lots of obvious run-time checking that schemas can't do, for example, verifying a part number." It turns out I was wrong; with a little extra work, you can wire in part-number validation—or pretty well anything else—to RelaxNG. Elliotte Rusty Harold explains how. Further evidence, if any were required, that RelaxNG is the world's best schema language, and that anyone who's using XML but not RelaxNG should be nervous.

Elliotte Rusty Harold's article shows how to plug custom datatype validation into Java RELAX NG validators. This enables one to enforce complex constraints on simple types, such as "the content of an element is correctly spelled, as determined by consulting a dictionary file" or "the number is prime", to take examples from ERH's article.

Early in the design of version 2.0 of the System.Xml namespace in the .NET Framework, we considered creating a System.Xml.IXmlType interface. This interface would basically represent the logic for plugging one's custom types into the XSD validation engine. After a couple of months and a number of interesting discussions between myself, Joshua and Andy, we got rid of it.

There were two reasons we got rid of this functionality. The simple reason was that we didn't have much demand for it. Whenever we had people complaining about the limitations of XSD validation, it was usually due to its inability to define co-occurrence constraints (i.e., if some element or attribute has a certain value then the expected content should be blah) and other aspects of complex type validation, rather than a need for finer-grained simple type validation. The other reason was that the primary usage of XSD in many of our technologies is as a type system, not as a validation language.
There's already the fact that XSD schemas are used to generate .NET Framework classes via the System.Xml.Serialization.XmlSerializer and relational tables via the System.Data.DataSet. However, there were already impedance mismatches between these domains and XSD; for example, if one defined a type as xs:nonNegativeInteger, this constraint wasn't honored in the generated C#/VB.NET classes created by the XmlSerializer or in the relational tables created by the DataSet. Then there was the additional wrinkle that, at the time, we were working on XQuery, which used XSD as its type system, and we had to factor in the fact that if people could add their own simple types we didn't just have to worry about validation but also how query operators would work on them. What would addition, multiplication or subtraction of such types mean? How would type promotion, casting or polymorphism work with some user's custom type defined outside the rules of XSD? Eventually we scrapped the class as having too much cost for too little benefit.

This reminds me of Bob DuCharme's XML 2004 talk Documents vs. Data, Schemas vs. Schemas, where he advised people on how to view RELAX NG and XSD. He advised viewing RELAX NG as a document validation language and considering XSD as a datatyping language. I tend to agree, although I'd probably have injected something in there about using XSD + Schematron for document validation so one could get the best of both worlds.
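The pluggable-datatype idea that RELAX NG supports (and that IXmlType would have provided) is essentially a registry mapping type names to validation predicates. A minimal, framework-agnostic Python sketch — all names here are invented for illustration, and real validators handle far more (facets, lexical spaces, error reporting):

```python
# A toy datatype library: type name -> predicate over the lexical value.
def is_prime(lexical):
    n = int(lexical)
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

DATATYPES = {
    "prime": is_prime,
    # Hypothetical part-number format: three letters, five digits.
    "partNumber": lambda s: len(s) == 8 and s[:3].isalpha() and s[3:].isdigit(),
}

def validate_simple_type(type_name, lexical_value):
    """Dispatch to a user-registered datatype, the way a RELAX NG
    validator dispatches to a custom datatype library."""
    try:
        return DATATYPES[type_name](lexical_value)
    except (KeyError, ValueError):
        return False

print(validate_simple_type("prime", "13"))             # True
print(validate_simple_type("prime", "15"))             # False
print(validate_simple_type("partNumber", "ABC12345"))  # True
```

This is exactly the kind of open-ended hook that raised the questions above: once arbitrary predicates define a "type", a query processor can no longer reason about what addition or casting means for it.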
Practical ASP.NET

Azure Functions can be used to trigger event-driven Webhooks. Here's how.

One of the features of Azure Functions is the ability to easily create Webhooks. Webhooks allow integration with other systems, including third-party systems. Essentially, the external system can call an Azure Function when an event happens; in this way, there's no need to periodically poll an external system to look for changes. If an external system supports Webhooks, it can be configured to point to an Azure Functions Webhook (via HTTP) and call the endpoint with relevant data. Code inside the Azure Function can take this incoming data and perform processing.

Azure Functions currently supports three types of Webhook function triggers.

Creating a GitHub Webhook

GitHub Webhooks allow a function to be notified on a wide range of events that occur in a GitHub repository.

To create an Azure Functions GitHub Webhook, first create a Function App in the Azure Portal, click the New Function button and select the C# GitHub Webhook template, as shown in Figure 1. This will create a function with some starter code that attempts to extract a GitHub comment from the incoming data:

    using System.Net;

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // Get request body
        dynamic data = await req.Content.ReadAsAsync<object>();

        // Extract github comment from request body
        string gitHubComment = data?.comment?.body;

        return req.CreateResponse(HttpStatusCode.OK, "From Github:" + gitHubComment);
    }

You can write function code to extract different data items depending on what's configured in GitHub.

Configuring a GitHub Webhook Trigger

In this example, the function will be called every time a new issue is added to the repository.
The first step is to obtain the "GitHub secret" and the function URL for the Azure Function. To do this, click on the GitHub secret item and the function URL item in the Azure Portal function editor and take a copy of the secret and URL, as in Figure 2.

Next, head over to GitHub and navigate to the settings for the repository that will have a Webhook configured. In the settings for the repository, navigate to the Webhooks section and choose "Add webhook." Paste in the function URL and secret, and change the content type to "application/json," as shown in Figure 3. To specify that only specific events will trigger the Webhook, choose "let me select individual events" and then check only the "issues" checkbox. Finally, click "Add webhook" to create it.

Creating Function Code to Extract Issue Data

The (JSON) data payload that will be POSTed to the function URL depends on what events were selected. For example, the issues payload contains a JSON "title" property nested inside an "issue." The following code shows the extraction of the action type and the issue title:

    using System.Net;

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        dynamic data = await req.Content.ReadAsAsync<object>();

        string actionPerformed = data?.action;
        string title = data?.issue?.title;

        log.Info($"Issue {title} was {actionPerformed}");

        return req.CreateResponse(HttpStatusCode.OK);
    }

Testing the Webhook

Now if a new issue (for example, with the title "Something's Wrong") is added to the repository, the Webhook will be called and the function code executed. The function logs in the Azure Portal show that the Webhook was executed as expected with the following log messages:

    2017-03-01T02:54:57.254 Function started (Id=1b6de019-a9a8-47f1-8ae5-317fdf9f0382)
    2017-03-01T02:54:57.520 C# HTTP trigger function processed a request.
    2017-03-01T02:54:57.817 Issue Something's Wrong was opened
    2017-03-01T02:54:57.817 Function completed (Success, Id=1b6de019-a9a8-47f1-8ae5-317fdf9f0382)

To learn more about Azure Functions, check out the documentation and the series of articles on my blog.
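On the receiving side, the "GitHub secret" configured above is what lets an endpoint check that a payload really came from GitHub: GitHub sends an HMAC of the request body in a header. A rough Python sketch of that check — the sha256 header variant reflects GitHub's documented scheme, the function name is mine, and this is not the Azure Functions template code (the template does this validation for you):

```python
import hashlib
import hmac

def is_valid_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC of the raw request body and compare it, in
    constant time, against the value sent in X-Hub-Signature-256."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)


secret = b"my-webhook-secret"                 # hypothetical secret
body = b'{"action": "opened", "issue": {"title": "Something\'s Wrong"}}'
header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()

print(is_valid_signature(secret, body, header))          # True
print(is_valid_signature(secret, b"tampered", header))   # False
```

`hmac.compare_digest` is used rather than `==` so the comparison doesn't leak timing information about how many leading bytes matched.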
SYNOPSIS

docker run [-d|--detach] [--detach-keys[=[]]] [--device[=[]]] [--device-read-bps[=[]]] [--device-read-iops[=[]]] [--device-write-bps[=[]]] [--device-write-iops[=[]]] [--dns[=[]]] [--dns-opt[=[]]] [--dns-search[=[]]] [--rm] [--security-opt[=[]]] [--stop-signal[=SIGNAL]] [--shm-size[=[]]] [--sig-proxy[=true]] [-t|--tty] [--tmpfs[=[CONTAINER-DIR[:<OPTIONS>]]] [-u|--user[=USER]] [--ulimit[=[]]] [--uts[=[]]] [-v|--volume[=[[HOST-DIR:]CONTAINER-DIR[:OPTIONS]]]] [--volume-driver[=DRIVER]] [--volumes-from[=[]]] [-w|--workdir[=WORKDIR]] IMAGE [COMMAND] [ARG...]

DESCRIPTION

Run a process in a new container. docker run starts a process with its own file system, its own networking, and its own isolated process tree. The IMAGE which starts the process may define defaults related to the process that will be run in the container, the networking to expose, and more, but docker run gives final control to the operator or administrator who starts the container from the image. For that reason docker run has more options than any other Docker command.

If the IMAGE is not already loaded then docker run will pull the IMAGE, and all image dependencies, from the repository in the same way running docker pull IMAGE would, before it starts the container from that image.

OPTIONS

-a, --attach=[]
   Attach to STDIN, STDOUT or STDERR.

   In foreground mode (the default when -d is not specified), docker run can start the process in the container and attach the console to the process's standard input, output, and standard error. It can even pretend to be a TTY (this is what most command-line executables expect) and pass along signals. The -a option can be set for each of stdin, stdout, and stderr.

--add-host=[]
   Add a custom host-to-IP mapping (host:ip).

   Add a line to /etc/hosts. The format is hostname:ip. The --add-host option can be set multiple times.

--blkio-weight=0
   Block IO weight (relative weight). Accepts a weight value between 10 and 1000.
--blkio-weight-device=[]
   Block IO weight (relative device weight, format: DEVICE_NAME:WEIGHT).

--cpu-shares=0
   CPU shares (relative weight).

   By default, all containers get the same proportion of CPU cycles. This proportion can be modified by changing the container's CPU share weighting relative to the weighting of all other running containers. To modify the proportion from the default of 1024, use the --cpu-shares flag to set the weighting to 2 or higher. The proportion will only apply when CPU-intensive processes are running. When tasks in one container are idle, other containers can use the left-over CPU time. The actual amount of CPU time will vary depending on the number of containers running on the system.

   For example, consider three containers: one has a cpu-share of 1024 and the two others have a cpu-share setting of 512. When processes in all three containers attempt to use 100% of CPU, the first container would receive 50% of the total CPU time. If you add a fourth container with a cpu-share of 1024, the first container only gets 33% of the CPU. The remaining containers receive 16.5%, 16.5% and 33% of the CPU.

   On a multi-core system, the shares of CPU time are distributed over all CPU cores. Even if a container is limited to less than 100% of CPU time, it can use 100% of each individual CPU core. For example, consider a system with more than three cores. If you start one container {C0} with -c=512 running one process, and another container {C1} with -c=1024 running two processes, this can result in the following division of CPU shares:

   PID    container    CPU    CPU share
   100    {C0}         0      100% of CPU0
   101    {C1}         1      100% of CPU1
   102    {C1}         2      100% of CPU2

--cpu-period=0
   Limit the container's CPU usage. This flag tells the kernel to restrict the container's CPU usage to the period you specify.

--cpu-quota=0
   Limit the container's CPU usage. By default, containers run with the full CPU resource. This flag tells the kernel to restrict the container's CPU usage to the quota you specify.
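The proportions in the --cpu-shares example fall out of a simple weight-over-total calculation, which can be checked with a few lines of Python (illustrative only — Docker itself relies on the kernel scheduler for this; note the text above rounds 16.7% loosely to 16.5%):

```python
def cpu_share_percentages(weights):
    """Each contending container gets weight / sum(weights) of CPU time."""
    total = sum(weights)
    return [round(100 * w / total, 1) for w in weights]

print(cpu_share_percentages([1024, 512, 512]))        # [50.0, 25.0, 25.0]
print(cpu_share_percentages([1024, 512, 512, 1024]))  # [33.3, 16.7, 16.7, 33.3]
```

Because the shares are relative, only the ratios matter: [1024, 512, 512] and [2, 1, 1] yield identical proportions.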
-d, --detach=true|false
   Detached mode: run the container in the background and print the new container ID. The default is false.

   At any time you can run docker ps in the other shell to view a list of the running containers. You can reattach to a detached container with docker attach. If you choose to run a container in the detached mode, then you cannot use the -rm option. When attached in the tty mode, you can detach from the container (and leave it running) using a configurable key sequence.

--detach-keys=""
   Override the key sequence for detaching a container. Format is a single character [a-Z] or ctrl-<value> where <value> is one of: a-z, @, ^, [, , or _.

--device=[]
   Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm).

--device-read-bps=[]
   Limit read rate from a device (e.g. --device-read-bps=/dev/sda:1mb).

--device-read-iops=[]
   Limit read rate from a device (e.g. --device-read-iops=/dev/sda:1000).

--device-write-bps=[]
   Limit write rate to a device (e.g. --device-write-bps=/dev/sda:1mb).

--device-write-iops=[]
   Limit write rate to a device (e.g. --device-write-iops=/dev/sda:1000).

--dns-search=[]
   Set custom DNS search domains (use --dns-search=. if you don't wish to set the search domain).

--dns-opt=[]
   Set custom DNS options.

--dns=[]
   Set custom DNS servers.

-e, --env=[]
   Set environment variables.

   This option allows you to specify arbitrary environment variables that are available for the process that will be launched inside of the container.

--entrypoint=""
   Overwrite the default ENTRYPOINT of the image.

   This option allows you to overwrite the default entrypoint of the image that is set in the Dockerfile.

--env-file=[]
   Read in a line-delimited file of environment variables.

--expose=[]
   Expose a port, or a range of ports (e.g. --expose=3300-3310). This informs Docker that the container listens on the specified network ports at runtime. Docker uses this information to interconnect containers using links and to set up port redirection on the host system.
--group-add=[]
Add additional groups to run as

-h, --hostname=""
Container host name

Sets the container host name that is available inside the container.

--help
Print usage statement

-i, --interactive=true|false
Keep STDIN open even if not attached. The default is false.

When set to true, stdin is kept open even if not attached.

-l, --label=[]
Set metadata on the container (e.g., --label com.example.key=value)

--label-file=[]
Read in a line delimited file of labels

--link=[]
Add link to another container in the form of <name or id>:alias or just <name or id> in which case the alias will match the name

If the operator uses --link when starting the new client container, then the client container can access the exposed port via a private networking interface. Docker will set some environment variables in the client container to help indicate which interface and port to use.

--mac-address=""
Container MAC address (e.g. 92:d0:c6:0a:29:33)

--name=""
Assign a name to the container.

The UUID identifiers come from the Docker daemon, and if a name is not assigned to the container with --name then the daemon will also generate a random string name. The name is useful when defining links (see --link) (or any other place you need to identify a container). This works for both background and foreground Docker containers.

-P, --publish-all=true|false
Publish all exposed ports to the host interfaces. The default is false.

When set to true publish all exposed ports to the host interfaces. If the operator uses -P (or -p) then Docker will make the exposed port accessible on the host and the ports will be available to any client that can reach the host. When using -P, Docker will bind any exposed port to a random port on the host within an ephemeral port range defined by /proc/sys/net/ipv4/ip_local_port_range. To find the mapping between the host ports and the exposed ports, use docker port.
-p, --publish=[]
Publish a container's port, or range of ports, to the host.

(e.g., docker run -p 1234-1236:1222-1224 --name thisWorks -t busybox but not docker run -p 1230-1236:1230-1240 --name RangeContainerPortsBiggerThanRangeHostPorts -t busybox)

With ip: docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage

Use docker port to see the actual mapping: docker port CONTAINER $CONTAINERPORT

--uts=host
Set the UTS mode for the container

host: use the host's UTS namespace inside the container. Note: the host mode gives the container access to changing the host's hostname and is therefore considered insecure.

--privileged=true|false
Give extended privileges to this container. The default is false.

By default, Docker containers are "unprivileged" (=false) and cannot, for example, run a Docker daemon inside the Docker container. This is because by default a container is not allowed to access any devices. A "privileged" container is given access to all devices. When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor to allow the container nearly all the same access to the host as processes running outside containers on the host.

--restart="no"
Restart policy to apply when a container exits (no, on-failure[:max-retry], always, unless-stopped).

--rm=true|false
Automatically remove the container when it exits (incompatible with -d). The default is false.

--security-opt=[]
Security Options

- "apparmor=unconfined" : Turn off apparmor confinement for the container
- "apparmor=your-profile" : Set the apparmor confinement profile for the container

--stop-signal=SIGTERM
Signal to stop a container. Default is SIGTERM.

--sig-proxy=true|false
Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied. The default is true.

--memory-swappiness=""
Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.

-t, --tty=true|false
Allocate a pseudo-TTY. The default is false.
When set to true Docker can allocate a pseudo-tty and attach to the standard input of any container. This can be used, for example, to run a throwaway interactive shell. The default is false.

The -t option is incompatible with a redirection of the docker client standard input.

-u, --user=""
Sets the username or UID used and optionally the groupname or GID for the specified command.

--ulimit=[]
Ulimit options

--volumes-from=[]
Mount volumes from the specified container(s)

Mounts already mounted volumes from a source container onto another container. You must supply the source's container-id. To share a volume, use the --volumes-from option when running the target container. You can share volumes even if the source container is not running.

By default, Docker mounts the volumes in the same mode (read-write or read-only) as it is mounted in the source container. Optionally, you can change this by suffixing the container-id with either the :ro or :rw keyword.

If the location of the volume from the source container overlaps with data residing on a target container, then the volume hides that data on the target.

-w, --workdir=""
Working directory inside the container

The default working directory for running binaries within a container is the root directory (/). The developer can set a different default with the Dockerfile WORKDIR instruction. The operator can override the working directory by using the -w option.

Exit Status

126 if the contained command cannot be invoked

$ docker run busybox /etc; echo $?
# exec: "/etc": permission denied
docker: Error response from daemon: Contained command could not be invoked
126

127 if the contained command cannot be found

$ docker run busybox foo; echo $?
# exec: "foo": executable file not found in $PATH
docker: Error response from daemon: Contained command not found or does not exist
127

Exit code of contained command otherwise

$ docker run busybox /bin/sh -c 'exit 3'; echo $?
# 3

EXAMPLES

Running containers in read-only mode protects the container's image from modification. Read only containers may still need to write temporary data. The best way to handle this is to mount tmpfs directories on /run and /tmp.
# docker run --read-only --tmpfs /run --tmpfs /tmp -i -t fedora /bin/bash

Exposing log messages from the container to the host's log

If you want messages that are logged in your container to show up in the host's syslog/journal then you should bind mount the /dev/log directory as follows.

# docker run -v /dev/log:/dev/log -i -t fedora /bin/bash

From inside the container you can test this by sending a message to the log.

(bash)# logger "Hello from my container"

Then exit and check the journal.

# exit
# journalctl -b | grep Hello

This should list the message sent to logger.

Attaching to one or more from STDIN, STDOUT, STDERR

If you do not specify -a then Docker will attach everything (stdin, stdout, stderr). You can specify to which of the three standard streams you'd like to connect instead, as in:

# docker run -a stdin -a stdout -i -t fedora /bin/bash

Sharing IPC between containers

Using shm_server.c available here: <>

Testing --ipc=host mode:

Host shows a shared memory segment with 7 pids attached, happens to be from httpd:

$ sudo ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x01128e25 0 root 600 1000 7

Now run a regular container, and it correctly does NOT see the shared memory segment from the host:

$ docker run -it shm ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status

Run a container with the new --ipc=host option, and it now sees the shared memory segment from the host httpd:

$ docker run -it --ipc=host shm ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x01128e25 0 root 600 1000 7

Testing --ipc=container:CONTAINERID mode:

Start a container with a program to create a shared memory segment:

$ docker run -it shm bash
$ sudo shm/shm_server
$ sudo ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x0000162e 0 root 666 27 1

Creating a 2nd container correctly shows no shared memory segment from the 1st container:

$ docker run shm ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status

Create a 3rd container using the new --ipc=container:CONTAINERID option; now it shows the shared memory segment from the first:

$ docker run -it --ipc=container:ed735b2264ac shm ipcs -m
$ sudo ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x0000162e 0 root 666 27 1

Linking Containers

Note: This section describes linking between containers on the default (bridge) network, also known as "legacy links". Using --link on user-defined networks uses the DNS-based discovery, which does not add entries to /etc/hosts, and does not set environment variables for discovery.

The link feature allows multiple containers to communicate with each other. For example, a container whose Dockerfile has exposed port 80 can be run and named as follows:

# docker run --name=link-test -d -i -t fedora/httpd

A second container, in this case called linker, can communicate with the httpd container, named link-test, by running with the --link=<name>:<alias> option:

# docker run -t -i --link=link-test:lt --name=linker fedora /bin/bash

Now the container linker is linked to container link-test with the alias lt. Running the env command in the linker container shows environment variables with the LT (alias) context (LT_)

# env
HOSTNAME=668231cb0978
TERM=xterm
LT_PORT_80_TCP=tcp://172.17.0.3:80
LT_PORT_80_TCP_PORT=80
LT_PORT_80_TCP_PROTO=tcp
LT_PORT=tcp://172.17.0.3:80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
LT_NAME=/linker/lt
SHLVL=1
HOME=/
LT_PORT_80_TCP_ADDR=172.17.0.3
_=/usr/bin/env

When linking two containers Docker will use the exposed ports of the container to create a secure tunnel for the parent to access. If a container is connected to the default bridge network and linked with other containers, then the container's /etc/hosts file is updated with the linked container's name.
Note

Since Docker may live update the container’s /etc/hosts file, there may be situations when processes inside the container can end up reading an empty or incomplete /etc/hosts file. In most cases, retrying the read again should fix the problem.

Mapping Ports for External Usage

The exposed port of an application can be mapped to a host port using the -p flag. For example, a httpd port 80 can be mapped to the host port 8080 using the following:

# docker run -p 8080:80 -d -i -t fedora/httpd

Creating and Mounting a Data Volume Container

Many applications require the sharing of persistent data across several containers. Docker allows you to create a Data Volume Container that other containers can mount from. For example, create a named container that contains directories /var/volume1 and /tmp/volume2. The image will need to contain these directories so a couple of RUN mkdir instructions might be required for your fedora-data image:

# docker run --name=data -v /var/volume1 -v /tmp/volume2 -i -t fedora-data true
# docker run --volumes-from=data --name=fedora-container1 -i -t fedora bash

Multiple --volumes-from parameters will bring together multiple data volumes from multiple containers. And it's possible to mount the volumes that came from the DATA container in yet another container via the fedora-container1 intermediary container, allowing you to abstract the actual data source from users of that data:

# docker run --volumes-from=fedora-container1 --name=fedora-container2 -i -t fedora bash

Mounting External Volumes

To mount a host directory as a container volume, specify the absolute path to the directory and the absolute path for the container directory separated by a colon:

# docker run -v /var/db:/data1 -i -t fedora bash

You can override the default labeling scheme for each container by specifying the --security-opt flag. For example, you can specify the MCS/MLS level, a requirement for MLS systems.
Specifying the level in the following command allows you to share the same content between containers.

# docker run --security-opt label=level:s0:c100,c200 -i -t fedora bash

An MLS example might be:

# docker run --security-opt label=level:TopSecret -i -t rhel7 bash

To disable the security labeling for this container versus running with the --permissive flag, use the following command:

# docker run --security-opt label=disable -i -t fedora bash

If you want a tighter security policy on the processes within a container, you can specify an alternate type for the container. For example:

# docker run --security-opt label=type:svirt_apache_t -i -t centos bash

Note: You would have to write policy defining a svirt_apache_t type.

Setting device weight

If you want to set /dev/sda device weight to 200, you can specify the device weight by --blkio-weight-device flag. Use the following command:

# docker run -it --blkio-weight-device "/dev/sda:200" ubuntu

Specify isolation technology for container (--isolation)

This option is useful in situations where you are running Docker containers on Microsoft Windows. The --isolation <value> option sets a container's isolation technology. On Linux, the only supported value is default, which uses Linux namespaces. These two commands are equivalent on Linux:

$ docker run -d busybox top
$ docker run -d --isolation default busybox top

On Microsoft Windows, --isolation can take any of these values:

- default: Use the value specified by the Docker daemon's --exec-opt option. If the daemon does not specify an isolation technology, Microsoft Windows uses process as its default value.
- process: Namespace isolation only.
- hyperv: Hyper-V hypervisor partition-based isolation.
In practice, when running on Microsoft Windows without a daemon option set, these two commands are equivalent: $ docker run -d --isolation default busybox top $ docker run -d --isolation process busybox top If you have set the --exec-opt isolation=hyperv option on the Docker daemon, any of these commands also result in hyperv isolation: $ docker run -d --isolation default busybox top $ docker run -d --isolation hyperv busybox top HISTORY April 2014, Originally compiled by William Henry (whenry at redhat dot com) based on docker.com source material and internal work. June 2014, updated by Sven Dowideit <[email protected]> July 2014, updated by Sven Dowideit <[email protected]> November 2015, updated by Sally O'Malley <[email protected]>
http://manpages.org/docker-run
Jordan Clark Member since Dec 11, 2008 - Profile: /members/620-jordan-clark.htm Recent Blog Comments By Jordan Clark Using A Name Suffix In ColdFusion's CFMail Tag Posted on Jan 15, 2013 at 6:34 AM Adobe should hire Ben as the official solution site for ColdFusion issues/questions, or at least throw some love his way. Thanks for saving me a bundle of time!... read more » Syncing %{REQUEST_URI} Behaviors In Apache mod_rewrite And Helicon Ape mod_rewrite Posted on Aug 24, 2012 at 3:12 PM Hey Ben, just noticed a little typo near the top of your article: "Image that you have an .htaccess file with the given rule:"... read more » Ask Ben: Hiding / Encrypting ColdFusion CFID And CFTOKEN Values Posted on Jun 20, 2007 at 9:12 PM persiste... read more » jQuery Custom Selectors - Holy Cow That Is So Badass! Posted on Feb 23, 2007 at 6:54 PM Hey Ben, cool for showing how easy it is to make a custom selector, but shouldn't you use a namespace to add custom/arbitrary attributes to existing tags? Otherwise your code wont validate as xhtml.... read more »
http://www.bennadel.com/members/620-jordan-clark.htm
Hey guys, so I'm working on the basics of a game at the moment. I don't know anything about Python and was wondering if anyone had a script or tutorial to help me out. I need to run up to this ‘CUBE’ and when I'm near it, the ‘CUBE’ will travel down an axis away from me (or in a direction I choose), but when I move away from the ‘CUBE’ it will move back to its original location. I used some logic editing to make the ‘CUBE’ move away from me (using the near sensor) but I cannot get it to move back to its original location. Any help would be greatly appreciated.

Hi! I read about your idea, and tried to bring it into Python. But I am a newcomer as well. It did not work out, I get an error. I am not sure about if I should post my stuff here, but if it is 100% crap, the big guys of Blender game will tell us. I thought about using Python to get the actual world position of the object, to get the defined original position and to get the distance to the player. If the player is near, nothing will happen. If it is far, the object will get a force-push to move back to its original location. I fear the syntax is wrong and I fear it is too simple and there are better ways, but here is my (fail)attempt. Before the code, I want to tell you what helped me: tutorials for Python are everywhere to find, but partly they confused me, because of different Blender versions and different code use. Blender's evolution speed is good, but causes problems for learning sometimes. I want to give you some links to tutorials which I consider as helpful:

- Monster's basic guides, I should read that again too!
- Blender API!!! The source of scripting power
- that guy is great and has more helpful tutorials!
- has some good tutorials, clear and easy to understand
- another aspect which may help

OK, here's the attempt:

import bge

def main():
    # get the scene
    cont = bge.logic.getCurrentController()
    own = cont.owner
    scene = bge.logic.getCurrentScene()

    # search for 'players'. If only one player, it could be addressed directly.
    playerList = [s for s in scene.objects if 'player' in s]
    if playerList:
        print(playerList, "check, list exists")

    # define the original position:
    original_valuex = 0
    original_valuey = 0
    original_valuez = 0
    cubeposition_x = own.worldPosition.x

    # check if the player is near:
    distance = own.getDistanceTo(playerList[0])

    # check if the distance is enough to trigger the "back to origin" behavior
    if distance >= 20:
        # check if the value has changed / is not the original value
        if (cubeposition_x - original_valuex) > 0:
            # move it on the x axis in negative direction:
            # apply force to x axis
            force = [20.0, 0.0, 0.0]
            # use local axis
            local = True
            # apply force
            own.applyForce(force, local)
        if (cubeposition_x - original_valuex) < 0:
            # move it on the x axis in positive direction:
            # apply force to x axis
            force = [-20.0, 0.0, 0.0]
            # use local axis
            local = True
            # apply force
            own.applyForce(force, local)
    # ...same with y and z positions.

main()

I got an error, that “own” is not defined. Good luck for your solution!

Thanks for the tips, the links are already helping me understand all this, it's making reading it easier for sure! Thanks for the links!!!
https://blenderartists.org/t/bge-moving-objects-in-game-help/567019
Theodore Y. Ts'o <tytso@MIT.EDU>:
> I don't think we need to solve the problem at all. See my previous
> message to linux-kernel about why resource forks really don't buy you
> much. They end up being too transparent to programs like, cp, mv, and
> tar.
>
> Instead, we're much better off designing a high-level API (implemented
> using a replaceable shared library) for storing and retrieving metadata
> information (and a common metadata format which both KDE and GNOME
> share!!!), and then having a shared library which implements the storage
> of said metadata information via some non-kernel, non-FS means. If
> later someone wants to extend the filesystem to provide storing resource
> forks, fine we can replace the shared library with one that makes the
> non-standard, non-POSIX API calls, but I really don't see any value in
> storing such information in this fashion over any one of the myriad of
> schemes which don't require filesystem support.
>
> Do it as in the high-level interface, where it belongs.

Ted's points are well taken. I'll add a few more.

2. The right way to think about the Unix file system as it is is that it's just a namespace manager -- a mechanism that takes pathnames and gives back byte streams. Resource forks at fs level would be bad design because they would complicate that abstraction. This is a good reason to avoid doing them unless there's some overwhelming and obvious gain to be had for the complexity added -- and I don't see one.

3. This kind of level-mixing mistake has already been made once. System-V-style file locks should never have been implemented in the kernel; it would have sufficed to implement them via a shared library with a few tricks. Instead we got kernel bloat. Let's not make this error again.

(Yes, I've been lurking here for a while. I just never had anything I needed to say before.)
http://lkml.org/lkml/1998/9/1/214
Red Hat Bugzilla – Bug 204920 memleak in db4 Last modified: 2013-07-02 19:17:28 EDT

Description of problem:
memleak on db->open

Version-Release number of selected component (if applicable):
db4-4.3.29-6

How reproducible:
Always

Steps to Reproduce:

#include <db.h>

int main(void)
{
    DB *dbp;
    if (0 == db_create(&dbp, 0, DB_CXX_NO_EXCEPTIONS)) {
        /* arg 2 is DB_TXN *, 0 is valid from docs, but appears to cause a mem leak */
        dbp->open(dbp, 0, "/no-exist/but/irrelevent", 0, DB_BTREE, DB_RDONLY, 0644);
        dbp->close(dbp, 0);
    }
    return 0;
}

gcc db4test.c -lpthread -ldb
valgrind --leak-check=full ./a.out

Actual results:

==2837== 132 bytes in 1 blocks are definitely lost in loss record 1 of 1
==2837==    at 0x40203C6: malloc (vg_replace_malloc.c:149)
==2837==    by 0x638260E: __os_malloc (in /lib/libdb-4.3.so)
==2837==    by 0x6382956: __os_calloc (in /lib/libdb-4.3.so)
==2837==    by 0x6390FDB: __xa_get_txn (in /lib/libdb-4.3.so)
==2837==    by 0x6391C5D: (within /lib/libdb-4.3.so)
==2837==    by 0x80484A4: main (in /home/caolan/a.out)

Expected results:
no leak

Additional info:
related to 2nd arg of open

Are you sure that usage of DB_CXX_NO_EXCEPTIONS is intentional? db_create supports only 0 or DB_XA_CREATE flags according to documentation, luckily DB_CXX_EXCEPTIONS and DB_XA_CREATE are equivalent (==2).

The memleak was caused by broken SET_TXN macro in xa_db.c and occurs only when db->open()ing with txnid == NULL.

Yeah, thanks, that looks dubious alright.

db4-4.3.29-7.fc5 has been pushed for fc5, which should resolve this issue. If these problems are still present in this version, then please make note of it in this bug report.
https://bugzilla.redhat.com/show_bug.cgi?id=204920
Opened 6 months ago
Closed 6 months ago
Last modified 6 months ago

#29016 closed Bug (fixed)

Reuse of UpdateQuery breaks some delete updates

Description (last modified by )

On a model A, when deleting a foreign key pointing to a model B, some other foreign key of the model A pointing to the same model B may be nullified. I have isolated this behaviour on a simple project:

models.py:

from django.db import models

class ChildModel(models.Model):
    name = models.CharField(max_length=200)

class ParentModel(models.Model):
    name = models.CharField(max_length=200)
    child_1 = models.ForeignKey(ChildModel, on_delete=models.SET_NULL, related_name='parents_1', null=True)
    child_2 = models.ForeignKey(ChildModel, on_delete=models.SET_NULL, related_name='parents_2', null=True)

Django shell session:

from testapp.models import ParentModel, ChildModel

child_1 = ChildModel.objects.create(name="child_1")
child_2 = ChildModel.objects.create(name="child_2")
parent_1 = ParentModel.objects.create(name="parent 1", child_1=child_1, child_2=child_2)
parent_2 = ParentModel.objects.create(name="parent 2", child_1=child_2, child_2=child_1)
child_1.delete()
parent_1 = ParentModel.objects.get(pk=parent_1.pk)
parent_2 = ParentModel.objects.get(pk=parent_2.pk)

# parent_1.child_2 and parent_2.child_1 should normally be equal to child_2 but...
parent_1.child_2 is not None and parent_2.child_1 is not None
# False is returned

This simple project has been tested on an SQLite database. The same behaviour has been first discovered on a PostgreSQL database.

A mis-reuse of an UpdateQuery seems to be the cause of this bug. After searching the Django bug tracker I have found another issue with the same patch attached: #28099. I have opened this new ticket because the issue seems to be more severe (I have experienced large data loss) and more general.

This issue has been found on versions 1.11 and 2.0 of Django.
I have created a new branch on my github account with patch and test associated:

Attachments (1)

Change History (9)

Changed 6 months ago by

comment:1 Changed 6 months ago by

comment:2 Changed 6 months ago by

comment:3 Changed 6 months ago by

I think that qualifies for a backport based on the "data loss" criteria. Rather than adding entirely new models, can you reuse an existing model (perhaps another ForeignKey will need to be added on an existing model) either in delete or delete_regress?

comment:4 Changed 6 months ago by

I have changed a model in delete_regress to add two foreign keys (it seems clearer to me). I have sent a pull request for the master branch. If it really qualifies for a backport (I agree with this qualification), how can I proceed to propose a patch? I couldn't find proper documentation.

comment:5 Changed 6 months ago by

Looks good, I made some cosmetic edits and added release notes.

Simple patch version (no regression test yet)
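The failure mode described in this ticket — one mutable UpdateQuery object reused for two different SET NULL updates — can be illustrated schematically in plain Python. This is only an analogy of the bug, not Django's actual code: the stale filter from the first update leaks into the second, so the second UPDATE matches too many rows.

```python
class ReusedUpdateQuery:
    """Toy stand-in for a query object that accumulates WHERE filters."""
    def __init__(self):
        self.filters = []   # WHERE clauses, OR-ed together
        self.values = {}    # SET clause

    def add_filter(self, field, value):
        self.filters.append((field, value))

    def set_values(self, values):
        self.values = values

deleted_pk = 1
q = ReusedUpdateQuery()

# First update: SET child_1 = NULL WHERE child_1 = 1
q.add_filter("child_1", deleted_pk)
q.set_values({"child_1": None})

# Reusing the same object for the second field keeps the old filter around:
# SET child_2 = NULL WHERE child_1 = 1 OR child_2 = 1  -- too broad!
q.add_filter("child_2", deleted_pk)
q.set_values({"child_2": None})

print(len(q.filters))  # -> 2: the stale filter from the first update leaked in
```

The merged fix makes the deletion machinery build a fresh UpdateQuery per field instead of mutating a shared one.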
https://code.djangoproject.com/ticket/29016
THE SQL Server Blog Spot on the Web

The summer is almost over and while we are working on new content (books and other for), I already have the plans for this Autumn's conferences. If you are interested in attending the PASS Summit 2016, don't miss 24 Hours of PASS (live online, September 7-8, 2016); I will preview the full-day seminar about Power BI on 07 Sep 2016 21:00 GMT. This event is free, you just have to register, and there are many other interesting sessions to watch. I and Alberto Ferrari will also have a number of public trainings:

The course about Analysis Services Tabular Workshop is renewed and updated to Analysis Services 2016. The one in Amsterdam will be the first delivery in a public classroom; depending on the demand, we will propose new dates in 2017.

See you around the world!
http://sqlblog.com/blogs/marco_russo/archive/2016/08/23/upcoming-conference-speeches-and-workshops-in-2016-ssas-tabular-dax-powerpivot-powerbi.aspx
A Python library for automating web browsers Helium Helium is a Python library for automating web browsers. For example: Under the hood, Helium forwards each call to Selenium. The difference is that Helium's API is much more high-level. In Selenium, you need to use HTML IDs, XPaths and CSS selectors to identify web page elements. Helium on the other hand lets you refer to elements by their user-visible labels. As a result, Helium scripts are 30-50% shorter than similar Selenium scripts. What's more, they are easier to read and more stable with respect to changes in the underlying web page. Because Helium is simply a wrapper around Selenium, you can freely mix the two libraries. For example: # A Helium function: driver = start_chrome() # A Selenium API: driver.execute_script("alert('Hi!');") So in other words, you don't lose anything by using Helium over pure Selenium. In addition to its high-level API, Helium simplifies further tasks that are traditionally painful in Selenium: - Web driver management: Helium ships with its own copies of ChromeDriver and geckodriver so you don't need to download and put them on your PATH. - iFrames: Unlike Selenium, Helium lets you interact with elements inside nested iFrames, without having to first "switch to" the iFrame. - Window management. Helium notices when popups open or close and focuses / defocuses them like a user would. You can also easily switch to a window by (parts of) its title. No more having to iterate over Selenium window handles. - Implicit waits. By default, if you try click on an element with Selenium and that element is not yet present on the page, your script fails. Helium by default waits up to 10 seconds for the element to appear. - Explicit waits. Helium gives you a much nicer API for waiting for a condition on the web page to become true. 
For example, to wait for an element to appear in Selenium, you would write:

element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "myDynamicElement"))
)

With Helium, you can write:

wait_until(Button('Download').exists)

Installation

To get started with Helium, you need Python 3 and Chrome or Firefox.

If you already know Python, then the following command should be all you need:

pip install helium

Otherwise - Hi! I would recommend you create a virtual environment in the current directory. Any libraries you download (such as Helium) will be placed there. Enter the following into a command prompt:

python3 -m venv venv

This creates a virtual environment in the venv directory. To activate it:

# On Mac/Linux:
source venv/bin/activate
# On Windows:
call venv\scripts\activate.bat

Then, install Helium using pip:

python -m pip install helium

Now enter python into the command prompt and (for instance) the commands in the animation at the top of this page (from helium import *, ...).
Please follow the same coding conventions as the rest of the code, in particular the use of tabs over spaces. Before you submit a PR, ensure that the tests still work: python setup.py test This runs the tests against Chrome. To run them against Firefox, set the environment variable TEST_BROWSER to firefox. Eg. on Mac/Linux: TEST_BROWSER=firefox python setup.py test On Windows: set TEST_BROWSER=firefox python setup.py test If you do add new functionality, you should also add tests for it. Please see the tests/ directory for what this might look like.
https://pythonawesome.com/a-python-library-for-automating-web-browsers/
BBC micro:bit 16x2 Character LCD Display

Introduction

LCD character displays are a useful way to get output from a microcontroller. The one used on this page is a fairly standard size, able to display 2 rows of 16 characters. The advantage we have over the LED matrix is that we don't need to have our text scrolling across the screen. This allows us to display things like sensor readings so that we can see them change in real time.

The most common standard for the controllers built into these displays is the Hitachi HD44780 LCD controller protocol. This is a parallel interface (we need to send signals down several wires at the same time). For many microcontrollers that have been around longer than the micro:bit (like Arduino), there are libraries to help you interact with common hardware like this. Although no such libraries exist for the micro:bit, I have been able to make a rough conversion of the Arduino library code for use with the micro:bit.

The LCD display that I am using is the Sparkfun 3.3V Basic 16x2 Character LCD Display. It looks like this,

Circuit

The HD44780 LCD controller interface is made up of 16 connections, not all of which are needed to communicate with the display. On most modules, these are broken out to 0.1 inch spaced pins and are in the standard order: VSS (GND), VDD (power), VO (contrast), RS, RW, E (enable/clock), D0-D7, then the two backlight pins.

We will need to connect up LCD pins 1 & 2 to our power pins on the micro:bit as well as connecting LCD pins 15 & 16 up to the power to make the backlight work. The resistor for this light is built into the display module.

The VO pin is needed and is best connected using a variable resistor (potentiometer) so that the contrast can be adjusted easily.

The RS pin needs to connect to a GPIO pin. It is set to LOW when we send commands to the display (like clearing the screen or moving the cursor) and set to HIGH when we are sending character data.

The RW pin is set to LOW when writing to the LCD and set to HIGH when reading.
We don't need to read from the display so we can save ourselves a GPIO pin by connecting this pin directly to GND. The CLOCK or ENABLE pin must be connected to a GPIO pin. We pulse this pin when we send commands or data to the display. There are 8 data pins. We can operate the display in either 8 bit or 4 bit mode. 4 bit mode means 4 rather than 8 connections. When we use 4 bit mode, we only connect up D4 - D7 to our GPIO pins. This does mean that we will be sending our bytes in two parts and will be a little slower when updating the display, but will save us 4 GPIO connections. The potentiometer used for the contrast is a 10K Ohm linear potentiometer. 10K or 20K is recommended on the datasheet for these displays.

Programming

The micro:bit hasn't been around long enough for there to be many examples of how to connect to these, or any hardware libraries that provide simple functions for you to use. This has been converted from the open source Arduino library with the timings for delays rounded up. This makes the communication a little slower than can be achieved, but more than good enough for most uses you want to make of the LCD. I have left in most of the constants from the library. If you explore a little, you will be able to use these to make use of more of the built-in functionality of the display (like blinking and scrolling text).
from microbit import *

# pin connections
rs = pin0
enable = pin1
datapins = [pin8, pin12, pin2, pin13]

# HD44780 commands and flags (standard values, as used in the Arduino
# LiquidCrystal library this code was converted from)
LCD_CLEARDISPLAY = 0x01
LCD_RETURNHOME = 0x02
LCD_ENTRYMODESET = 0x04
LCD_DISPLAYCONTROL = 0x08
LCD_FUNCTIONSET = 0x20
LCD_SETDDRAMADDR = 0x80
# entry mode flags
LCD_ENTRYLEFT = 0x02
LCD_ENTRYSHIFTDECREMENT = 0x00
# display on/off control flags
LCD_DISPLAYON = 0x04
LCD_CURSOROFF = 0x00
LCD_BLINKOFF = 0x00

def InitDisplay():
    # at least 50ms after power on
    sleep(50)
    # send rs, enable low - rw is tied to GND
    rs.write_digital(0)
    enable.write_digital(0)
    write4bits(0x03)
    sleep(5)
    write4bits(0x03)
    sleep(5)
    write4bits(0x03)
    sleep(2)
    write4bits(0x02)
    send(LCD_FUNCTIONSET | 0x08, 0)
    sleep(5)
    send(LCD_FUNCTIONSET | 0x08, 0)
    sleep(2)
    send(LCD_FUNCTIONSET | 0x08, 0)
    sleep(2)
    send(LCD_FUNCTIONSET | 0x08, 0)
    sleep(2)
    send(LCD_DISPLAYCONTROL | LCD_DISPLAYON | LCD_CURSOROFF | LCD_BLINKOFF, 0)
    clear()
    send(LCD_ENTRYMODESET | LCD_ENTRYLEFT | LCD_ENTRYSHIFTDECREMENT, 0)

# high level commands
def clear():
    send(LCD_CLEARDISPLAY, 0)
    sleep(2)

def home():
    send(LCD_RETURNHOME, 0)
    sleep(2)

def setCursor(col, row):
    orpart = col
    if row > 0:
        orpart = orpart + 0x40
    send(LCD_SETDDRAMADDR | orpart, 0)

def showText(t):
    for c in t:
        send(ord(c), 1)

# mid and low level commands
def send(value, mode):
    rs.write_digital(mode)
    write4bits(value >> 4)
    write4bits(value)

def pulseEnable():
    enable.write_digital(0)
    sleep(1)
    enable.write_digital(1)
    sleep(1)
    enable.write_digital(0)
    sleep(1)

def write4bits(value):
    for i in range(0, 4):
        datapins[i].write_digital((value >> i) & 0x01)
    pulseEnable()

# Test
InitDisplay()
showText("Hello")
setCursor(0, 1)
showText("World")
sleep(5000)
clear()
showText("How are you?")

The little section at the end is the most important for you. The 4 'high level' functions are the ones that you will use to interact with the display after you have called the InitDisplay() function once at the start of your program. Some parts of the Arduino library send the same command several times - I have copied the approach taken there and things work - that was enough for me.

Challenges

- Output hardware like this can be put to many uses. Anything that you display by scrolling across the LEDs will be easier to read on this display.
That might be the score for a game or the reading you are getting from a sensor connected on a GPIO pin.
- Use the display to make a stopwatch. Use running_time() rather than delays to control your timings and think carefully about how often you update the display.
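For the stopwatch challenge, the display side reduces to formatting a millisecond count. Below is a sketch of a plain helper (ordinary Python, nothing micro:bit-specific, so it can be tried anywhere; the MM:SS.t layout is just one choice that fits comfortably in 16 characters):

```python
def format_elapsed(ms):
    """Format a running_time()-style millisecond count as MM:SS.t."""
    tenths = (ms // 100) % 10
    seconds = (ms // 1000) % 60
    minutes = ms // 60000
    return "{:02d}:{:02d}.{}".format(minutes, seconds, tenths)

print(format_elapsed(0))      # 00:00.0
print(format_elapsed(61500))  # 01:01.5
```

In the stopwatch itself you would call something like showText(format_elapsed(running_time() - start)) and, as the challenge hints, only redraw when the formatted string actually changes.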
http://www.multiwingspan.co.uk/micro.php?page=lcd
OK, so I am in my 1st semester of C++. I have already finished my 1st assignment (below), a Win32 Console Application, and it works just fine. However, if I write the program as a Windows Forms Application with a GUI, then I can earn extra credit. My problem is that I can't seem to output the assignment to a label. I have created the GUI, but only being in my 1st week of class, I am unaware how to output the same info that the Console Application displays into a label box on the Windows Form. I need to display the following 4 items. Any help would be greatly appreciated!

// Restaurant Bill.cpp : Displays the meal cost, tax amount, tip amount, and total bill on the screen
#include "stdafx.h"
#include <iostream>
using namespace std;

int main()
{
    double meal = 44.50;
    double tax = meal * 0.0675;
    double subtotal = meal + tax;
    double tip = subtotal * 0.15;
    double total = subtotal + tip;

    cout << "Meal cost: " << meal << endl;
    cout << "Tax Amount: " << tax << endl;
    cout << "Tip Amount: " << tip << endl;
    cout << "Total: " << total << endl;
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/406800/need-to-convert-a-console-application-to-a-windows-forms-application
Coping with a big argument

March 26, 2013

Implementing special functions so that they work properly on the entirety of their (representable) domain is a very hard problem. Take for example the digamma function: it behaves like $\psi(z) \sim \log z$, so you would never really expect it to overflow. However, Mathematica (as of version 9.0.1) runs into problems even with a rather small input:

In[2]:= N[PolyGamma[-10 + 10^7 I]]
Out[2]= 16.1181 + 1.5708 I

In[3]:= N[PolyGamma[-10 + 10^8 I]]
Out[3]= 18.4207 + 1.5708 I

In[4]:= N[PolyGamma[-10 + 10^9 I]]
General::ovfl: Overflow occurred in computation.
General::ovfl: Overflow occurred in computation.
Out[4]= Indeterminate

It only throws a worse tantrum if you increase the precision:

In[5]:= N[PolyGamma[-10 + 10^9 I], 1000]
General::ovfl: Overflow occurred in computation.
General::ovfl: Overflow occurred in computation.
General::ovfl: Overflow occurred in computation.
General::stop: Further output of General::ovfl will be suppressed during this calculation.
N::meprec: Internal precision limit $MaxExtraPrecision = 50. reached while evaluating PolyGamma[0, -10 + 1000000000 I].
Out[5]= Indeterminate

Strangely enough, it works fine if the -10 is removed:

In[6]:= N[PolyGamma[10^9 I]]
Out[6]= 20.7233 + 1.5708 I

In[7]:= N[PolyGamma[10^30 I]]
Out[7]= 69.0776 + 1.5708 I

The probable explanation is that when the real part is negative, Mathematica uses the functional equation $$\psi(1 - z) - \psi(z) = \pi \cot(\pi z)$$ and isn't clever enough about evaluating the cotangent. Indeed,

In[9]:= Cot[N[Pi (-10 + 10^9 I)]]
General::ovfl: Overflow occurred in computation.
Out[9]= Indeterminate

When $b$ is large, the real and imaginary parts of $\cot(a+bi)$ behave like $O(e^{-2b})$ and $-1 + O(e^{-2b})$ respectively.
Most likely, Mathematica evaluates the cotangent using a formula such as $$\cot(a+bi) = -\frac{\sin (2 a)}{\cos (2 a)-\cosh (2 b)}+\frac{i \sinh (2 b)}{\cos (2 a)-\cosh (2 b)}$$ where if $b$ is huge, the hyperbolic functions overflow, even though the quotient should come out as something small. We find the same problem in other systems, such as Pari:

? psi(-10 + 10^6 * sqrt(-1))
%1 = 13.81551055801935743743825707 + 1.570806826794896234231321717*I
? psi(-10 + 10^9 * sqrt(-1))
*** at top-level: psi(-10+10^9*sqrt(-1
*** ^--------------------
*** psi: the PARI stack overflows !
current stack size: 4000000 (3.815 Mbytes)
[hint] you can increase GP stack with allocatemem()
*** Break loop: type 'break' to go back to GP

If we try the evaluation in mpmath, it works a little better:

>>> print digamma(-10 + 1e9j)
(20.7232658369464 + 1.5707963372949j)

It also works with a much bigger imaginary part, but here it takes a long time to complete:

>>> print digamma("-10 + 1e1000000j")
(2302585.09299405 + 1.5707963267949j)

In fact mpmath also uses a naive formula for the cotangent: the difference is just that it has arbitrary-precision exponents. The reason it gets slow is that in order to compute the exponential function with something of order $10^{10^6}$ as input, it needs some $10^6$ digits of $\log 2$ for argument reduction, and outputs a number with roughly as many digits in the exponent. It works, but it's not very practical, and you're obviously out of luck if you kick the imaginary part up to $10^{10^{15}}$. The best solution is to evaluate the cotangent using different formulas depending on the location in the complex plane. For example, the imaginary part of $\cot(a+bi)$ can be written as $$-\frac{e^{4b}-1}{1 - 2 e^{2b} \cos(2a) + e^{4b}}.$$ If $b$ is a large negative number, the exponentials become tiny (for large positive $b$ one just rewrites the formula in terms of $e^{-4b}$ and $e^{-2b}$ instead).
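That rewrite is easy to sanity-check numerically, even in ordinary double precision. A sketch (using the sign convention Im cot(a+bi) → -1 as b → +∞, so the expression carries a leading minus):

```python
import cmath
import math

def im_cot_stable(a, b):
    """Im cot(a + b*i), rewritten so that for large negative b only
    exponentials of negative arguments appear:
    Im cot(a+bi) = -(e^(4b) - 1) / (1 - 2 e^(2b) cos(2a) + e^(4b))."""
    e2b = math.exp(2 * b)   # underflows harmlessly toward 0 for very negative b
    e4b = e2b * e2b
    return -(e4b - 1.0) / (1.0 - 2.0 * e2b * math.cos(2 * a) + e4b)

# Agrees with the naive evaluation where both are representable:
naive = (1 / cmath.tan(1.0 - 2.0j)).imag
print(abs(im_cot_stable(1.0, -2.0) - naive) < 1e-12)   # True

# Still fine where cosh(2b) and sinh(2b) would overflow a double:
print(im_cot_stable(1.0, -1e6))   # 1.0
```

The same trick, with e^{-2b} and e^{-4b}, covers large positive b.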
I've recently been implementing more elementary functions in Arb, paying attention to such issues. Testing the digamma function, it works as expected. The code

#include "fmpcb.h"

int main()
{
    fmpcb_t x, y;
    fmpcb_init(x);
    fmpcb_init(y);

    long prec = 53;

    fmpcb_set_ui(x, 10);
    fmpcb_pow_ui(x, x, 1000000, prec);
    fmpcb_mul_onei(x, x);
    fmpcb_sub_ui(x, x, 10, prec);   /* real part becomes -10 */

    fmpcb_digamma(y, x, prec);
    fmpcb_printd(y, 15);
    printf("\n");

    fmpcb_clear(x);
    fmpcb_clear(y);
}

outputs:

(2302585.09299405 + 1.5707963267949j) +/- (1.58e-11, 2.67e-17j)

The computation just takes some 30 microseconds, so it's no slower than with a tiny argument. In fact, I have now fixed almost all functions in Arb to work properly with huge exponents. The only missing function is binary-to-decimal conversion. To illustrate, here are the successive values of $x$ if you set $x = 1/2$ and then repeatedly compute $x = \exp x$ at 53-bit precision:

(7425180500362907 * 2^-52) +/- (1 * 2^-52)
(5855046294128313 * 2^-50) +/- (617423411 * 2^-78)
(6380028057516027 * 2^-45) +/- (941782241 * 2^-71)
(6853548015219477 * 2^209) +/- (358430563 * 2^192)
(+inf) +/- (+inf)
(+inf) +/- (+inf)

When the output becomes expensive (or even physically impossible) to represent, the exponential function of a positive argument automatically overflows to an infinite interval. The maximum allowed exponent is not fixed but varies roughly proportionally with the precision.
At 150-bit precision, it becomes possible to do one more iteration: (588283407380499048463925620267632796905727263 * 2^-148) +/- (1 * 2^-149) (231942279659869632990725740596674949038034297 * 2^-145) +/- (617423411 * 2^-175) (1010955799572893508099758397146956100274197425 * 2^-142) +/- (941782241 * 2^-168) (1085988031898535876444393234034937166751112959 * 2^112) +/- (358430563 * 2^95) (1334524050391896112657493127448630305501140025 * 2^8135028756631480222606112090277357169611889599342489768863892054745940119766089) +/- (383295431 * 2^8135028756631480222606112090277357169611910086956529576124135528869507204498460) (+inf) +/- (+inf) Conversely, with a too big negative number as input, the exponential function outputs something like $0 \pm 2^{-n}$ where $n$ is roughly proportional to the working precision. The output has very poor relative accuracy, but it’s perfectly useful as a magnitude bound. This nice behavior is exploited in the functional equation for the digamma function. Likewise, for trigonometric functions, if the real part of the input is larger than $2^n$ where $n$ roughly is proportional to the precision, an output similar to $0 \pm 1$ is produced immediately, rather than internally trying to compute $n$ bits of $\pi$ for the argument reduction. It’s a bit inconvenient to not always be guaranteed the best floating-point result, for example if asking for a 53-bit value of $\cos(10^{100000})$. However, such situations can be dealt with by repeatedly increasing the precision until the output is accurate. Of course, the point of automatic error propagation is that it allows doing such things robustly. What we gain is that the worst-case cost of an evaluation becomes predictable, and we don’t have to fear that the computation might hang or crash just because of large input (the worst thing that can happen is that the output bounding interval never improves to anything better than $0 \pm \infty$).
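For comparison, ordinary IEEE doubles run out at essentially the same point in that iteration; a quick check (plain Python floats, where math.exp raises OverflowError instead of returning an infinite interval):

```python
import math

x = 0.5
values = []
try:
    while True:
        x = math.exp(x)     # iterate x = exp(x), starting from 1/2
        values.append(x)
except OverflowError:
    pass

print(len(values))   # 4 iterations survive before exp overflows a double
print(values[-1])    # roughly 5.6e78, matching Arb's last finite 53-bit value
```

The interval arithmetic just makes the failure mode explicit and recoverable instead of an exception or a silent infinity.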
http://fredrikj.net/blog/2013/03/coping-with-a-big-argument/
30 August 2011 17:11 [Source: ICIS news] LONDON (ICIS)--Leading politicians in Germany have sparked a new debate over the E10 gasoline blend. Patrick Doring, parliamentary vice chair of the Free Democrat party, said the government's E10 policy has failed. In an interview with regional daily Passauer Neue Presse, Doring called on the government to promote biodiesel, instead of bioethanol, in order to comply with EU fuel and renewable targets. Meanwhile, the Free Democrats' parliamentary chair, Rainer Bruderle, told Saarbrucker Zeitung that E10 needs to be "discussed once more." A high-profile summit with industry leaders earlier this year to promote E10 had not yielded the results Bruderle had expected, the former economics minister added. The comments came after an oil industry executive said last week that Germany's refiners could face penalties of up to €400m ($580m) from the troubled E10 launch, which they would seek to pass on to drivers. Refiners launched E10 because it enables them to comply with EU fuel quality standards and quotas. If they fail to comply, they face penalties. MWV is confident that future E10 sales will rise, spokeswoman Karin Retzlaff said. However, Retzlaff added that refiners would seek to include all costs related to E10 in gasoline prices at the pump. ($1 = €0.69)
http://www.icis.com/Articles/2011/08/30/9488813/germany-politicans-spark-new-e10-debate-but-refiners-still-confident.html
My (different) data didn't start/end at the same date. Hi, I work with many different stock datasets, and they don't start/end on the same dates. When I add all of the stock data, it shows only the overlapping date range. How do I deal with this? Thanks.

- backtrader administrators last edited by

With no code, no output, no chart ... the question would be what's the actual meaning of show in: "When I add all of the stock data, it shows only the overlapping date range."

Oh, I'm so sorry. Here is my code. I just add 2 data feeds (the companies enter the market at 2009-09-02 for the AAV symbol and 2012-05-31 for the ADVANC symbol) into cerebro and try to run the example strategy to print out the data (Close). I noticed that the backtest starts at 2012-05-31, so I can't access data before 2012-05-31. The question is how to add all the data without cutting any rows off. Thanks.

import backtrader as bt
import os
import sys
import datetime
import pandas as pd

# Create a Strategy
class TestStrategy(bt.Strategy):

    def log(self, txt, dt=None):
        ''' Logging function for this strategy'''
        dt = dt or self.datas[1].datetime.date(0)
        print('%s, %s' % (dt.isoformat(), txt))

    def __init__(self):
        # Keep a reference to the "close" line in the data[0] dataseries
        self.dataclose = self.datas[1].close

    def next(self):
        # Simply log the closing price of the series from the reference
        self.log('Close, %.2f' % self.dataclose[0])

if __name__ == '__main__':
    cerebro = bt.Cerebro()

    # Add a strategy
    cerebro.addstrategy(TestStrategy)

    # Datas are in a subfolder of the samples.
    # Need to find where the script is
    # because it could have been called from anywhere

    ATSET100UNIV = [
        'AAV',     # It starts at 2012-05-31
        'ADVANC',  # It starts at 2009-09-02
    ]

    datapath = '/Users/tempurak/Work/ls-eq/data/'
    symbols = os.listdir(datapath)
    symbols.remove('.DS_Store')

    for file in symbols:
        # Create a Data Feed
        if file[:-4] in ATSET100UNIV:
            data = bt.feeds.GenericCSVData(
                dataname = datapath + file,
                fromdate=datetime.datetime(2000, 1, 1),
                todate=datetime.datetime(2017, 2, 20),
                nullvalue = 0.0,
                dtformat = ('%Y-%m-%d'),
                datetime = 0,
                open = 1,
                high = 2,
                low = 3,
                close = 4,
                volume = 5,
                openinterest = -1)
            cerebro.adddata(data)

    # Set our desired cash start
    cerebro.broker.setcash(100000.0)
    cerebro.addanalyzer(bt.analyzers.PyFolio, _name='pyfolio')

    print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
    cerebro.run()
    print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())

The Output

Starting Portfolio Value: 100000.00
2012-05-31, Close, 3.70
2012-06-01, Close, 3.28
2012-06-05, Close, 3.12
2012-06-06, Close, 3.30

- backtrader administrators last edited by backtrader

This is by design. Each object has a minimum period, and while that minimum period is not met, the objects dependent on it cannot go into next and remain in prenext. For data feeds the minimum period is simply 1, i.e.: they have started delivering data. The reason is as simple as:

def next(self):
    ...
    print(self.data0.close[0])  # ok ... starts earlier than self.data1 and has delivered
    print(self.data1.close[0])  # NOK ... kaboom ... exception raised if data1 hadn't started to deliver (but it has)
    ...

The idea is that code NEVER breaks inside next. There is no code guard around self.data1 checking if it has already delivered, because it has. It is guaranteed in next that all objects already deliver. You can always forward each call from prenext to next, but you will have to manually check the length of data1 (and associated indicators) to make sure the code doesn't break.
def next(self):
    ...
    print(self.data0.close[0])  # ok ... starts earlier than self.data1 and has delivered
    if len(self.data1):
        print(self.data1.close[0])  # guard placed because it may not have delivered
    ...
https://community.backtrader.com/topic/450/my-different-data-didn-t-start-end-at-the-same-date/1
Jim Meyering wrote:
"Richard W.M. Jones" <rjones redhat com> wrote:

There is no uid_t or getuid in MinGW. I'm not really sure that forcing connections readonly if the user is non-root is a useful thing to be doing anyway, so perhaps this code is better off just being deleted?

For the missing uid_t, you could add this to configure.in

    AC_CHECK_TYPE(uid_t, int)

then no need for ifndef around the decl of "uid".

autoconf docs seem to suggest that this usage is deprecated:
<quote>
-- Macro: AC_CHECK_TYPE (TYPE, DEFAULT)
Autoconf, up to 2.13, used to provide this version of `AC_CHECK_TYPE', deprecated because of its flaws. First, although it is a member of the `CHECK' clan, it does more than just checking. Secondly, missing types are defined using `#define', not `typedef', and this can lead to problems in the case of pointer types.
</quote>

With this function (and a test for getuid in configure.in),

#ifndef HAVE_GETUID
static int getuid() { return 1; }   /* or maybe that should be "return 0"? */
#endif /* HAVE_GETUID */

you could avoid the remaining ifdefs.

Better just to check for getuid? Having said that, I still think it'd be better just to delete this code, because forcing non-root Xen connections to be readonly doesn't seem very useful.
https://www.redhat.com/archives/libvir-list/2007-December/msg00195.html
Adi Oltean's Weblog - Flashbacks on technology, programming, and other interesting things

Let's assume that all you want is to write some simple code that writes to a text file. A few assumptions:
1) You need to avoid corruption of any kind.
2) Either all of your writes have to make it to the disk, or none of them.
3) The file is updated serially - no concurrent updates from separate processes are allowed. So only one process writes to the file at a time.
4) No, you cannot use cool new technologies like TxF. Remember, all you want is just to write to a text file - no fancy code allowed.

What are the possible problems?

Many people mistakenly think that writing to a file is an atomic operation. In other words, this sequence of function calls is not going to cause garbage in your file. Wrong. Can you guess why? (Don't peek ahead for the response.)

echo This is a string >> TestFile.txt
echo This is a string >> TestFile.txt

The problem is that the actual write operation is not atomic. A potential problem is when the machine reboots during the actual write. Let's assume that your file write is ultimately causing two disk sectors to be overwritten with data. Let's even assume that each of these sectors is part of a different NTFS cluster, and these two clusters are part of the same TestFile.txt file. The end of the first sector contains the string "This is" and the beginning of the second sector "a string". What if one of the corresponding hardware write commands to write these sectors is lost, for example due to a machine reboot? You end up with only one of these sectors overwritten, but not the other. Corruption! Now, when the machine reboots, there will be no recovery at the file contents level. This is by design with NTFS, FAT, and in fact with most file systems, irrespective of the operating system. The vast majority of file systems do not support atomicity in data updates.
(That said, note that NTFS does have recovery at the metadata level - in other words, updates concerning file system metadata are always atomic. The NTFS metadata will not become corrupted during a sudden reboot.)

The black magic of caching

So in conclusion you might end up with the first sector written, but not with the second sector. Even if you are aware of this problem you might still mistakenly think that the first sector is always written before the second one. In other words, assuming that "this is" is always written before "a string" in the code below:

using System;
using System.IO;

class Test
{
    public static void Main()
    {
        using (StreamWriter sw = new StreamWriter("TestFile.txt"))
        {
            sw.Write("This is");
            sw.Write("a string");
        }
    }
}

This assumption is again wrong. You can again have a rare situation where the machine crashes during your update, and "a string" can end up in the file, but "This is" is not saved. Why? One potential explanation is related to the caching activity. Caching happens at various layers in the storage stack. The .NET Framework performs its own caching in the Write method above. This can interfere with your actual intended order of writes. So let's ignore .NET and let's present a second example, this time using pure Win32 APIs:

WCHAR wszString1[] = L"This is";
WCHAR wszString2[] = L"a string";

fSuccess = WriteFile(hTempFile, wszString1,
    sizeof(WCHAR) * wcslen(wszString1), &dwBytesWritten, NULL);
if (!fSuccess)
    ...
fSuccess = WriteFile(hTempFile, wszString2,
    sizeof(WCHAR) * wcslen(wszString2), &dwBytesWritten, NULL);

Again, here you can also have caching at the operating system level, in the Cache Manager, where the file contents can be split across several in-memory data blocks. These blocks are not guaranteed to be written in their natural order.
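The user-space layer of that caching is easy to observe directly. The sketch below uses Python purely for brevity (the same idea applies to StreamWriter's internal buffer); the file name is arbitrary, and the size seen before the flush depends on the runtime's buffer size:

```python
import os

f = open("demo.txt", "w")
f.write("This is")
# The bytes typically still sit in the runtime's own buffer here:
# nothing has been handed to the OS yet.
size_before = os.path.getsize("demo.txt")

f.flush()              # push the runtime's buffer down to the OS cache
os.fsync(f.fileno())   # ask the OS to push its cache toward the disk
                       # (the Win32 analogue is FlushFileBuffers)
size_after = os.path.getsize("demo.txt")

f.close()
os.remove("demo.txt")
print(size_before, size_after)  # typically: 0 7
```

Each layer you skip flushing is a layer that can reorder or delay your writes.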
For example, the lazy writer thread (a special thread used by the Cache Manager that flushes unused pages to disk) can cause an out-of-order flush. There are other considerations that can cause an out-of-order data flush, but in general you need to be aware that any cache layers in your I/O can cause writes to be randomly reordered. The same reasoning applies to our third example:

echo This is >> TestFile.txt
echo a string >> TestFile.txt

Again, you cannot be sure that the file will not end up corrupted - you can have rare scenarios where the resultant file will contain either the word "This" or the word "string" but not both!

The solution? One idea is to use special write modes like FILE_FLAG_WRITE_THROUGH or FILE_FLAG_NO_BUFFERING, although in these cases you lose the obvious benefit of caching. You have to pass these flags to CreateFile(). Another idea is to manually flush the file contents through the FlushFileBuffers API.

So, how to do atomic writes, then?

From the example above, it looks like it is entirely possible that our writes might complete partially, even if this case is extremely rare. How can we make sure that these writes remain atomic? In other words, my write to this file should either result in the entire write being present in the file, or no write should be present at all. Seems like an impossible problem, but that's not the case. The solution? Let's remember that metadata changes are atomic. Rename is such a case. So, we can just perform the write to a temporary file, and after we know that the writes are on the disk (completed and flushed) then we can interchange the old file with the new file. Something like the sequence below (I used generic shell commands like copy/ren/del below, but in reality you need to call the equivalent Win32 APIs):

Write process (on Foo.txt):
- Step W1: Acquire "write lock" on the existing file.
(this is usually part of your app semantics, so you might not need any Win32 APIs here)
- Step W2: Copy the old file to a new temporary file. (copy Foo.txt Foo.Tmp.txt)
- Step W3: Apply the writes to the new file (Foo.Tmp.txt).
- Step W4: Flush all the writes (for example those remaining in the cache manager).
- Step W5: Rename the old file to an Alternate form. (ren Foo.txt Foo.Alt.txt)
- Step W6: Rename the new file into the old file. (ren Foo.Tmp.txt Foo.txt)
- Step W7: Delete the old Alternate file. (del Foo.Alt.txt)
- Step W8: Release "write lock" on the existing file.

This solution now has another drawback - what if the machine reboots, or your application crashes? You end up either with an additional Tmp or Alt file, or with a missing Foo.txt but with one or two temporary files like Foo.Alt.txt or Foo.Tmp.txt. So you need some sort of recovery process that would transparently "revert" the state of this file to the correct point in time. Here is a potential recovery process:

Recovery from a crash during write (on Foo.txt):
- Step R1: If Foo.txt is missing but we have both Foo.Alt.txt and Foo.Tmp.txt present, then we crashed between Step W5 and Step W6. Retry from Step W6.
- Step R2: If Foo.txt is present but Foo.Tmp.txt is also present, then we crashed before Step W5. Delete the Foo.Tmp.txt file.
- Step R3: If Foo.txt is present but Foo.Alt.txt is also present, then we crashed between Step W6 and Step W7. Delete the Foo.Alt.txt file.
And, not only that, but also Rename can fail if the old file already exists and someone has an open handle on it. So even steps W2 or W5 can fail too... The fix would be to always use unique temporary file names. In addition, during the recovery process, we will want to clean up all the "garbage" from previous temporary file leftovers. So, instead of files like Foo.Tmp.txt or Foo.Alt.txt, we should use Foo.TmpNNNN.txt and Foo.AltNNNN.txt, together with a smart algorithm to clean up the remaining "garbage" during recovery. Here is the overall algorithm:

Write process (on Foo.txt):
- Step W1: Acquire "write lock" on the existing file.
- Step W2: Copy the old file to a new unique temporary file. (copy Foo.txt Foo.TmpNNNN.txt)
- Step W3: Apply the writes to the new file (Foo.TmpNNNN.txt).
- Step W4: Flush all the writes (for example those remaining in the cache manager).
- Step W5: Rename the old file to a new unique Alternate form. (ren Foo.txt Foo.AltNNNN.txt)
- Step W6: Rename the new file into the old file. (ren Foo.TmpNNNN.txt Foo.txt)
- Step W7: Delete the old Alternate file. (del Foo.AltNNNN.txt) If this fails, simply ignore. The file will be deleted later during the next recovery.
- Step W8: Release "write lock" on the existing file.

Recovery from a crash during write (on Foo.txt):
- Step R1: If Foo.txt is missing but we have both Foo.AltNNNN.txt and Foo.TmpNNNN.txt present, then we crashed between Step W5 and Step W6. Retry from Step W6.
- Step R2: If Foo.txt is present but Foo.TmpNNNN.txt is also present, then we crashed before Step W5. Try to delete all Foo.TmpNNNN.txt files and ignore failures.
- Step R3: If Foo.txt is present but Foo.AltNNNN.txt is also present, then we crashed between Step W6 and Step W7. Try to delete all Foo.AltNNNN.txt files and ignore failures.

That's it!
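The same write-to-a-unique-temp-file-then-rename dance, minus the Win32 specifics, is how atomic replacement is usually done on modern runtimes. Below is a sketch in Python (chosen for brevity; the article's single-writer assumption still applies). os.replace is the atomic rename step - rename(2) on POSIX, MoveFileEx with MOVEFILE_REPLACE_EXISTING on Windows - and it collapses Steps W5-W7 into one operation, so no Alt file is needed:

```python
import os
import tempfile

def atomic_write_text(path, text):
    """Replace `path` so readers see either the old contents or the new
    contents, never a mix (single writer assumed, as in the article)."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Unique temp name in the same directory (same volume), echoing Foo.TmpNNNN.txt.
    fd, tmp = tempfile.mkstemp(prefix=os.path.basename(path) + ".Tmp", dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)            # Step W3: apply the writes to the temp file
            f.flush()
            os.fsync(f.fileno())     # Step W4: make sure the data is on disk
        os.replace(tmp, path)        # atomic swap, replacing Steps W5-W7
    except BaseException:
        os.unlink(tmp)               # best-effort cleanup of the temp file
        raise

atomic_write_text("Foo.txt", "This is a string")
```

If the process dies before the os.replace, the old Foo.txt is untouched; the only leftover is a stray temp file, which matches the article's Step R2 cleanup.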
http://blogs.msdn.com/adioltean/archive/2005/12/28/507866.aspx
I am trying to create a workflow with firework specs that depends on user parameters and executes fireworks-unaware Python functions. What is different about my use case compared to the examples is that instead of filling input files, I want to fill in YAML files that are then used to construct the rest of the workflow. The sample below is a toy illustration: template_task.yaml fills the template and filled.yaml represents the final workflow. If I do it in two steps (add-launch-add-launch) the example is trivial. What I would like is to make this a single dynamic workflow, so that after filling in the context, the YAML workflow is parsed and added to the workflow. I imagine that this would require that I subclass TemplateWriterTask, intercept the FWAction and add the new firework or workflow using the "additions" field.

Can someone tell me the correct way to construct the new fireworks or workflow from my filled YAML file? Is it OK if I just intercept the FWAction of the parent class and set the additions field? Can I parse multiple fireworks from one filled YAML file and express links between them? Could the template be a template for a workflow?

If it isn't feasible, I guess I'll write a Python wrapper around fireworks that does the add-launch-add-launch sequence, but I thought this might be an interesting capability. - Thanks

===== file template.yaml
spec:
  _tasks:
  - _fw_name: PyTask
    func: simple.echo
    args:
    - {{arg1}}

===== file template_task.yaml
spec:
  _tasks:
  - _fw_name: TemplateWriterTask
    context:
      arg1: hello
    output_file: filled.yaml
    template_file: template.yaml

===== file simple.py
def echo(arg1):
    print("Arg is {}".format(arg1))
https://matsci.org/t/using-templatewritertask-to-write-yaml-for-another-firework/3152
A few weeks ago, I posted some info on embedding SWF-based assets in AS3. One question that came up in the comments was how to embed a symbol from a Flash 9 SWF that has a custom class associated with it, and use it as an instance of that custom class. In other words, the embedded symbol comes through as a SpriteAsset, which "is a subclass of the Flash Player's Sprite class which represents vector graphic images that you embed in a Flex application." Note: "represents vector graphic images". Even when I had a real Star class, using describeType, getQualifiedClassName, and getQualifiedSuperclassName showed no sign of that class coming through with the embedded object. Any code that should execute from that class does not execute. In fact, even if you put some code on the timeline of that symbol, say drawing some random lines, it executes fine when compiled in the IDE, but that code will not run in the embedded symbol, pointing up the fact that ALL that is embedded is the vector graphic info. So, unless I'm missing something, that's that.

The one thing that is kind of odd, though, is that if you publish a Flash 8 SWF and then embed an asset from it that has some code in it, it gives you a warning that the AS2 code will be ignored. If all code is discarded anyway, I'm not sure why that warning even exists.
The way I am doing it is as follows: First of all I have to note that I am loading my assets at runtime, which in my application gives me more flexibility: I have an AssetLoader class implemented as a Singleton that extends EventDispatcher with a:

private var ldr:Loader;

and, removing listeners, dispatchers and other utilities, the relevant functions are:

// Loads the library and sets the application domain
public function loadLibrary($url:String):void {
    swfLib = $url;
    var req:URLRequest = new URLRequest($url);
    var context:LoaderContext = new LoaderContext();
    context.applicationDomain = ApplicationDomain.currentDomain;
    ldr.load(req, context);
}

// Gets the class with the corresponding name
public function getClass(className:String):Class {
    try {
        return ldr.contentLoaderInfo.applicationDomain.getDefinition(className) as Class;
    } catch(e:Error) {
        throw new IllegalOperationError(className + " definition not found in " + swfLib);
    }
    return null;
}

So when I want to instance a class, for example "Triangle" from the asset library that I generated in Flash 9, I just call the following:

var Triangle:Class = AssetLoader.getInstance().getClass("Triangle");
wing = new Triangle();

The important part is setting the LoaderContext to ApplicationDomain.currentDomain, which is a really cool concept.

I know the answer! Let's say you have a custom class, Boogie extends MovieClip, and it's all compiled up into a nice Flash 9 SWF called Boogie.swf. In your Flex app, use:

[Embed(source="Boogie.swf", mimeType="application/octet-stream")]
private var BOOGIE_CLASS :Class;

Now, BOOGIE_CLASS will be a class that extends ByteArray. (Not MovieClip!)
To reconstitute a Boogie instance, use:

    var boogieBytes:ByteArray = (new BOOGIE_CLASS() as ByteArray);
    var l:Loader = new Loader();
    l.loadBytes(boogieBytes, new LoaderContext(false, ApplicationDomain.currentDomain));
    // omitted for brevity: listening to the loader, waiting for COMPLETE
    var something:DisplayObject = l.content;
    trace("something is a " + something); // should output that it's a [Object Boogie]

[…] I'm still scratching my head over when you would use a loader class versus not use it. Via a simple embed, my objects are compiled in and I don't need the external library to be present. By contrast, if I use a loader class, objects are loaded at runtime. In AS3 you can manipulate the imported object's DisplayObject properties, so perhaps you have more available to you… well, at least with a Flash 9 object (see comments here). But what's eminently clear is that, if you want to communicate between an AVM1 SWF and an AVM2 SWF, you need to use LocalConnection. […]

Interesting. I'll have to try that out, Mark.

Well, my current project requires my main swf to be able to communicate with loaded swfs and other swfs on the same system (in a web page). To do this, I am currently using LocalConnection to invoke methods by name. I had thought about implementing a type library scheme in which each swf would have a function that sends an array of custom function definition objects to the caller, but that isn't really necessary for this project, since all I really need to do is command a swf (via LocalConnection) to do this or that, and if it needs to return a value, it does so via LocalConnection. However, I figure that if you REALLY want to be slick, declare some common interfaces in all swfs, and implement them as necessary. Then, when you get a class by using getDefinition(…), you can cast it to that interface and invoke methods that way. That is really the OOP way of doing things anyway.
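Ray's trick above (embed the SWF as an opaque ByteArray, then rehydrate it with Loader.loadBytes so the class survives) has a loose analogue outside Flash: serialize an object to bytes and reconstruct it later with its type intact. A minimal sketch in Python, purely to illustrate the idea — the Boogie class here is invented:

```python
import pickle

class Boogie:
    """Stand-in for the custom symbol class from the post."""
    def dance(self):
        return "boogie!"

# "Embed" an instance as an opaque blob of bytes...
boogie_bytes = pickle.dumps(Boogie())

# ...and later reconstitute a live object from those bytes.
# Unlike [Embed]-ing a symbol in Flex, the type information survives.
something = pickle.loads(boogie_bytes)
```

Here `something` really is a `Boogie`, which is exactly what the `trace` in the ActionScript version is checking for.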
I’m probably gonna have to do a re-write and move on into interface-implementing classes. Hey Keith. It’s about a year and a half after this article was posted. I’d be pretty interested in what your current preferred way of working is with the embedding issue. I’m running into a lot of casting issues with the Embed-statement on member variables. Mark Walters’ suggestion works a lot better in that division, but I don’t like the way the Embed-statements are scattered throughout the entire project like that. Would you like to share some insights about this? Cheers, Eric-Paul Has anyone run into the problem with casting a class loaded through a external swf to an interface that is defined in both the loaded and loading class? as in: AssetLoader.getInstance().getClass(â€Triangleâ€); wing = new Triangle(); var ims: IMyShapes = wing as IMyShapes ims is always null!!? […] Peters had a couple of posts a little while ago about embedding assets in as3 (1 and 2). One thing that came up in both of them that could not be resolved was how to associate a custom […] Mark, I tried your idea above and I get the folllowing error message while compiling: 1131: Classes must not be nested. Here is the code: [Embed(source=”components/externalAsset.swf”)] public class ExtenalAsset extends MovieClip; Mark Walters answer only works when you want the ActionScript code within a completely separate class from the embedded asset, because with his approach the ActionScript code is still not embedded. It is rather an extension of the symbol (class) you’re embedding (with no ActionScript code) and you add the code within the project that uses the asset. Not the typical approach if you ask me. I think Ray Greenwell (thanks a lot) has the correct answer. 
We got it working as follows:

    [Embed(source="assets.swf", mimeType="application/octet-stream")]
    public var Assets:Class;

    public var assetLoader:Loader;

    public function init():void {
        assetLoader = new Loader();
        assetLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, onAssetLoaderComplete);
        assetLoader.loadBytes(new Assets() as ByteArray, new LoaderContext(false, ApplicationDomain.currentDomain));
    }

    public function onAssetLoaderComplete(event:Event):void {
        // Get symbol
        var Symbol:Class = getDefinitionByName("symbolName") as Class;
        // Instantiate symbol
        var symbolInstance:* = new Symbol();
        // Execute method within symbol
        symbolInstance.myMethod("arg1", "arg2");
    }

I'm getting some strange behavior from my embedded objects that I thought I'd share. In my library I've got symbols linked to classes that embed themselves like so (in the Flash CS3 IDE there is a symbol "Baddie" linked to the enemies.Baddie class):

    package enemies {
        [Embed(source='../../flas/library.swf', symbol='enemies.Baddie')]
        public class Baddie extends Enemy {
        ...

I have another symbol in my library called Level and it contains multiple instances of the Baddie symbol. At runtime I parse the Level symbol to find all of its children:

    public function parseLevel(container:DisplayObjectContainer):void {
        var child:DisplayObject;
        for (var i:uint = 0; i < container.numChildren; i++) {
            child = container.getChildAt(i);
            trace(child);
            if (child is Baddie) {
                enemyList.push(child);
            }
        }
    }

This parses the Level symbol fine in the Flash CS3 IDE, but when I move over to FlashDevelop 3 RC2 and compile, the "Baddie" is now merely a MovieClip and will not be caught by the parseLevel() function. However, if I add a dummy Baddie object to the Level class, all instances of Baddie will be caught by if (child is Baddie)!

In the Level class:

    public var bad:Baddie;
http://www.bit-101.com/blog/?p=864
19 March 2010 03:22 [Source: ICIS news] GUANGZHOU (ICIS news)--Sinopec's subsidiary Guangzhou Petrochemical is running its 220,000 tonne/year cracker at Guangzhou in the southern province of Guangdong normally, after a fire at the facility shut its 8m tonne/year refinery, a company source said on Friday.

The fire, which started at around 15:30 hours local time (07:30 GMT) on Thursday due to a leakage from a wax oil pump, was brought under control at around 21:00 hours the same day. No injuries were reported.

"It is not known when the refinery will be restarted. If it takes too long, say more than one week to fix, the cracker could also be shut to wait for naphtha feedstock," the source added.

Li Li from CBI.
http://www.icis.com/Articles/2010/03/19/9344087/sinopec-guangzhou-pc-cracker-ops-normal-after-fire-refinery-shut.html
Excerpt from Professional ADO.NET 2: Programming with SQL Server 2005, Oracle, and MySQL

Autumn 2005 marked the arrival of the .NET Framework 2.0. But never fear — this release of ADO.NET won't force you to uproot existing code and rewrite it in the new format. Your old code should work under the new Framework (unless you're still using any of the bugs they've fixed — but you wouldn't do that... would you?). The new ADO.NET API is the same as before; it hasn't been turned upside down. In fact, the existing API has been carefully extended so that the code and applications you've written in 1.0 or 1.1 should continue to work without any change. All of the features introduced in ADO.NET 2.0 can be used incrementally. In other words, if you want to use one of the new features, all you need to do is add on to your existing code; you don't have to switch to a whole new API model. In general, you can preserve your existing code base, only adding the things you need for one particular feature in one part of your application. We're telling the truth — the API still works!

With every release of .NET, Microsoft has a bit of a spring clean, and this release is no different. They've killed off a few of the methods, types, and properties. What we said earlier still applies, however: Any old code you have with methods or types that have been deprecated or made obsolete will continue to run under ADO.NET 2.0, even if it has references to APIs that have been removed in .NET Framework 2.0. There is one catch: You can't recompile those applications with obsolete APIs under .NET 2.0 with the references remaining. For example, take something simple, such as this Console application, which was originally built in .NET 1.0 under Visual Studio 2002.
It instantiates the System.Data.OleDb.OleDbPermission class with a blank constructor, which was made obsolete in .NET 1.1 and remains obsolete in .NET 2.0:

    Sub UseObsoleteClass()
        Dim ODP As New System.Data.OleDb.OleDbPermission
    End Sub

The preceding code compiled (and still compiles) in .NET 1.0 without any problems. If, however, you attempt to compile the same code under .NET 1.1 or 2.0, the compiler will not be particularly nice to you, presenting you with a rather colorful compilation error:

    'Public Sub New()' is obsolete: 'OleDbPermission() has been deprecated.
    Use the OleDbPermission(PermissionState.None) constructor.'

When you run into that kind of exception and you want to compile your application under that version of the Framework, you must change your code to bypass the compiler errors. In the case of the preceding example, you can see that the compiler error that was thrown also describes the fix you should perform. This issue only exists, however, if you wish to recompile your applications. You don't need to recompile them, of course, just to have them run under a new version of the Framework. In fact, if you've already installed .NET 2.0, it's likely that many of your .NET applications are already running under it. (You can confirm this by checking the value of System.Environment.Version.ToString(), which will tell you the version of the Framework under which your applications are running.)

As long as you don't recompile your applications, they will continue to work fine under any version of the Framework.
You can force an application to run under a specific version of the Framework very easily with the addition of an entry to the application's configuration file (app.config/web.config) that defines the version of the Framework the application is to run under:

    <startup>
        <supportedRuntime version="v1.1.4322" />
    </startup>

In short, you don't need to recompile your existing applications to take advantage of the 2.0 release of the .NET Framework. In fact, you're probably already running existing applications that were developed in .NET 1.0 and 1.1 under .NET 2.0. Moreover, if you need to recompile your existing applications in .NET 2.0, you'll have to clean up anything that has been removed from the Framework. Try your applications under .NET 2.0. You might find they work flawlessly and that you can take complete advantage of the performance increases in both ADO.NET 2.0 and the Framework in general at no cost. We can't guarantee your code will work. Microsoft says it should, but of course, we all know their track record on that point — it means they may be writing some future notes of their own.

As well as dealing with types and methods that have been removed in .NET 2.0 or previous incarnations of the Framework, .NET 2.0 introduces changes of its own, marking many types and methods as obsolete — in other words, they won't work in future versions of the Framework. In the past, Microsoft has dealt harshly with the deprecation of members and types. In the transition between .NET 1.0 and .NET 1.1, types and members marked as obsolete would not compile under 1.1. With .NET 2.0, types and methods that have the mark of death placed on them by the Microsoft Grim Reaper are not being blocked outright. Rather, the compiler will provide warnings, informing developers of the API's impending death. What this means for you, the developer, is that you can continue to use the APIs that have been placed on death row.
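The pattern described here (keep the obsolete API working, but warn at every call site and name the replacement) is not specific to .NET. A sketch of the same idea in Python, with invented function names modeled on the Add/AddWithValue example:

```python
import functools
import warnings

def deprecated(replacement):
    """Decorator that keeps an obsolete function working but emits a
    DeprecationWarning naming its replacement, much like the .NET 2.0
    compiler warnings described in the text."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{func.__name__} has been deprecated. Use {replacement}.",
                DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorate

def add_with_value(parameter_name, value):
    """The blessed replacement API."""
    return (parameter_name, value)

@deprecated("add_with_value(parameter_name, value)")
def add(parameter_name, value):
    """Obsolete API: still works, but warns on every call."""
    return add_with_value(parameter_name, value)
```

Callers of `add` keep getting correct results; they just see a warning pointing them at `add_with_value`, which is exactly the "death knell" behavior described for deprecated .NET members.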
However, you already know that such code won't compile in .NET 2.0, so be forewarned. A full list of all the changes between versions of the .NET Framework is available from Microsoft.

As an example of this deprecation and warning system, take a look at the following code, which uses the SqlParameterCollection.Add(string, string) method signature that has been marked as obsolete in .NET 2.0:

    Sub SqlCommandAddParameter()
        Dim SqlComm As New System.Data.SqlClient.SqlCommand
        SqlComm.Parameters.Add("@Socks", "Smelly")
    End Sub

By default, the code will compile and run without any issues under .NET 2.0, but the compiler will output a warning that indicates the method signature has been marked as obsolete. The warning looks like this:

    'Public Function Add(parameterName As String, value As Object) As
    System.Data.SqlClient.SqlParameter' is obsolete: 'Add(String parameterName,
    Object value) has been deprecated. Use AddWithValue(String parameterName,
    Object value).'

Think of the warning as a death knell ringing on the APIs that have been marked as obsolete. To be completely accurate, the preceding code may or may not compile, depending on the settings of your build environment and whether warnings are treated as errors. If it doesn't compile cleanly, you'll need to change the code to use new or alternative methods suggested by the compiler in the error message. If you find yourself receiving compiler warnings, change your code. It's not worth the hassle down the line after you've forgotten all about the code and then find yourself needing to change its functionality or fix a bug (not that our code ever has any...), or discovering that it won't even compile on future versions of .NET.

If you don't know what the Generic Factory Model is and you develop against different database servers, then you're in for a real treat. Microsoft has outdone themselves with ADO.NET 2.0 and come to the rescue of all multiplatform database developers.
One day you might be developing against a SQL server; the next you might be developing against an Oracle server. It's possible down the line you'll be asked to develop against a fridge freezer. Whatever your data source, ADO.NET 2.0 gives you a provider-agnostic platform on which to build your applications, meaning you can write your code once and have it work on any data source you wish. The Generic Factory Model is an architecture that enables access to any database from one set of code. ADO.NET 2.0 has that architecture plumbed right into the Framework, so you can use it too. Inside the System.Data.Common namespace are some lovely new classes that enable us to make platform-independent code very easily, but before we get our hands dirty, we'll quickly run through the Generic Factory Model.

During the Dark Ages (when our only Framework was .NET 1.0), there were three providers in the form of the following namespaces:

- System.Data.SqlClient
- System.Data.OleDb
- System.Data.Odbc

In those days, we programmers were encouraged by samples all across the Internet, in books, and by our peers to directly use the most appropriate set of classes from the correct namespace. Doing this was problematic, however, because after a specific provider such as SqlClient was hard-coded into the application, the code could no longer be used to look at an Oracle database server using the OracleClient provider. In other words, we were locked into a single provider, and when our bed was made — as the saying goes — we had to lie in it.
If you wanted to write platform-agnostic code in the olden days (nearly three long years ago), you'd have to use a bit of black magic, interfaces, and a switch statement:

    Public ReadOnly Property Connection() As System.Data.IDbConnection
        Get
            Select Case OldGenericFactoryHelper.Provider
                Case "SqlClient"
                    Return New System.Data.SqlClient.SqlConnection
                Case "Odbc"
                    Return New System.Data.Odbc.OdbcConnection
                Case "OleDb"
                    Return New System.Data.OleDb.OleDbConnection
                Case Else
                    Return Nothing
            End Select
        End Get
    End Property

As you can see, the method returns an interface of type IDbConnection, which is a generic implementation of the Connection class that all provider-specific classes implement (SqlConnection, OdbcConnection, and so on). This approach enabled you to code against the interfaces, rather than the specific providers, but it always felt a little dirty.

Any application employing this approach had a design that was completely platform-independent. The data access architecture is shown in Figure 1. One of the main problems with this model was that each time a provider was added to the system, the switch statement had to be altered. The fun didn't stop there, though. You also needed switch statements for all of the other provider-specific classes, such as those that implement IDbCommand, so that your applications could retrieve the right Command class (SqlCommand, OleDbCommand, OdbcCommand, and so on).

Although this wasn't a massive problem, and the approach generally worked well, the ADO.NET 2.0 team at Microsoft came up with a much better solution, called the Generic Factory Model. ADO.NET 2.0 solves the aforementioned problem by introducing Factories into the Framework. Just like a real factory, a Factory takes in raw materials and produces fully working products.
In this case, the raw materials are the providers we want to use, and the products are the provider-independent classes we need. The provider-independent classes include DbConnection, DbCommand, and DbParameter, as well as a whole host of other classes. The way they are used is very similar to the way they were used in the old model, but they come from a Factory built into the Framework — you don't have to write the code yourself.

The new architecture is shown in Figure 2. In other words, you no longer have to code any switch statements. Better yet, the .NET Framework will do all the hard work for you. For example, if you want a SqlClient connection object, all you need is the following:

    Public Shared Function GetConnection(ByVal providerName As String) As System.Data.Common.DbConnection
        Return System.Data.Common.DbProviderFactories.GetFactory(providerName).CreateConnection()
    End Function

If you want a command object, then it's as simple as calling CreateCommand() on the Factory returned by GetFactory(). This code is much cleaner and self-maintaining. You never have to modify it, even to use new providers — they just appear to your code automatically.

See how all of this makes life easier? We're speaking one language only, that of the provider-agnostic class implementations. Now let's go into a little more detail regarding the real-world implementation of ADO.NET 2.0 Provider Factories. The Factory is the key to the Generic Factory Model; without it, there would be no model. It's the creator of all of your classes, and it's the place where all of the real work happens. The .NET Framework is shipped with a bundle of Factory implementations in the box. They're defined in the machine.config file inside the Framework folders.
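Stripped of the .NET specifics, the factory mechanism amounts to a registry of factories keyed by provider name, where each factory hands back objects behind a common base class. A Python sketch of that shape — the class names here are illustrative, not a real data-access library:

```python
class DbConnection:
    """Provider-agnostic base class, playing the role of
    System.Data.Common.DbConnection."""
    def open(self):
        raise NotImplementedError

class SqlConnection(DbConnection):
    def open(self):
        return "sql server connection"

class OdbcConnection(DbConnection):
    def open(self):
        return "odbc connection"

class DbProviderFactory:
    """One factory per provider; hands back provider-specific objects
    behind the generic base class."""
    def __init__(self, connection_cls):
        self._connection_cls = connection_cls

    def create_connection(self):
        return self._connection_cls()

# The registry plays the role of DbProviderFactories: supporting a new
# provider means registering a factory, not editing a switch statement.
_FACTORIES = {
    "System.Data.SqlClient": DbProviderFactory(SqlConnection),
    "System.Data.Odbc": DbProviderFactory(OdbcConnection),
}

def get_factory(provider_name):
    return _FACTORIES[provider_name]
```

Calling code only ever touches `get_factory(name).create_connection()` and the `DbConnection` base class, which is the whole point of the model.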
You can access the machine.config file and take a look for yourself:

    C:\WINDOWS\Microsoft.NET\Framework\v2.0.50215\CONFIG\machine.config

There are five providers out of the box, but as with everything else in the .NET Framework, this is customizable — you can easily add your own providers to extend the list on any machine with the Framework installed. The five built-in providers are listed in the following table.

As mentioned previously, it's very easy to add your own providers. There are various places where you can define a provider so it can be used, including the machine.config, app.config, and web.config files. Just add a small section to one of the configuration files (and have the provider installed on the machines in question!). The entry takes this general shape (the values here are placeholders):

    <DbProviderFactories>
        <add name="My Provider" invariant="MyCompany.MyProvider"
             description="My custom data provider"
             type="MyCompany.MyProvider.MyProviderFactory, MyCompany.MyProvider" />
    </DbProviderFactories>

Extensibility is the operative word here. The Generic Factory Model is easy to use, easy to implement, and very easy to justify.

Decisions, decisions. The new features thrown at the feet of .NET developers in ADO.NET 2.0 are extensive and formidable, so in order to help you decide if the shift is worth it, we'll briefly run through the pros and cons of the Generic Factory Model (in comparison to directly accessing a provider such as SqlClient).

Here are some reasons to use the Generic Factory Model:

Of course, there are always some disadvantages too:

Applications can be weighed against one another in terms of maintainability, security, and performance. For some people, maintainable code is more important than secure code, whereas for others, security is the primary concern. For the majority of developers, however, performance is the only thing they need to worry about — in other words, the only thing the boss will notice.
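To see why the configuration-file route scales, it helps to notice that the whole provider registry can be built from config. A Python sketch that parses a DbProviderFactories-style fragment into a lookup table — the provider entry is a placeholder, not a real provider:

```python
import xml.etree.ElementTree as ET

# A DbProviderFactories-style fragment, as it might appear in a config
# file. The provider entry below is invented for this sketch.
CONFIG = """
<DbProviderFactories>
  <add name="My Custom Provider"
       invariant="MyCompany.MyProvider"
       description="Example provider entry"
       type="MyCompany.MyProvider.MyFactory" />
</DbProviderFactories>
"""

def load_factories(config_xml):
    """Build a provider lookup table keyed by invariant name, so adding
    a provider means editing config, never code."""
    factories = {}
    for entry in ET.fromstring(config_xml).findall("add"):
        factories[entry.get("invariant")] = {
            "name": entry.get("name"),
            "type": entry.get("type"),
        }
    return factories

factories = load_factories(CONFIG)
```

In the real Framework, the `type` attribute names the factory class to instantiate; here it is just carried along as data to show the shape of the mechanism.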
http://www.wrox.com/WileyCDA/Section/id-290415.html
Re: Benefits of upgrading to Excel2003
From: Norman Harker (njharker_at_optusnet.com.au)
Date: Fri, 26 Mar 2004 01:03:32 +1100
Recommended functions in the Function Wizard Type a natural language query, such as "How do I determine the monthly payment for a car loan", and the Function Wizard returns a list of recommended functions you can use to accomplish your task. Cut-and-paste function reference examples If you've wondered how to translate Help examples into meaningful work*** data, you'll find the cut-and-paste function examples in Excel Help useful and time saving. Task-based formula help Real-life examples for real-life numeric problems make powerful additions to the Help you've come to count on. Watch window Keep track of cells and their formulas on the Watch Window toolbar, even when the cells are out of view. This moveable toolbar tracks cell properties including workbook, work***, name, cell, value, and formula. Expanded AutoSum functionality The practical functionality of AutoSum has expanded to include a drop-down list of the most common functions. For example, you can click Average from the list to calculate the average of a selected range, or connect to the Function Wizard for more options. Formula evaluator You can see the various parts of a nested formula evaluated in the order the formula is calculated by using a simple dialog box on the Formula Auditing toolbar. Trace precedent and dependent cells with Formula Auditing Use the Formula Auditing toolbar to graphically display, or trace, the relationships between cells and formulas with blue arrows. You can trace the precedent cells or the dependent cells. Formula error checking Like a grammar checker, Excel uses certain rules to check for problems in formulas. These rules can help find common mistakes. You can turn these rules on or off individually. Work*** and workbook formatting Color-coded work*** tabs Organize your work by adding color to your work*** tabs. 
Control automatic changes with smart tags
Buttons that appear automatically on your worksheet can help you with tasks such as automatic correction options, paste options, automatic fill options, insert options, and formula error checking. With a click of a button you can choose from options related to your task without leaving the worksheet or the cells you're working on.

Unmerge on the toolbar
No more searching for a way to unmerge cells. Now unmerge is conveniently located on the Format toolbar.

Retain column widths
If you have worksheets with specified column widths, now you can paste information from another worksheet without losing that formatting by clicking the Paste Options button, and then clicking Keep Source Column Widths.

Border drawing
A new border drawing tool allows you to outline complex borders with little effort.

More new features in Excel

Everyday tasks

Find and replace
Finding and replacing data in Excel includes great new options to match formats and search an entire workbook or worksheets.

Links management
Changes to the Edit Links dialog box allow you to check the status of the links in your worksheet and make changes. A new workbook option allows you to control whether to update links in your workbook automatically.

Hyperlink navigation
Selecting a cell with a hyperlink is improved. Click the hyperlink once to follow it. Click and hold to select the cell.

Sending a range
Sending out mid-month reports and summaries just got easier. Select a range on your worksheet, click E-mail on the Standard toolbar, type an introduction to the report, and then send it without spending extra time on the task.

Insert and delete columns while filtering
You can insert and delete columns with AutoFilter turned on in Excel. You can even undo the action and preserve any applied filtering.

Speech playback
An option to have a computer voice play back data after every cell entry or after a range of cells has been entered makes verifying data entry convenient and practical.
You can even choose the voice the computer uses to read back your data. This feature is available in Chinese, Japanese, and English (U.S.) only.

Printing
You've asked for the ability to insert graphics and file names in headers and footers, and with Excel you can. You'll also find a handy A4 paper resizing option on the International tab under the Tools menu, Options command, which will scale the worksheet you formatted for A4 paper if you have letter-size paper in your printer.

Smart tags
By turning on smart tags, you can type a U.S. financial symbol and use Smart Tag Actions to insert a stock quote in your worksheet, find out more about the company you're doing business with, and more. You can also type the name of someone you've recently sent an e-mail message to into a cell, and then use smart tag options to schedule a meeting or add the name to your contacts list, all without leaving your worksheet.

Item properties in PivotTables
Online Analytical Processing (OLAP) is a powerful tool for aggregating numeric information, and now you can annotate your data with item properties to make your data warehouse even more valuable.

AutoRepublish
Anyone who frequently publishes Excel data to the Web will appreciate additional Web publishing features that allow you to automatically republish items to Web pages whenever you save a workbook with previously published items.

Open and save XML
With Excel, you can open and save Extensible Markup Language (XML) files, save entire workbooks in the XML Spreadsheet format, and create queries to XML source data.

Digital Dashboard and Web Parts
Use Excel to create Web Parts to include on your company's new Digital Dashboard. For example, you might create an updateable sales chart to highlight your division's contributions to the company's bottom line.

Worksheet protection
Excel adds power and flexibility to protect your data from changes to worksheets and cells. You can protect cell values and formulas, and allow the cell to be formatted.
You can also ensure that only specific users are allowed to change cells.

Multilingual editing
With Excel you can edit spreadsheets in any language, including right-to-left language editing in Arabic and Hebrew. Excel automatically links fonts so you don't have to figure out the language of a particular font.

IME support
If you have an Input Method Editor (IME) installed, you can edit Asian language spreadsheets in any language version of Excel.

Worldwide number formats
You can format numbers for a specific location in all language versions of Excel.

New Microsoft Office features

Everyday tasks

Office task panes
The most common tasks in Office are now organized in panes that display in place with your Office document. Continue working while you search for a file using the Search task pane, pick from a gallery of items to paste in the Office Clipboard task pane, and quickly create new documents or open files using the New File task pane that appears when you start an Office program. Other task panes vary in each Office program.

New look
Microsoft Office XP has a cleaner, simpler look to its interface. Softer colors also contribute to this updated feel.

More convenient access to Help
Get the full power of the Answer Wizard in an unobtrusive package. When you enter a question about an Office program in the Ask a Question box on the menu bar, you can see a list of choices and read a Help topic whether you are running the Office Assistant or not, in each Office program.

Updated Clip Organizer
Hundreds of new clips, an easy task pane interface, as well as the same abilities to organize clips and find new digital art on the Web are part of the updated Clip Organizer (formerly Clip Gallery).

Conceptual diagrams
Word, Excel, and PowerPoint include a new gallery of conceptual diagrams to choose from. In Word and Microsoft Outlook, you can also choose to leave text in handwritten form.
Improved fidelity of pictures and drawings
In Office XP, Word, Excel, PowerPoint, Microsoft FrontPage, and Microsoft Publisher render pictures and drawings with improved fidelity, and you work with them using File Open, File New, and File Save dialog boxes, as you would with any other Office document.

Web documents and Web sites
Target your Web publishing efforts. Open a document you've saved as a Web page in the program it was created in, right from Microsoft Internet Explorer.

Error prevention and recovery

Document recovery and safer shutdown
Documents you are working on can be recovered if the Office program encounters an error or stops responding. The documents are displayed in the Document Recovery task pane the next time you open the program.

Office Safe Mode
Microsoft Office XP programs can detect and isolate startup problems. You can bypass the problem, run your Office program in safe mode, and keep doing your work.

Office crash reporting tool
Diagnostic information about program crashes can be collected and sent to your company's information technology department or to Microsoft, allowing Product Support Services (PSS) experts to correct these problems so they don't interrupt you again.

Security

Digital signatures
You can apply a digital signature to Microsoft Word, Excel, and Microsoft PowerPoint files to confirm that the file has not been altered.

Increased protection against macro viruses
Network administrators can remove Microsoft Visual Basic for Applications, the programming language of Microsoft Office, when deploying Office. This can decrease the possibility of viruses spreading via Office documents.

Hangul/Hanja converter improvements
Over 20,000 new characters are supported by this converter for Korean language documents. The converter automatically uses new fonts that have the proper glyphs for the new characters.
Full support for Windows 2000 language features - You can enter characters from East Asian languages in all Office programs, even if your system software is a non-East Asian language version. (This was previously only supported in Microsoft Word and Microsoft Outlook, or when running Windows 2000.) For example, on a computer running English (U.S.) Microsoft Windows 98, you can enter Japanese characters in Excel.

What's new in Excel 2003:

List functionality: In Microsoft Office Excel 2003, you can create lists in your worksheet to group and act upon related data. You can create a list on existing data or create a list from an empty range. When you specify a range as a list, you can easily manage and analyze the data independent of other data outside of the list. Additionally, information contained within a list can be shared with others through integration with Microsoft Windows SharePoint Services.

A new user interface and a corresponding set of functionality are exposed for ranges that are designated as a list. Every column in the list has AutoFilter enabled by default in the header row, which allows you to quickly filter or sort your data. The dark blue list border clearly outlines the range of cells that compose your list. The row in the list frame that contains an asterisk is called the insert row. Typing information in this row will automatically add data to the list. A total row can be added to your list. When you click on a cell within the total row, you can pick from a drop-down list of aggregate functions. You can modify the size of your list by dragging the resize handle found on the bottom right corner of the list border.

Integration with Windows SharePoint Services: Excel lists allow you to collaborate on the information contained within a list with seamless integration with Windows SharePoint Services. You can create a SharePoint list based on your Excel list on a SharePoint site by publishing the list.
If you choose to link the list to the SharePoint site, any changes you make to the list in Excel will be reflected on the SharePoint site when you synchronize the list. You can also use Excel to edit existing Windows SharePoint Services lists. You can modify the list offline and then synchronize your changes later to update the SharePoint list.

Improved statistical functions: Aspects of the following statistical functions, including rounding of results and precision, have been enhanced: ZTEST. Note: The results of the preceding functions may be different than in previous versions of Microsoft Excel.

XML support: Industry-standard XML support in Microsoft Office Word 2003, Microsoft Office Excel 2003, and Microsoft Office Access 2003 streamlines the process of accessing and capturing information between PCs and back-end systems, unlocking information, and allowing for the creation of integrated business solutions across the organization and between business partners. With XML support in Excel, your data can be exposed to external processes in a business-centric XML vocabulary. XML enables you to organize and work with workbooks and data in ways that were previously impossible or very difficult. By using your XML schemas, you can now identify and extract specific pieces of business data from ordinary business documents. You can attach a custom XML schema to any workbook. Then, you use the XML Source task pane to map cells to elements of the schema. Once you have mapped the XML elements to your worksheet, you can seamlessly import and export XML data into and out of the mapped cells.

Smart documents: For example, your company may have a process for filling out annual employee expense forms, and you may already use a Microsoft Office Excel 2003 template for this purpose. If that template is turned into a smart document, it can be connected to a database that automatically fills in some of the required information, such as your name, employee number, manager's name, and so on.
When you complete the expense report, the smart document can display a button that allows you to send it on to the next step in the process. Because the smart document knows who your manager is, it can automatically route itself to that person. And, no matter who has it, the smart document knows where it is in the expense review process and what needs to happen next.

Smart documents can help you reuse existing content. For example, accountants can use existing boilerplate when creating billing statements. Smart documents can make it easier to share information. They can interact with a variety of databases and use BizTalk for tracking workflow. They can even interact with other Microsoft Office applications. For example, you can use smart documents to send e-mail messages through Microsoft Outlook, all without leaving the workbook or starting Outlook.

Document Workspaces: Use Document Workspaces to simplify the process of co-authoring, editing, and reviewing documents with others in real-time through Microsoft Office Word 2003, Microsoft Office Excel 2003, Microsoft Office PowerPoint 2003, or Microsoft Office Visio 2003. A Document Workspace site is a Microsoft Windows SharePoint Services site that is centered around one or more documents. People can easily work together on the document - either by working directly on the Document Workspace copy or by working on their own copy, which they can update periodically with changes that have been saved to the copy on the Document Workspace site.

Typically, you create a Document Workspace when you use e-mail to send a document as a shared attachment. As the sender of the shared attachment, you become the administrator of the Document Workspace, and all the recipients become members of the Document Workspace, where they are granted permission to contribute to the site. Another common way to create a Document Workspace is to use the Shared Workspace task pane (Tools menu) in a Microsoft Office 2003 program.
When you use Word, Excel, PowerPoint, or Visio to open a local copy of the document on which the Document Workspace is based, the Office program periodically gets updates from the Document Workspace and makes them available to you. If the changes to the workspace copy conflict with changes you've made to your copy, you can choose which copy to keep. When you are finished editing your copy, you can save your changes to the Document Workspace, where they are available for other members to incorporate into their copy of the document.

Information Rights Management: Authors use the Permission dialog box (File | Permission | Do Not Distribute, or Permission on the Standard toolbar) to give users Read and Change access, as well as to set expiration dates for content. Note: You can create content with restricted permission using Information Rights Management only in Microsoft Office Professional Edition 2003, Microsoft Office Word 2003, Microsoft Office Excel 2003, and Microsoft Office PowerPoint 2003.

Compare workbooks side by side: Using one workbook to view changes made by multiple users can be difficult, but a new approach is now available - comparing workbooks side by side.

More new features:

New look for Office - Microsoft Office 2003 has a new look that's open and energetic. Additionally, new and improved task panes are available to you. New task panes include Getting Started, Help, Search Results, Shared Workspace, Document Updates, and Research.

Tablet PC support - On a Tablet PC, you can quickly provide input using your own handwriting directly into Office documents as you would using a pen and a printout. Additionally, you can now view task panes horizontally to help you do your work on the Tablet PC the way you want to do your work.

Research task pane - The new Research task pane offers a wide variety of reference information and expanded resources if you have an Internet connection.
You can conduct research on topics using an encyclopedia, Web search, or by accessing third-party content.

Microsoft Office Online - Microsoft Office Online is better integrated in all Microsoft Office programs so that you can take full advantage of what the site has to offer while you work. You can visit Microsoft Office Online directly from within your Web browser or use the links provided in various task panes and menus in your Office program to access articles, tips, clip art, templates, online training, downloads, and services to enhance how you work with Office programs. The site is updated regularly with new content based on direct feedback and specific requests from you and others who use Office.

Improving quality for the customer - Microsoft strives to improve quality, reliability, and performance of Microsoft software and services. The Customer Experience Improvement Program allows Microsoft to collect information about your hardware configuration and how you use Microsoft Office programs and services to identify trends and usage patterns. Participation is optional, and data collection is completely anonymous. Additionally, error reporting and error messages have been improved so that you are provided with the easiest approach to reporting errors and the most helpful information about alerts at the time you encounter a problem. Finally, with an Internet connection, you can give Microsoft customer feedback about an Office program, help content, or Microsoft Office Online content. Microsoft is continually adding and improving content based on your feedback.

--
Regards
Norman Harker MVP (Excel)
Sydney, Australia
njharker@optusnet.com.au
Excel and Word Function Lists (Classifications, Syntax and Arguments) available free to good homes.

"Davis" <anonymous@discussions.microsoft.com> wrote in message news:1328501c4125a$701cf8c0$a501280a@phx.gbl...
> Are there major benefits upgrading from Excel 2000 to 2003?
http://www.tech-archive.net/Archive/Excel/microsoft.public.excel.misc/2004-03/7890.html
Red, green, and refactor. Let's fix the copy-pasted rotateBy. We can extract out common parts by simply accepting a function Piece => Piece:

def moveLeft() = transformPiece(_.moveBy(-1.0, 0.0))
def moveRight() = transformPiece(_.moveBy(1.0, 0.0))
def rotateCW() = transformPiece(_.rotateBy(-math.Pi / 2.0))
private[this] def transformPiece(trans: Piece => Piece): this.type = {
  validate(
      trans(currentPiece),
      unload(currentPiece, blocks)) map { case (moved, unloaded) =>
    blocks = load(moved, unloaded)
    currentPiece = moved
  }
  this
}

This gets rid of the moveBy and rotateBy in a single shot! Run the tests again to make sure we didn't break anything.

[info] Passed: : Total 4, Failed 0, Errors 0, Passed 4, Skipped 0

Stage class is shaping up to be a nice class, but I really don't like the fact that it has two vars in it. Let's kick the state out into its own class so we can make Stage stateless.

case class GameState(blocks: Seq[Block], gridSize: (Int, Int), currentPiece: Piece) {
  def view: GameView = GameView(blocks, gridSize, currentPiece.current)
}

Let's define a newState method to start a new state:

def newState(blocks: Seq[Block]): GameState = {
  val size = (10, 20)
  def dropOffPos = (size._1 / 2.0, size._2 - 3.0)
  val p = Piece(dropOffPos, TKind)
  GameState(blocks ++ p.current, size, p)
}

We can now think of each "move" as a transition from one state to another instead of calling methods on an object.
We can tweak the transformPiece to generate transition functions:

val moveLeft = transit { _.moveBy(-1.0, 0.0) }
val moveRight = transit { _.moveBy(1.0, 0.0) }
val rotateCW = transit { _.rotateBy(-math.Pi / 2.0) }
private[this] def transit(trans: Piece => Piece): GameState => GameState =
  (s: GameState) => validate(s.copy(
      blocks = unload(s.currentPiece, s.blocks),
      currentPiece = trans(s.currentPiece))) map { case x =>
    x.copy(blocks = load(x.currentPiece, x.blocks))
  } getOrElse {s}
private[this] def validate(s: GameState): Option[GameState] = {
  val size = s.gridSize
  def inBounds(pos: (Int, Int)): Boolean =
    (pos._1 >= 0) && (pos._1 < size._1) && (pos._2 >= 0) && (pos._2 < size._2)
  if (s.currentPiece.current map {_.pos} forall inBounds) Some(s)
  else None
}

This feels more like a functional style. The type signature makes sure that transit does in fact return a state transition function. Now that Stage is stateless, we can turn it into a singleton object. The specs need a few modifications:

import com.eed3si9n.tetrix._
import Stage._

val s1 = newState(Block((0, 0), TKind) :: Nil)
def left1 =
  moveLeft(s1).blocks map {_.pos} must contain(exactly(
    (0, 0), (3, 17), (4, 17), (5, 17), (4, 18)
  )).inOrder
def leftWall1 = sys.error("hmmm")
  // stage.moveLeft().moveLeft().moveLeft().moveLeft().moveLeft().
  //   view.blocks map {_.pos} must contain(exactly(
  //   (0, 0), (0, 17), (1, 17), (2, 17), (1, 18)
  // )).inOrder
def right1 =
  moveRight(s1).blocks map {_.pos} must contain(exactly(
    (0, 0), (5, 17), (6, 17), (7, 17), (6, 18)
  )).inOrder
def rotate1 =
  rotateCW(s1).blocks map {_.pos} must contain(exactly(
    (0, 0), (5, 18), (5, 17), (5, 16), (6, 17)
  )).inOrder

The mutable implementation of moveLeft returned this so we were able to chain the calls. How should we handle leftWall1? Instead of methods, we now have pure functions. These can be composed using Function.chain:

def leftWall1 =
  Function.chain(moveLeft :: moveLeft :: moveLeft :: moveLeft :: moveLeft :: Nil)(s1).
    blocks map {_.pos} must contain(exactly(
    (0, 0), (0, 17), (1, 17), (2, 17), (1, 18)
  )).inOrder

Function.chain takes a Seq[A => A] and turns it into an A => A function. We are essentially treating a tiny part of the code as data.
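As a cross-language aside (not part of the tetrix code), the same "fold a sequence of functions into one function" trick can be sketched in Python with a left fold; the `chain` helper below is hypothetical, written only to mirror what Function.chain does:

```python
from functools import reduce

def chain(fns):
    """Compose a list of A -> A functions into one A -> A function,
    applying them left to right, like Scala's Function.chain."""
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

# Treat a sequence of moves as data: five "move left" steps.
move_left = lambda pos: (pos[0] - 1.0, pos[1])
five_lefts = chain([move_left] * 5)

print(five_lefts((4.0, 17.0)))  # (-1.0, 17.0)
```

An empty list composes to the identity function, just as Function.chain folds over an identity starting value.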
http://eed3si9n.com/tetrix-in-scala/refactoring.html
[bionic] mlx5: reading SW stats through ifstat causes kernel crash

Bug Description

Description of problem: Attempting to read SW stats (ifstat -x cpu_hit) on a system with probed VFs will crash the system.

How reproducible: Always

Steps to Reproduce:
1. Create a VF: echo 1 > /sys/bus/
2. Read SW stats: ifstat -x cpu_hit

Actual results: System will crash
Expected results: No system crash

Additional info: The reason for the crash is an insufficient check in the helper that determines whether a netdev is a VF rep.

Will crash:
$ ifstat -x cpu_hit

Will not crash:
$ ifstat -x cpu_hit $VF_REP
$ ifstat -x cpu_hit $UPLINK_REP

We already have a fix for this issue, and it is accepted upstream. https:/

CVE References

I built a test kernel with commit 8ffd569aaa81. The test kernel can be downloaded from: http://

Can you test this kernel and see if it resolves this bug?

Note about installing test kernels:
* If the test kernel is prior to 4.15 (Bionic) you need to install the linux-image and linux-image-extra .deb packages.
* If the test kernel is 4.15 (Bionic) or newer, you need to install the linux-modules, linux-modules-extra and linux-image-

Thanks in advance!

Thank you for your concern. I already sent the patch to the canonical kernel mailing list, and am waiting for them to review it.
Thanks, Talat

The patch fixes the issue and we tested it.
Thanks, Talat

Setting to fix-released for devel as the patch was part of v4.18 upstream and by that included in Cosmic/18.10-42.45

---------------
linux (4.15.0-42.45) bionic; urgency=medium

  * linux: 4.15.0-42.45 -proposed tracker (LP: #1803592)

  * [FEAT] Guest-dedicated Crypto Adapters (LP: #1787405)
    - KVM: s390: reset crypto attributes for all vcpus
    - KVM: s390: vsie: simulate VCPU SIE entry/exit
    - KVM: s390: introduce and use KVM_REQ_
    - KVM: s390: refactor crypto initialization
    - s390: vfio-ap: base implementation of VFIO AP device driver
    - s390: vfio-ap: register matrix device with VFIO mdev framework
    - s390: vfio-ap: sysfs interfaces to configure adapters
    - s390: vfio-ap: sysfs interfaces to configure domains
    - s390: vfio-ap: sysfs interfaces to configure control domains
    - s390: vfio-ap: sysfs interface to view matrix mdev matrix
    - KVM: s390: interface to clear CRYCB masks
    - s390: vfio-ap: implement mediated device open callback
    - s390: vfio-ap: implement VFIO_DEVICE_
    - s390: vfio-ap: zeroize the AP queues
    - s390: vfio-ap: implement VFIO_DEVICE_RESET ioctl
    - KVM: s390: Clear Crypto Control Block when using vSIE
    - KVM: s390: vsie: Do the CRYCB validation first
    - KVM: s390: vsie: Make use of CRYCB FORMAT2 clear
    - KVM: s390: vsie: Allow CRYCB FORMAT-2
    - KVM: s390: vsie: allow CRYCB FORMAT-1
    - KVM: s390: vsie: allow CRYCB FORMAT-0
    - KVM: s390: vsie: allow guest FORMAT-0 CRYCB on host FORMAT-1
    - KVM: s390: vsie: allow guest FORMAT-1 CRYCB on host FORMAT-2
    - KVM: s390: vsie: allow guest FORMAT-0 CRYCB on host FORMAT-2
    - KVM: s390: device attrs to enable/disable AP interpretation
    - KVM: s390: CPU model support for AP virtualization
    - s390: doc: detailed specifications for AP virtualization
    - KVM: s390: fix locking for crypto setting error path
    - KVM: s390: Tracing APCB changes
    - s390: vfio-ap: setup APCB mask using KVM dedicated function
    - s390/zcrypt: Add ZAPQ inline function.
    - s390/zcrypt: Review inline assembler constraints.
    - s390/zcrypt: Integrate ap_asm.h into include/asm/ap.h.
    - s390/zcrypt: fix ap_instructions
    - s390/zcrypt: remove VLA usage from the AP bus
    - s390/zcrypt: Remove deprecated ioctls.
    - s390/zcrypt: Remove deprecated zcrypt proc interface.
    - s390/zcrypt: Support up to 256 crypto adapters.
    - [Config:] Enable CONFIG_

  * Bypass of mount visibility through userns + mount propagation (LP: #1789161)
    - mount: Retest MNT_LOCKED in do_umount
    - mount: Don't allow copying MNT_UNBINDABLE|

  * CVE-2018-18955: nested user namespaces with more than five extents
    incorrectly grant privileges over inode (LP: #1801924) // CVE-2018-18955
    - userns: also map extents in the reverse map to kernel IDs

  *

 -- Thadeu Lima de Souza Cascardo <email address hidden> Thu, 15 Nov 2018 17:01:46 ...
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1799049
Log message: Remove references to Python 3.4

Log message: py-zeep: updated to 3.3.1

3.3.1:
- Fix issue with empty xsd:import statements on Python 2.7

3.3.0:
- Extend the force_https flag to also force loading xsd files from https when a http url is encountered from a https domain
- Fix handling recursive xsd imports when the url's are enforced from http to https.
- Fix reference attribute when using the Binary Security Token
- Add support for the WSAM namespace

3.2.0:
- Fix abstract message check for NoneType before attempting to access parts
- Improve support for 'Chameleon' XSD schemas
- Fix resolving qualified references
- Fix issue with duplicate soap:body tags when multiple parts used
- Fix Choice with unbound Any element
- Add xsd_ignore_sequence_order flag
- Add support for timestamp token in WSS headers
- Accept strings for xsd.DateTime

3.1.0:
- Fix SSL issue with TornadoAsyncTransport
- Fix passing strict keyword in XML loader
- Update documentation

3.0.0: This is a major release, and contains a number of backwards incompatible changes to the API.
- Refactor the settings logic in Zeep. All settings are now configured using the zeep.settings.Settings() class.
- Allow control of defusedxml settings via zeep.Settings
- Add ability to set specific http headers for each call
- Skip the xsd:annotation element in the all:element
- Add Settings.force_https as option so that it can be disabled
- Strip spaces from QName's when parsing xsd's
- Fix DateTime parsing when only a date is returned
- Fix handling of nested optional any elements
- Check if part exists before trying to delete it

Log message: py-twine: updated to 1.13.0

Twine is a utility for publishing Python packages on PyPI. It provides build system independent uploads of source and binary distribution artifacts for both new and existing projects.

Log message: py-zeep: updated to 2.5.0

2.5.0:
- Fix AnyType value rendering by guessing the xsd type for the value
- Fix AnySimpleType.xmlvalue() not implemented exception
- Add __dir__ method to value objects returned by Zeep
- Don't require content for 201 and 202 status codes
- Fix wheel package by cleaning the build directory correctly
- Handle Nil values on complexType with SimpleContent elements
- Add Client.namespaces method to list all namespaces available
- Improve support for auto-completion

Log message: py-zeep: update to 2.4.0

2.4.0:
- Add support for tornado async transport via gen.coroutine
- Check if soap:address is defined in the service port instead of raising an exception
- Update packaging (stop using find_packages())
- Properly handle None values when rendering complex types
- Fix generating signature for empty wsdl messages
- Support passing strings to xsd:Time objects

Log message: Reset maintainer

Log message: Add python-3.6 to incompatible versions.

Log message: Mark as not for Python 3.x due to devel/py-cached-property
http://pkgsrc.se/net/py-zeep
Java 8 Hashmaps, Keys and the Comparable Interface

A quick 101 for what to do in Java 8 in case of hash collisions.

Java 8 is coming with a lot of improvements/enhancements compared to the previous version. Quite a few classes have been updated, and HashMap—as one of the most used data structures—is no exception. In this post, we are going to discover a new, important feature that Java 8 brings to us in case of hash collisions.

First of all, what is the easiest way to create collisions in a HashMap? Of course, let's create a class that has its hash function messed up in the worst way possible: a hashCode() implementation that returns a constant value. I usually ask people during technical interviews what happens in such a case. Very often the candidates think that the map will contain one and only one entry, as an older entry will always be overwritten by the newer one. That, of course, is not true. Hash collisions do not cause a HashMap to overwrite entries; that only happens if we try to put two entries with keys equal based on their equals() method. Entries with non-equal keys and the same hash code will end up in the same hash bucket in some kind of data structure. (The original article shows a diagram of a Java 7 bucket holding colliding entries in a linked list.)

To begin with, let's write a small application that simulates those collisions. The demo application here is a bit exaggerated in the sense that it generates far more collisions than we'd normally have in a real-world application, but still, it's important to prove a point. In our example, we are going to use a Person object as keys in the map, while the values will be Strings.
Let’s see the implementation for the Person object, with a first name, a last name, and an Id in the form of a UUID object: public class Person { private String firstName; private String lastName; private UUID id; public Person(String firstName, String lastName, UUID id) { this.firstName = firstName; this.lastName = lastName; this.id = id; } @Override public int hashCode() { return 5; } @Override public boolean equals(Object obj) { // ... pertty good equals here taking into account the id field... } } And now let’s generate some collisions: private static final int LIMIT = 500_000; private void fillAndSearch() { Person person = null; Map<Person, String> map = new HashMap<>(); for (int i=0;i<LIMIT;i++) { UUID randomUUID = UUID.randomUUID(); person = new Person("fn", "ln", randomUUID); map.put(person, "comment" + i); } long start = System.currentTimeMillis(); map.get(person); long stop = System.currentTimeMillis(); System.out.println(stop-start+" millis"); } I ran this code on a pretty good machine, and it took 2.5 hours to complete, with the final search taking ~40 milliseconds. Now, without any prior explanation, let’s make a tiny change to our Person class: let’s make it implement Comparable<Person>, and add the following method: @Override public int compareTo(Person person) { return this.id.compareTo(person.id); } Now, let’s run the map filler method one more time. On my machine, it completes in under 1 minute, with a final searching of zero milliseconds, aka we’ve made it 150 times faster! As I mentioned, Java 8 comes with many improvements and that also affects HashMap. In Java 7, colliding entries were kept in a linked list inside the bucket. Starting from version 8, if the number of collisions is higher than a certain threshold (8), and the map’s capacity is larger than another trhreshold (64), the HashMap implementation will convert that linked list into a binary tree. Aha! 
So in case of non-comparable keys the resulting tree is unbalanced, while in the other case it's better balanced, right? Nope. The tree implementation inside the HashMap is a Red-Black tree, which means it will always be balanced. I even wrote a tiny reflection-based utility to see the height of the resulting trees. For 50,000 entries (I had no courage to let it run longer) both versions (with and without Comparable keys) yielded a tree height of 19.

Ok, then how come there's such a big difference? When the HashMap implementation tries to find the location of a new entry in the tree, it first checks whether the current and the new values are easily comparable (Comparable interface) or not. In the latter case, it has to fall back to a comparison method called tieBreakOrder(Object a, Object b). This method tries to compare the two objects based on class name first, and then using System.identityHashCode. This is really tedious for the 500,000 entries that we create in the test. However, when the key implements Comparable, the process is much simpler. The key itself defines how it compares to other keys, so the whole insertion/retrieval process speeds up. It's worth mentioning that the same tieBreakOrder method is used in the case when two Comparable keys turn out to be equal according to the compareTo method (aka the method returns 0).

In retrospect: in Java 8 HashMaps, buckets with too many entries will be treeified once a threshold value is met. The resulting tree is a balanced Red-Black tree in which keys that implement Comparable can be inserted and/or removed much more easily than non-comparable ones. Usually, for buckets with not so many collisions, this whole Comparable / not comparable distinction will make no practical difference, but I hope now it's a bit more clear how HashMaps work internally.

Published at DZone with permission of Tamás Györfi, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
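An aside on the earlier interview question: the claim that colliding keys are chained rather than overwritten holds for hash tables generally, not just Java's HashMap. Here is a Python analogue of the constant-hash Person (illustrative only; note that CPython resolves collisions by open addressing, not by treeified buckets, so only the "no overwrite" behaviour carries over):

```python
import uuid

class Person:
    """All instances share one hash code, so every insert collides."""
    def __init__(self, first, last, pid):
        self.first, self.last, self.pid = first, last, pid

    def __hash__(self):
        return 5                      # worst possible hash function

    def __eq__(self, other):
        return isinstance(other, Person) and self.pid == other.pid

# 1000 distinct UUIDs => 1000 unequal keys, all with hash 5.
m = {Person("fn", "ln", uuid.uuid4()): f"comment{i}" for i in range(1000)}
print(len(m))  # 1000 -- colliding entries coexist, nothing is overwritten
```

Only putting a key that is equal() (here: same pid) to an existing one replaces an entry, exactly as in the Java explanation above.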
https://dzone.com/articles/java-8-hashmaps-keys-and-the-comparableinterface
Clustering has already been discussed in plenty of detail, but today I would like to focus on a relatively simple but extremely modular clustering technique, hierarchical clustering, and how it could be applied to ETFs. We'll also be able to review the Python tools available to help us with this.

Clustering suitability

First of all, ETFs are well suited for clustering, as they are each trying to replicate market returns by following a market's index. We can therefore expect to find clear clusters. The advantage of using hierarchical clustering here is that it allows us to define the precision of our clustering (number of clusters) after the algorithm has run. This is a clear advantage compared to other unsupervised methods, as it will ensure impartial and equidistant clusters, which is important for good portfolio diversification.

The data used here are the daily series of the past weeks' returns. This ensures stationarity and allows for better series comparison. The prices used are all in US dollars, from September 2011 to December 2017, to try and capture different market conditions while keeping a high number of ETFs (790).

How to begin

The first step is to calculate all the pairwise distances between the series. The Scipy package provides an efficient implementation to do this with the pdist function, and includes many distances. Here I compared all the applicable ones to calculate distances between 2 numerical series. To compare them, I decided to use the cophenetic distance, which is (very briefly) a value ranging from 0 to 1 and allows us to determine how well the pairwise distances between the series compare (correlate) to their cluster's distance. A value closer to 1 would result in better clustering, as the clusters are able to preserve original pairwise distances.
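For reference, the "cosine" metric handed to pdist below is simply one minus the cosine similarity of the two series; a minimal pure-Python version (illustrative only, in practice pdist computes this pairwise and vectorised):

```python
import math

def cosine_distance(u, v):
    """1 - cos(angle between u and v); 0 means the series point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

print(cosine_distance([1.0, 2.0], [2.0, 4.0]))  # ~0.0, same direction
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # exactly 1.0, orthogonal
```

Because it only depends on direction, not magnitude, two return series that move together but with different volatility still come out as close, which is one reason it works well here.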
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

import scipy.cluster.hierarchy as hcl
from scipy.spatial.distance import squareform, pdist

distances = ["euclidean", "sqeuclidean", "cityblock", "cosine",
             "hamming", "chebyshev", "braycurtis", "correlation"]
for distance in distances:
    dist = pdist(returns.T.values, metric=distance)
    print(distance, hcl.cophenet(hcl.ward(dist), dist)[0])

euclidean 0.445105212991
sqeuclidean 0.636766347254
cityblock 0.449263737373
cosine 0.852746101706
hamming -0.148087351237
chebyshev 0.480135889737
braycurtis 0.486277543793
correlation 0.850386271327

Here, the cosine (as well as the correlation) distances worked best. It's then time to apply the agglomerative hierarchical clustering, which is done by the linkage function. There are a few methods for calculating between-cluster distances, and I invite you to read further about them in the description of the linkage function. In this case, I will use the ward method, which minimises the overall between-cluster distance using the Ward variance minimisation algorithm, and is often a good default choice.

dist = pdist(returns.T.values, metric="cosine")
Z = hcl.ward(dist)

The resulting matrix Z records each step of the agglomerative clustering: the first two columns are the indices of the clusters that were merged, the third column is the distance between those clusters, and the fourth column is the number of original samples contained in the newly merged cluster. Here are the 3 last merges:

print(np.array_str(Z[-3:], precision=1))

[[ 1522.  1564.     5.6   112. ]
 [ 1574.  1575.     7.    678. ]
 [ 1576.  1577.    15.8   790. ]]

Visualising the clusters

A good way to visualise this is with a dendrogram, which shows at which inter-cluster distance each merge occurred. From there, it is possible to select a distance where clusters are clear (indicated with the horizontal black lines).
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hcl.dendrogram(
    Z,
    leaf_rotation=90.,
    leaf_font_size=8.
)
plt.axhline(y=8, c='k')
plt.axhline(y=3, c='k')
plt.axhline(y=4.5, c='k')
plt.show()

As we can see, the clear numbers of clusters appear to be 2, 4 and 6 (depending on the desired level of detail). Another, more automatic, way of selecting the cluster number is to use the Elbow method and pick a number where the decrease of inter-cluster distance is the highest, which seems to occur at 2 clusters. However, this is probably too simplistic, and we can also see this occur at 4 and at 6, as shown by the second derivative of those distances (in orange).

last = Z[-12:, 2]
last_rev = last[::-1]
idxs = np.arange(1, len(last) + 1)
plt.plot(idxs, last_rev)

acceleration = np.diff(last, 2)  # 2nd derivative of the distances
acceleration_rev = acceleration[::-1]
plt.plot(idxs[:-2] + 1, acceleration_rev)
plt.ylabel("Distance")
plt.xlabel("Number of cluster")
plt.show()

Plotting the results

In order to plot the results, it is necessary to carry out some dimensionality reduction. For this, I have decided to use TSNE as it's particularly efficient to plot over 2 dimensions. However, it's a good idea to first reduce the dimensions to a reasonable number, using PCA, when the number of features is too high. This is certainly the case for time series, where each daily return is considered a dimension.

from sklearn.decomposition import PCA

pca = PCA().fit(returns.T)
plt.plot(np.arange(1, len(pca.explained_variance_ratio_) + 1, 1)[:200],
         pca.explained_variance_ratio_.cumsum()[:200])
plt.ylabel("Explained variance")
plt.xlabel("Number of component")
plt.show()

In order to get a minimum of 95% of the variance explained, it is necessary to use a minimum of 80 components.
np.where(pca.explained_variance_ratio_.cumsum() > 0.95)[0][0] + 1

80

With those reduced dimensions, we can now use t-SNE to reduce the data further, to 2 dimensions. I highly recommend this read to see how to fine-tune it (the article has some very nice interactive visualisations).

from sklearn.manifold import TSNE

pca = PCA(n_components=80).fit_transform(returns.T)
X2 = TSNE(n_components=2, perplexity=50, n_iter=1000,
          learning_rate=50).fit_transform(pca)

Finally, the clusters seem to be relatively cohesive when plotted in a two-dimensional space, so assets within each cluster can be expected to have behaved similarly across the market conditions observed since 2011. This assumption needs to be taken with a pinch of salt, of course, but it can help create a diversified portfolio by selecting assets from each cluster.

from scipy.cluster.hierarchy import fcluster

plt.figure(figsize=(25, 10))

k = 4
clusters = fcluster(Z, k, criterion='maxclust')
plt.subplot(1, 2, 1)
ax = plt.gca()
ax.scatter(X2[:, 0], X2[:, 1], c=clusters, cmap='Paired', alpha=0.3)
ax.set_title(str(k) + " clusters")
ax.set_xlabel("tsne-1")
ax.set_ylabel("tsne-2")

k = 6
clusters = fcluster(Z, k, criterion='maxclust')
plt.subplot(1, 2, 2)
ax = plt.gca()
ax.scatter(X2[:, 0], X2[:, 1], c=clusters, cmap='Paired', alpha=0.3)
ax.set_title(str(k) + " clusters")
ax.set_xlabel("tsne-1")

plt.show()

from collections import Counter
Counter(clusters)

Counter({1: 36, 2: 76, 3: 41, 4: 55, 5: 174, 6: 408})
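For readers who want to try the metric-selection step without the ETF dataset, here is a self-contained sketch of the same loop on random data (the matrix X and the shortened metric list are stand-ins for the post's returns.T.values):

```python
# Self-contained version of the metric-selection loop, run on random
# data (X stands in for the post's returns.T.values).
import numpy as np
import scipy.cluster.hierarchy as hcl
from scipy.spatial.distance import pdist

rng = np.random.RandomState(0)
X = rng.randn(30, 8)  # 30 "assets", 8 observations each

scores = {}
for metric in ["euclidean", "cityblock", "cosine", "correlation"]:
    dist = pdist(X, metric=metric)
    Z = hcl.ward(dist)
    # cophenet's first return value is the cophenetic correlation:
    # how faithfully the tree preserves the original pairwise distances
    scores[metric] = hcl.cophenet(Z, dist)[0]

best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The metric with the highest cophenetic correlation is the one whose tree best preserves the original distances, which is exactly the criterion used in the post.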
Hi,

Hello everyone,

Your response to this survey is totally voluntary and anonymous, but it would be very helpful if you do respond! Your answers will influence the future of Webware.

You may respond to the survey by clicking the following URL: When you arrive at the Advanced Survey homepage, type in survey number 4184 in the "Take A Survey" box. At the end of the survey you can see the results.

-- Chuck

I'm planning on reimplementing an old program, and I thought "oh, I'll use users, maybe I'll play around with generalizing the user management." Just the words going through my mind, yep... Then I thought about how I'd implement it, and I quickly realized that I'd implement it as a wrapper around a database row. I'd wrap it in SQLObject so that it would interact well with my other database objects -- if I was using MK I'd want it to be an MK object, or a ZODB object, or whatever. This is *very* important to me, I don't want some crappy user object that's so generic it doesn't work well with my other objects. Users are usually closely tied to everything else in the application..

Ian

On Thu, 27 Feb 2003 mso@... wrote:
>?

There's going to be online registration for the sprints as well, but it's not up yet (or, at least, it wasn't last I heard). I think it's coming soon, and it'll probably be posted to the sprints page on the PyCon wiki. When I hear about it, I'll post it here, too.

On Mon, 24 Feb 2003, Geoffrey Talvola wrote:

> Another approach is to avoid threads and use a database adapter
> specifically designed for integration with Twisted's event loop
> (I don't know if such things exist, but it seems that Twisted has
> a little of everything...). But in that case you're back to
> "contorting" your code to fit the async model.
> > Looking at some of the servlets I'm using, with significant
> > amounts of code and potentially many SQL queries involved,
> > I shudder to think at the convolutions I'd have to put into
> > place to make them fit a purely async model. It just doesn't
> > fit my brain and would make the code significantly more
> > complicated and less understandable and maintainable.
>
> All this talk led me to take another look at Twisted :) I won't like
> it[1], but then, that's because I don't like anything :) (ask Chuck).
>
> However, while I was reading the docs, I came across "Twisted
> Enterprise", which is an asynchronous wrapper around the DB API,
> with its own connection pooling and what-not:
>
> This might answer your SQL question?

On Tue, Feb 25, 2003 at 01:20:34PM -0600, Ian Bicking wrote:
| > flexible. I say this because I was able to do path variables
| > out-of-the-box (2 hours) with this tool; in Webware I've been
| > maintaining a patch for over a year, and this patch isn't ideal.
|
| Can you give a pointer to that? I'm interested in changing how paths
| are parsed in Webware, but I've never felt confident

The Twisted model has the notion of a resource. Let's start with a
simple working example:

    from twisted.web.resource import Resource
    from twisted.internet import reactor
    from twisted.web.server import Site
    from twisted.web.static import File

    class DynamicRequest(Resource):
        def isLeaf(self):
            return True
        def render(self, req):
            req.content_type = 'text/plain'
            return "uri: %s" % req.uri

    def run():
        root = Resource()
        root.putChild("dynamic", DynamicRequest())
        root.putChild("static", File("."))
        site = Site(root)
        reactor.listenTCP(8081, site)
        reactor.run()

The interface of a resource is a collection: it has a getChild(pathsegment) method that is called for every segment in the path.
So, /foo/bar/baz is somewhat equivalent to the following:

    site.resource.getChild('foo').getChild('bar').getChild('baz')

Twisted accomplishes this magic in a "private" function called getChildForRequest, which iterates over each path segment. For the current resource it first checks a mapping to see if any statically configured children (putChild) exist, and if a static child isn't found, it calls getChild() to find the next child in order. Of course, the iteration stops when isLeaf() is true.

Twisted comes with many built-in Resources, including a really nice one for doing redirects (Redirect) and one for dynamically serving up a file system (File). It also seems to have ones for handling virtual hosts, and sessions. Very cool.

For more detail... public/twisted.web.resource.Resource.html

On Tue, Feb 25, 2003 at 08:18:25PM +0100, Paul Jongsma wrote:
| Last night I was playing with Twisted to get some insight into it,
| one of the things which I tried to figure out was how to do
| path variables.
|
| Would you mind sharing your solution with me? The docs on
| Twisted are sparse and I haven't been able to figure it out
| yet..

Assume that your path is something like...

    /key:val/another:val/<servlet>

At any point in the tree, you should be able to add the following "resource" which will eat all items in the path having a colon.

    class PathArgs(Resource):
        def getChild(self, path, request):
            if not(hasattr(request, 'pathargs')):
                request.pathargs = {}
            pair = path.split(':')
            if 2 == len(pair):
                request.pathargs[pair[0]] = pair[1]
                return self
            return Resource.getChild(self, path, request)

To try it out, replace "root = Resource()" with "root = PathArgs()" in the code above. Note, you should be able to use this resource anywhere in the tree...

I'm completely astonished with the power/simplicity of this mechanism. Serious kudos to the brain who discovered it... that such a small amount of code can add a "pathargs" attribute to each request in a way that is completely pluggable is just spectacular!
Clark.
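For anyone who wants to play with the dispatch idea outside Twisted, here is a toy, framework-free sketch of the same per-segment getChild walk. The class and method names mimic the thread, but this is not Twisted's actual implementation, and the request here is just a plain dict:

```python
# Toy re-implementation of the per-segment dispatch idea from the
# thread (not Twisted's actual code): walk the path one segment at a
# time, letting each resource pick the next child.
class Resource:
    def __init__(self):
        self.children = {}

    def putChild(self, name, child):
        self.children[name] = child

    def getChild(self, segment, request):
        return self.children.get(segment, NotFound())

    def isLeaf(self):
        return False

class NotFound(Resource):
    def isLeaf(self):
        return True

class Leaf(Resource):
    def __init__(self, body):
        super().__init__()
        self.body = body

    def isLeaf(self):
        return True

class PathArgs(Resource):
    # Eat "key:value" segments into request['pathargs'], as in the thread.
    def getChild(self, segment, request):
        pair = segment.split(":")
        if len(pair) == 2:
            request.setdefault("pathargs", {})[pair[0]] = pair[1]
            return self
        return super().getChild(segment, request)

def resolve(root, path, request):
    # Analogue of Twisted's getChildForRequest: iterate over segments.
    node = root
    for segment in path.strip("/").split("/"):
        if node.isLeaf():
            break
        node = node.getChild(segment, request)
    return node

root = PathArgs()
root.putChild("hello", Leaf("hi"))
req = {}
print(resolve(root, "/color:red/size:2/hello", req).body, req["pathargs"])
```

The colon-bearing segments are consumed by PathArgs, which keeps returning itself, and the final segment falls through to the static child table, just as described above.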
No doubt you've noticed some patterns in the non-zero multiples of 9: 9, 18, 27, 36, 45,...

One thing to notice is that if you (repeatedly) add up all the digits of a multiple of 9, you always get 9 as your answer. This works immediately for many multiples of 9, like 9*14 = 126 (1 + 2 + 6 = 9); for others you need to keep squashing - if the first digit sum itself has more than one digit, sum those digits and repeat until you get a single digit answer. For example, 9 * 42 = 378 (3 + 7 + 8 = 18, which has two digits, so keep squashing: 1 + 8 = 9).

All multiples of 9 squash down to 9, which is neat. More importantly, this also works in the other direction: any positive integer that squashes to 9 is a multiple of 9. This makes squashing an easy divisibility test for 9 and makes it easy to find multiples of 9 (is 359 a multiple of 9? No, but 369 is, and so is 459).

Also: any number made by rearranging the digits of a multiple of 9 will also be a multiple of 9 (re-arranging the digits won't change the squash value). So, since I know that 882 is a multiple of 9, I also know that 288 and 828 are also multiples of 9.

Can you always squash a number? Could there ever be a number that gets bigger when you sum its digits? One (wordy) argument goes like this:

To obtain a contradiction, assume there are some positive integers greater than 9 whose digit sums are equal to or greater than the original number. Such a number would be a problem for us: we need to be sure that any number with two or more digits will always have a digit sum less than itself (so we can keep taking digit sums until the number is squashed down to a single digit). Choose n to be the smallest such troublesome number, and suppose the digit sum of n is another number k such that n <= k. Now we are going to build a new number m by taking the digits of n and changing one of them: choose a non-zero digit in a position bigger than the ones place and decrease it by 1.
For example, if our number n was 567 (which it isn't), our new number m could be 557. Now m is at least 10 less than n, but its digit sum is only one less than k (since only one digit was decreased by 1). Now m <= n - 10 < n - 1 <= k - 1, and so m is also less than its digit sum (k - 1). But n was chosen to be the smallest number with this property, and m is definitely smaller than n. So we have a contradiction: it cannot happen that the digit sum of a number is greater than the number itself.

This means that it is safe to squash: you will always get smaller and smaller numbers until you get down to the single digits.

Squashing multiples of 9 was interesting: what about patterns in other multiples? Consider positive multiples of 3: 3, 6, 9, 12, 15, 18, 21, 24, 27, ... They don't squash down to the same value, but if you try it out you'll notice a pattern: 3, 6, 9, 3, 6, 9, 3, 6, 9, .... It's easy to see this if you write the multiples in a 3 column table.

In the chart above, the first column entries all squash to 3, the second to 6, and the third to 9. Some other multiples produce similar charts, for example 6 and 12. (I noticed these squashing patterns when looking at material from the JUMP math program for grades 3 and 4, where tables like these are used to explore patterns for learning multiplication facts.)

Not all integers will have their multiples fit into this pattern. For example, 4 needs a 9 column table to show a squash pattern.

Just a brief note on the Java utility below: the while(true) statement in the squash() method relies on our assumption that digit sums get smaller - otherwise we'd potentially have an infinite loop. The digits() method is a simple example of recursion.
package squash;

import java.util.ArrayList;
import java.util.List;

public class SquashCalculator {

    public List<Integer> digits(Integer start) {
        if (start == 0) {
            return new ArrayList<Integer>();
        }
        int val = start % 10;
        List<Integer> recur = digits(start / 10);
        recur.add(val);
        return recur;
    }

    public Integer sum(List<Integer> list) {
        Integer sum = 0;
        for (Integer i : list) {
            sum += i;
        }
        return sum;
    }

    public Integer digitSum(Integer i) {
        return sum(digits(i));
    }

    public Integer squash(Integer n) {
        if (n == 0) {
            return 0;
        }
        Integer current = n;
        while (true) {
            current = digitSum(current);
            if (digits(current).size() == 1) break;
        }
        return current;
    }
}

Here's what you'll observe squashing the numbers 1 - 40:

So, the squash function maps the integers onto a structure like the one on the right below, that is very close to taking the number "mod 9". Putting the integers from 1 to 36 in a 9 column chart like the "multiples of 4" above shows this also. This suggests that there is a direct formula for the squash of a number close to its mod 9 value. It turns out this can be expressed as:

    squash(n) = 1 + ((n - 1) mod 9), for n > 0
If m is not divisible by 3 (the only factor 9 other than 1 and itself), then the squash of the multiples of m will have a period of 9 (and will repeat in a 9 column table). If m is a multiple of 9, then its multiples will be multiples of 9 also, and their squash will always be 9. We also get a divisibility test for 3: If a number's squash value is 3, 6, or 9, then that number is divisible by 3. For example, suppose n squashes to 6. That means that n is congruent to 6 mod 9, which means that there is some positive integer k such that n = 9k + 6. Since the right hand side of that equation is divisible by 3 (dividing the right by 3 gives 3k +2), so is the left hand side. But wait, there is more. Calculating squash values mod 9 has a short-cut: when you are adding up all the digits, you can throw out any multiples of 9, since they will always end up contributing zero to the final answer (because multiples of 9 have a remainder of 0 when divided by 9). This calculation is part of an error-checking technique called "casting out nines" which can be used to check arithmetic. When casting out nines, you essentially squash (ignoring 9s and multiples of 9s) the inputs and outputs of your calculation, and if they are different then you know you made a mistake. If you want to learn more about this (and there is a lot more), you should Google "digital root" which is the standard name for what I've been calling "squash."
How to access calculated fields in the create method

Hi,

In the write method I can access the current created object using the browse ORM method as follows:

    o = self.browse(cr, uid, ids, context=context)[0]

I need to do the same thing in the 'create' method, as I need to access some function fields that are not listed in the vals parameter of the create method. Please advise.

Hi,

As you access browse inside the write method, you can also access browse inside the create method, as shown below.

Inside Odoo V8:

    def create(self, vals):
        # The create method in Odoo V8 returns a browse object
        # instead of an integer ID.
        new_object = super(class_name, self).create(vals)
        new_object.xxx_field  # you can get any of the fields like this
        return new_object

Inside OpenERP V7:

    def create(self, cr, uid, vals, context={}):
        new_id = super(class_name, self).create(cr, uid, vals, context=context)
        new_obj = self.browse(cr, uid, new_id, context=context)
        new_obj.xxx_field  # you can access any field like this
        return new_id

I hope it is helpful to you.

Thx for answer, I found it useful and applied it in my code
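The V8 pattern from the answer can be sketched without Odoo at all. The classes below are simplified stand-ins for Odoo's ORM (the computed field "total" is made up for illustration); the point is only that when the base create() returns a record object, an override can read any field, including computed ones, before returning:

```python
# Framework-free sketch of the v8-style override pattern: the base
# create() returns a record object, so the override can read any
# field (including "computed" ones) before returning. These classes
# are stand-ins, not Odoo's real ORM.
class Record:
    def __init__(self, vals):
        self.__dict__.update(vals)
        # stand-in for a computed/function field
        self.total = vals.get("qty", 0) * vals.get("price", 0)

class BaseModel:
    def create(self, vals):
        return Record(vals)

class MyModel(BaseModel):
    def create(self, vals):
        record = super().create(vals)  # returns the record, not an id
        print("computed total:", record.total)
        return record

rec = MyModel().create({"qty": 3, "price": 10})
```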
Database Integration with Microsoft BizTalk Server 2002

Click here to download sample - BTS_WP_DBIntegration Samples.exe. (687 KB)
Click here to download sample - XML for SQL.exe. (1.06 MB)
Click here to download sample - setup.exe. (4.53 MB)

Scott Woodgate
Microsoft Corporation

May 2002

Applies to: Microsoft® BizTalk® Server 2002, Microsoft SQL Server™ 2000

Summary: How to integrate BizTalk Server 2002 with databases in general, and with Microsoft SQL Server 2000 in particular, and how to process XML into and out of a database. (39 printed pages)

Download Bts_wp_dbintegration_samples.exe.

Contents

Introduction
BizTalk Server Database Integration
Using ADO in BizTalk Server Database Integration
Leveraging SAX2 for Large Data Files
Integration with SQL Server
Overview of XML and SQL Server
Using FOR XML and OPENXML
Using Updategrams
Using SQL Server XML View Mapper
Leveraging Stored Procedures and DTS in SQL Server 2000
References

Introduction

Business-to-business data processing requires data interchange that uses XML as the ubiquitous, extensible, and platform-independent transform format. The challenge is how to reconcile the requirements of relational data stores and hierarchical XML data. This article focuses on providing reusable samples and techniques for integrating Microsoft® BizTalk® Server 2002 and databases, with specific reference to leveraging Microsoft SQL Server™ 2000. A number of integration options are discussed.
These include:

- Using Microsoft ActiveX® Data Objects (ADO) for integration with generic databases
- Leveraging the most recent version of the Simple API for XML (SAX2) for integrating large data files with generic databases
- Using the FOR XML clause and the OPENXML keyword for integration with SQL Server 2000
- Using Updategrams for integration with SQL Server 2000
- Using SQL Server XML View Mapper for integration with SQL Server 2000
- Leveraging Data Transformation Services (DTS) in SQL Server 2000

Throughout this paper, samples are provided to assist in demonstrating and explaining the concepts. These samples follow a common structure to enable you to make comparisons. Included with this article is a Samples directory (contained in the download) that contains several folders. Each folder contains sample files for a specific section. For example, sample files for the section "Leveraging SAX2 for Large Data Files" are contained in the directory Samples\SAX2. The samples assume that BizTalk Server is installed on your C drive, and you should unzip the Samples directory to your C drive. If you have installed BizTalk Server on another drive, you need to unzip the Samples directory to that drive and update the file paths in the samples accordingly.

Most samples involve BizTalk Server, SQL Server, and service components such as application integration components or Microsoft Windows® Script Components. Microsoft Visual Basic® Scripting Edition (VBScript) and ADO applications are also used extensively in the samples. Many samples use SQLXML 3.0 and SQL Server XML View Mapper 1.0. You will need to download both SQLXML 3.0 and SQL Server XML View Mapper 1.0 from the Downloads section of the MSDN® Web site and install them on your computer before running the samples.

Stored procedures have been used whenever possible. It is preferable to use stored procedures for data access because of their performance advantages.
In addition, it is relatively simple to execute them from Visual Basic applications by using ADO Command objects. All examples have been simplified to make the code and the process simple to understand and follow. They are not meant to represent examples of best practices for coding.

BizTalk Server Database Integration

BizTalk Server can integrate with many types of databases. One of the most useful tools for BizTalk Server database integration is ActiveX Data Objects, or ADO. Other tools, including the Microsoft Simple API for XML (SAX) and the XML Document Object Model (XML DOM), can also be leveraged in BizTalk Server database integration.

BizTalk Server can achieve the highest level of integration with SQL Server 2000 databases. This is due to the advanced XML support that SQL Server 2000 provides. The second part of this white paper focuses on BizTalk Server integration with SQL Server 2000.

Using ADO in BizTalk Server Database Integration

ADO is a fast, powerful, and convenient mechanism for interacting from any language with many different databases, such as SQL Server databases, Oracle databases (using the Microsoft OLE DB Provider for Oracle), and DB2 or VSAM data sources (for example, using the OLE DB drivers packaged in Microsoft Host Integration Server 2000). ADO recordsets allow you to navigate the records easily, and to apply filters and bookmarks. They also provide sorting, automatic pagination, and persistence. Recordsets can be efficiently marshaled across tiers in their native and extremely compact binary format - the Advanced Data TableGram (ADTG) format. ADO versions 2.5 and later also provide capabilities for retrieving data in XML format by using the adPersistXML option, as well as for executing XML query templates to perform database insert, delete, and update operations. The ADO XML support can be leveraged in BizTalk Server database integration.
BizTalk Server 2002 uses ADO internally through the XLANG Scheduler Engine to save the state of XLANG schedules to the persistence database. You can also leverage ADO for BizTalk Server database integration through application integration components (AICs) inside BizTalk Messaging, and through script or COM components inside BizTalk Orchestration. In BizTalk Server database integration, the ADO code is normally contained in AICs.

Retrieving XML data by using the adPersistXML switch

In ADO versions 2.5 and later, you can retrieve data by using normal Transact-SQL queries and persist the data as XML by using the adPersistXML option of the ADO recordset. The ADO recordset provides a Save method that persists data to a destination such as a file or an Active Server Pages (ASP) page. By default, the ADO recordset persists data in the ADTG format using the adPersistADTG option. To persist data as an XML document you need to execute the Save method using the adPersistXML option.

The main steps for retrieving XML data are:

- Connect to a data source using the ADO Connection object.
- Instantiate the ADO Recordset object, and then open a SELECT query to retrieve data.
- Persist the data as an XML document using the Save method of the Recordset together with the adPersistXML switch.

Sample: using ADO

This sample demonstrates how to retrieve XML data from the Customers table in the Northwind database by using the ADO adPersistXML option.

Note: All sample files for this section are included in the Samples\ADO directory.
Step 1: define a Windows Script Component

Open Notepad and type the following code:

<?xml version="1.0"?>
<component>
<?component error="true" debug="true"?>
<registration
    description="ADOXML"
    progid="ADOXML.WSC"
    version="1.00"
    classid="{71f81b28-4695-4220-bd77-c21abaca02cb}">
</registration>
<public>
    <method name="GetXML">
        <PARAMETER name="sConn"/>
        <PARAMETER name="sSQL"/>
        <PARAMETER name="sFileName"/>
    </method>
</public>
<script language="VBScript">
<![CDATA[
function GetXML(sConn, sSQL, sFileName)
    Dim cn, rs
    Const adPersistXML = 1

    'Connect to DB and run SQL
    Set cn = CreateObject("adodb.Connection")
    cn.Open sConn

    'Retrieve Data
    Set rs = CreateObject("ADODB.Recordset")
    rs.Open sSQL, cn

    'Persist data as XML
    If Len(sFileName) > 0 Then
        rs.Save sFileName, adPersistXML
    Else
        rs.Save "c:\ADOXMLOut.xml", adPersistXML
    End If

    rs.Close
    cn.Close
    Set rs = Nothing
    Set cn = Nothing
end function
]]>
</script>
</component>

Save the file as ADOXML.wsc. Then register the Windows Script Component by right-clicking the .wsc file and clicking Register. You should see a dialog box saying that the component was registered successfully.
You can then view the XML data by browsing to the file, which will contain the following > <z:row <z:row <z:row <z:row </rs:data> </xml> Sample: consuming ADO-generated XML by using BizTalk Server The XML data retrieved by the ADO recordset uses a predefined format that BizTalk Server does not recognize. You can, however, convert the ADO-generated XML data into a format that can be consumed by BizTalk Server applications. ADO-generated XML combines schema and data in one XML file. From that XML file you can derive an XML data file and a schema that can be consumed by BizTalk Server. The following steps describe the conversion process, using as an example the ADO XML data shown in step 3 of the preceding section. Note All sample files for this section are included in the Samples\ADO\ADOforBTS directory. Step 1: create an XML data file Assume that you have saved retrieved XML data in a file named RetrievedADOXML.xml, and you want to create an XML data file with a root node of <Root>. Perform the following steps: - Open RetrievedADOXML.xml in Notepad. Remove all non-data portions of the file. - Change the tag <rs:data> to <Root>. Change the tag <z:row …> to <row …>. The contents now look like: <Root> <row CustomerID='ALFKI' CompanyName='Alfreds Futterkiste' ContactName='Maria Anders' Country='Germany'/> <row CustomerID='ANATR' CompanyName='Ana Trujillo Emparedados y helados' ContactName='Ana Trujillo' Country='Mexico'/> <row CustomerID='ANTON' CompanyName='Antonio Moreno Taquería' ContactName='Antonio Moreno' Country='Mexico'/> <row CustomerID='AROUT' CompanyName='Around the Horn' ContactName='Thomas Hardy' Country='UK'/> </Root> - Save the contents to a unique file, for example, ADOXMLData.xml. You have created an XML data file. Step 2: create an XML schema To create an XML schema, do the following: - Open BizTalk Editor. On the Tools menu, click Import, and then double-click the XDR Schema icon. 
In the Import XDR Schema dialog box, browse to the XML document RetrievedADOXML.xml and click Open. A schema with a root node <row> appears in the BizTalk Editor window. - Save the schema to a new file, for example, ADOXMLSchema.xml. This process creates a schema that BizTalk Server can recognize. The schema looks like: <?xml version="1.0"?> <!-- Generated by using BizTalk Editor on Sat, Nov 10 2001 04:45:40 PM – -> <!-- Microsoft Corporation (c) 2000 () --> <s:Schema <b:RecordInfo/> ... </s:ElementType> </s:Schema> Note that the schema has a root node <row>. We need to insert a new root node in the schema to match the XML data file. BizTalk Editor does not allow insertion of a new root node into an existing schema. However, we can work around this restriction by using a text editor such as Notepad. Close BizTalk Editor. - Open the schema ADOXMLSchema.xml in Notepad. In the <s:Schema . . .> line, change "Schema name" and "root_reference" from "row" to "Root". Then insert the following text into the schema: The new schema looks like: <?xml version="1.0"?> <!-- Generated by using BizTalk Editor on Sat, Nov 10 2001 04:45:40 PM – -> <!-- Microsoft Corporation (c) 2000 () --> <s:Schema <b:RecordInfo/> <s:element </s:ElementType> <s:ElementType <b:RecordInfo/> ... </s:ElementType> </s:Schema> - Save the schema to a file and then open the modified schema file in BizTalk Editor. Use the Validate Instance option on the Tools menu to validate the new schema against the XML data file (ADOXMLData.xml). Ensure that no errors occur. You have now created a data file and a schema that can be consumed by BizTalk Server. Executing XML queries by using ADO ADO can also be used to insert, update, and delete data against Microsoft SQL Server 2000 by using XML query templates. This is useful for BizTalk Server database integration because data flows through BizTalk Server in the form of XML documents. 
When executing XML queries using ADO, be aware of the following:

- You use the ADO Command object to execute XML query templates. An ADO Command object supports three dialects: Transact-SQL query, XML template query, and XPath query. The default dialect is Transact-SQL. To tell the SQLOLEDB provider that the submitted query is an XML template query, you need to set the Dialect property of the Command object to the globally unique identifier (GUID) value {5D531CB2-E6Ed-11D2-B252-00C04F681B71}. The other GUIDs are {C8B521FB-5CF3-11CE-ADE5-00AA0044773D} for a Transact-SQL query and {EC2A4293-E898-11D2-B1B7-00C04F680C56} for an XPath query.
- To receive the XML results, you need to use the ADO Stream object. Open the Stream object and assign it to the CommandStream property of the ADO Command object. (You can access the CommandStream property through the Properties collection of the Command object.) Note that the CommandStream property is a provider-specific property and is supported only by the SQLOLEDB provider.
- To retrieve data as an XML document using ADO, you can use a FOR XML clause in an XML query template that contains a reference to the Microsoft XML-SQL namespace. You then assign the XML query template to the CommandText property of the ADO Command object. You must also specify how the resulting XML fragment should be rendered as a well-formed XML document. The SQL Server 2000 OLE DB provider will use the root element of the XML query template as the root element in the resulting XML document. The following is a sample XML query template:

<ROOT xmlns:sql="urn:schemas-microsoft-com:xml-sql">
    <sql:query>
        SELECT CustomerID, CompanyName FROM Customers FOR XML AUTO
    </sql:query>
</ROOT>

- To insert, update, or delete data using XML instead of a Transact-SQL query, you can use the OPENXML clause in the XML query template, or use the Updategram and Bulk Load features provided in SQLXML 3.0.

The second part of this paper, Integration with SQL Server, contains several samples that demonstrate how to access data by using XML queries and ADO.
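To see the idea behind OPENXML-style processing - shredding attribute-centric XML rows back into relational rows - here is a small Python sketch using only the standard library. ElementTree and sqlite3 stand in for SQL Server here; the document shape follows the samples in this paper, and this is not the OPENXML syntax itself:

```python
# Stdlib sketch of what OPENXML does server-side: shred attribute-
# centric XML rows into relational inserts. ElementTree and sqlite3
# stand in for SQL Server.
import sqlite3
import xml.etree.ElementTree as ET

doc = """<Root>
  <row CustomerID="ALFKI" Country="Germany"/>
  <row CustomerID="ANATR" Country="Mexico"/>
</Root>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID TEXT, Country TEXT)")
for row in ET.fromstring(doc).findall("row"):
    # each XML attribute maps to a column, as in OPENXML's WITH clause
    conn.execute("INSERT INTO Customers VALUES (?, ?)",
                 (row.get("CustomerID"), row.get("Country")))

count = conn.execute("SELECT COUNT(*) FROM Customers").fetchone()[0]
print(count)  # -> 2
```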
Leveraging SAX2 for Large Data Files

The Simple API for XML (SAX) is an interface that allows you to write applications or application components to read data in an XML document. The SAX2 implementation, the most recently released version of SAX, provides both Microsoft Visual Basic and Microsoft Visual C++® interfaces. All examples in this section are presented in Visual Basic.

The XML Document Object Model (DOM) is one of the most commonly used technologies for processing XML documents. While the XML DOM works well for smaller XML documents, it becomes less efficient when handling large documents. This is due to the fact that the DOM needs to break an XML document into individual objects (including elements, attributes, and comments) and create the entire tree structure in memory before the document can be manipulated.

SAX offers a simpler, faster, lower-overhead, and more memory-efficient alternative to the DOM for processing XML documents. A SAX parser does not load an entire XML document into memory. Instead, it starts parsing at the beginning of a document and generates events as it encounters the various elements in the file. As a result, SAX works much better with larger documents, or with documents in which you want to perform a single operation, such as a search.

The distinction between SAX and the DOM is best illustrated by comparing them to traditional database cursors. SAX is similar to a serial cursor that is read-only and forward-only, while the DOM is representative of a standard database cursor that allows random traversal and both read and write updates.

Best uses of SAX

SAX is best used in the following situations:

- When your documents are large. The biggest advantage of SAX is that it requires significantly less memory than the DOM to process an XML document. With SAX, memory consumption does not increase with the size of the file. If you need to process large documents, SAX is the better alternative, particularly if you do not need to change the contents of the document because it has already been mapped by BizTalk Server.
- When you need to stop parsing or document processing. SAX allows you to stop processing at any time. Due to the nature of the data stream, you can create applications that fetch a specific piece of data and then stop processing the file. As a result, the resources required to perform the operation are reduced.
- When you want to retrieve small amounts of information. Many XML-based solutions require that you retrieve a specific piece of information or data element. It is not necessary to read the entire document to achieve the desired results. With SAX, your application can scan the data stream for specific contents. After the required data component is isolated, it can be passed on as a smaller document.
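Although this paper's samples use Visual Basic, the event-driven model described above can be sketched with Python's standard-library SAX bindings. The document and handler below are invented for illustration; note that nothing is ever held in memory beyond the current event:

```python
# Minimal event-driven parse with Python's stdlib SAX: elements are
# reported as a stream of events rather than built into a tree.
import xml.sax

class ContactHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.names = []

    def startElement(self, name, attrs):
        # called once per opening tag as the parser streams through
        if name == "Contact":
            self.names.append(attrs["Name"])

doc = """<Contacts>
  <Contact Name="Ann" Tel="555-0100"/>
  <Contact Name="Bob" Tel="555-0101"/>
</Contacts>"""

handler = ContactHandler()
xml.sax.parseString(doc.encode("utf-8"), handler)
print(handler.names)  # -> ['Ann', 'Bob']
```

To stop parsing early, a handler can simply raise an exception once it has the data it needs, mirroring the "stop processing at any time" point above.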
If you need to process large documents, SAX is the better alternative, particularly if you do not need to change the contents of the document because it has already been mapped by BizTalk Server.
- When you need to stop parsing or document processing. SAX allows you to stop processing at any time. Due to the nature of the data stream, you can create applications that fetch a specific piece of data and then stop processing the file. As a result, the resources required to perform the operation are reduced.
- When you want to retrieve small amounts of information. Many XML-based solutions require that you retrieve a specific piece of information or data element. It is not necessary to read the entire document to achieve the desired results. With SAX, your application can scan the data stream for specific contents. After the required data component is isolated, it can be passed on as a smaller document.

Limitation of SAX

The limitation of SAX is that it provides no random access to the document. Because the document is not in memory and the data is presented as a stream, you must handle data in the order in which it is processed.

Sample: using SAX

The following sample demonstrates a way to integrate SAX with BizTalk Server through an application integration component (AIC). The sample also shows the key performance benefits of using SAX to process both small (100–200 records) and large (10,000–20,000 records) XML files.

In the sample, a BizTalk Server receive function is used to collect XML documents from a predefined location, C:\Temp\SAXSample. The documents are then passed to a standard BizTalk Server channel and port, which are associated with a SAXSample AIC. The AIC uses SAX to sequentially process the XML documents, and uses ADO to insert the records into a SQL Server database table. The following illustration shows this process.

Figure 1.
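The event-driven model the AIC relies on can be illustrated with Python's standard xml.sax module, which exposes the same SAX2-style callbacks (startElement, characters, endElement) as the Visual Basic interfaces; the handler class and the two-record document here are invented for illustration:

```python
import xml.sax

class ContactHandler(xml.sax.ContentHandler):
    """Collect the text of every <Name> element as the stream is parsed."""

    def __init__(self):
        super().__init__()
        self.names = []
        self._buffer = None  # None unless we are inside a <Name> element

    def startElement(self, name, attrs):
        if name == "Name":
            self._buffer = []

    def characters(self, content):
        if self._buffer is not None:
            self._buffer.append(content)

    def endElement(self, name):
        if name == "Name":
            self.names.append("".join(self._buffer))
            self._buffer = None

document = b"""<Contacts>
  <Contact><Name>Alice</Name><Tel>555-0100</Tel></Contact>
  <Contact><Name>Bob</Name><Tel>555-0101</Tel></Contact>
</Contacts>"""

handler = ContactHandler()
xml.sax.parseString(document, handler)  # events fire as the stream is read
print(handler.names)
```

Because the document is never materialized as a tree, memory use stays flat no matter how many records the stream contains, which is exactly the property the sample exploits for the 10,000–20,000 record files.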
The main steps for setting up the sample are:
- Define a SQL Server database table
- Register the AIC
- Define a BizTalk Server port and channel
- Define a BizTalk Server receive function
- Run the sample

These steps are summarized in the following paragraphs.

Note All sample files for this section are included in the Samples\SAX2 directory.

Step 1: define a SQL Server database table

The SAX sample uses a Contacts table in the Pubs database to store data provided in the XML documents. The following script creates the required database table. The script is included in the file CreateContactsTable.sql in the Samples\SAX2\SQLScript samples directory.

    use pubs
    if exists (select * from dbo.sysobjects
        where id = object_id(N'[dbo].[Contacts]')
        and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[Contacts]
    GO
    CREATE TABLE [dbo].[Contacts] (
        [CompanyCode] [char] (20),
        [Name] [varchar] (50),
        [Tel] [varchar] (50),
        [Email] [varchar] (50),
        [RecordTimeStamp] [datetime] not NULL default (GetDate())
    ) ON [PRIMARY]
    GO

Using SQL Query Analyzer, define the database and table using the preceding script. Then verify that the table has been created correctly.

Step 2: register the AIC

- Click Start, point to Programs, point to Administrative Tools, and then click Component Services. The Component Services Microsoft Management Console (MMC) window appears.
- In the Component Services console tree, expand Component Services, expand Computers, expand My Computer, and then click COM+ Applications.
- Right-click COM+ Applications, point to New, and then click Application. The COM Application Install Wizard opens.
- On the Welcome to the COM Application Install Wizard page, click Next.
- On the Install or Create a New Application page, click Create an empty application. In the Enter a name for the new application box, type SAXAIC. In the Activation type area, click Server application, and then click Next.
- On the Set Application Identity page, in the Account area, click Interactive user - the current logged on user, and then click Next.
- Click Finish.
- In the console tree, expand COM+ Applications and expand the new package you created, called SAXAIC. Click the Components folder.
- Browse to the directory Samples\SAX2 and drag AIC4SAX2.dll to the Components folder of package SAXAIC in Component Services.
- Close Component Services.

Note The Visual Basic source code for AIC4SAX2.dll is provided in the same directory as AIC4SAX2.dll.

Step 3: define a BizTalk Server port and channel

- Open Windows Explorer and create the following new directory: C:\Program Files\Microsoft BizTalk Server\BizTalkServerRepository\DocSpecs\SAX2
- Copy the document specification file Contacts.xml from the Samples\SAX2\SAXSpecsandSamples directory to the new directory.
- Click Start, point to Programs, point to Microsoft BizTalk Server 2002, and then click BizTalk Messaging Manager.
- On the File menu, point to New, point to Messaging Port, and then click To an Application.
- On the General Information page, in the Name box, enter SAXSamplePort, and then click Next.
- On the Destination Application page, select Application and click New. The Organization Properties dialog box appears.
- In the Organization Properties dialog box, click the Applications tab, click Add, and then enter SAXSampleApplication in the Name box. Click OK twice to return to the Destination Application page.
- On the Destination Application page, under Application, select SAXSampleApplication from the drop-down list. In the Primary transport frame, click Browse. The Primary Transport dialog box appears.
- In the Primary Transport dialog box, select Application Integration Component from the Transport type list and then click Browse. In the Select a Component dialog box, select AIC4SAX2 Class1 from the list. Click OK twice to return to the Destination Application page.
Click Next twice to enter the Security Information page.
- On the Security Information page, select Create a channel from this messaging port, and then select From an application in the Channel type list. Click Finish. The New Channel wizard appears.
- On the General Information page, enter SAXSampleChannel as the channel name, and then click Next twice to enter the Inbound Document page.
- On the Inbound Document page, click New. In the New Document Definition dialog box, type SAXSampleDoc in the Document definition name box. Select Document Specification and click Browse. In the Select a Component dialog box, double-click the SAX2 folder, click Contacts.xml, click Open, and click OK to return to the Inbound Document page. Click Next to enter the Outbound Document page.
- On the Outbound Document page, click Browse, select SAXSampleDoc from the list, and then click OK.
- Click Next twice, and then click Finish. Finally, close the BizTalk Messaging Manager console.

Step 4: define a BizTalk Server receive function

- Open Windows Explorer and create a new directory, C:\Temp\SAXSample, as the inbound directory.
- Click Start, point to Programs, point to Microsoft BizTalk Server 2002, and then click BizTalk Server Administration.
- In the console tree, expand Microsoft BizTalk Server 2002, expand BizTalk Server Group, and then click Receive Functions.
- Right-click Receive Functions, point to New, and then click File Receive Function. The Add a File Receive Function dialog box appears.
- In the Add a File Receive Function dialog box, type SAXSampleReceiveFunction in the Name box. Type *.xml in the File types to poll for box. Type C:\Temp\SAXSample in the Polling location box.
- Click Advanced and click the Channel name drop-down list. Select SAXSampleChannel from the list. Click OK twice to finish the process.

Step 5: run the sample

- Place the sample document SAXSample_Small.xml in the inbound directory C:\Temp\SAXSample.
Verify that the new records defined in the XML document are inserted in the Contacts table of the Pubs database.
- Repeat step 1 for the document SAXSample_Large.xml and ensure that all records are inserted in the SQL Server database.

Note The current logged-on user must be a member of the BizTalk Server Administrators group and must remain logged on while running this sample.

Integration with SQL Server

The focus of this part of the article is to discuss methods for integrating BizTalk Server and SQL Server 2000. These methods combine tools to create a way to obtain XML data from SQL Server in a highly scalable manner and with minimal coding effort. It is worth noting that these techniques are generally applicable to other database technologies.

Overview of XML and SQL Server

SQL Server 2000 introduces many built-in features for XML support. These include:
- The ability to use HTTP publishing functionality
- The FOR XML clause for querying database tables and receiving the results as an XML document
- The OPENXML keyword for updating database tables from XML documents

SQL Server 2000 can return the results of SELECT statements as XML documents. The SQL Server 2000 Transact-SQL SELECT statement supports the FOR XML clause, which returns an XML document instead of a relational result set. The OPENXML keyword allows an XML document to be treated in the same way as a table or view in the FROM clause of a Transact-SQL statement. This allows inserting, updating, or deleting data by using an XML document.

The entire SQL Server XML functionality is implemented in SqlXml3.dll. The template files, annotated schema files, Extensible Stylesheet Language (XSL) files, and XPath queries are handled on the Internet Information Services (IIS) server. SqlXml3.dll translates the XPath queries against the annotated schema into SQL commands.

SQL Server HTTP publishing

To use the SQL Server HTTP publishing functionality, you must set up an appropriate virtual directory.
You can do this by using the "Configure SQL XML Support in IIS" utility to define and register a new virtual directory. This utility is shipped with SQL Server 2000, and instructs IIS to create an association between the new virtual directory and an instance of Microsoft SQL Server.

A limitation of this approach is that to access SQL Server by using HTTP, XML views of SQL Server 2000 databases must be defined by annotating XML-Data Reduced (XDR) schemas to map the tables, views, and columns associated with the elements and attributes of the schema. The XML views can then be referenced in XPath queries, which retrieve results from the database and return XML documents. This task is a tedious manual process that requires time and effort. There is no out-of-box tool available in SQL Server 2000 to automate this task. Also, this solution is based on the Internet Server Application Programming Interface (ISAPI) and is not scalable, because querying SQL Server through ISAPI yields low throughput. The following illustration shows how HTTP requests are handled.

Figure 2.

You should also consider the potential limitation of this approach. Constructing a FOR XML clause to obtain a hierarchical XML document based on SQL relational tables/views with multiple joins is a time-consuming and error-prone process. Hierarchical XML documents can get very complicated because multiple tables/views must be joined to produce the desired XML document. Some FOR XML statements are more than 30 kilobytes (KB) long. Automating this process requires a tremendous amount of resources.

Microsoft provides the following three ways to update SQL Server 2000 by using data from XML files:
- The OPENXML keyword. OPENXML ships with SQL Server 2000. It is used for updating data in databases, and is natively supported by SQL Server 2000.
- XML Updategrams. Updategrams ship with SQLXML 3.0. They give developers an XML-based approach to data modification.
- XML Bulk Load. XML Bulk Load ships with SQLXML 3.0.
It is for loading large amounts of XML data into a database.

To load small amounts (typically 100 KB) of XML data into SQL Server, OPENXML and XML Updategrams are good choices. To load large amounts (typically 100 MB) of XML data, the XML Bulk Load feature is more efficient. XML Bulk Load is similar in functionality to the bulk copy (bcp) utility and the Transact-SQL BULK INSERT statement. Unlike bcp and BULK INSERT, which accept only tabular data representations, XML Bulk Load supports loading XML hierarchies into one or more database tables. Also unlike OPENXML and XML Updategrams, XML Bulk Load uses Microsoft XML Core Services (MSXML)—formerly called Microsoft XML Parser—to process data rather than parsing the entire XML dataset into memory before processing it. Using the streaming interface lets XML Bulk Load process datasets larger than 100 MB without running out of memory.

Both XML Bulk Load and XML Updategrams are based on annotated schemas; these ISAPI-based solutions must go through IIS to complete the task of interchanging XML with SQL Server. Therefore, they are not scalable due to the performance issue.

XML Bulk Load has another limitation. When bulk-loading XML into multiple tables, the rule is that the XML document must include the primary/foreign key values. In practice, these key values are usually missing. For example, the XML purchase order file from customers (which contains Order and Order Details) usually does not contain primary/foreign key values such as OrderID.

Using FOR XML and OPENXML

SQL Server 2000 provides the FOR XML clause and the OPENXML keyword, which allow an XML document to be translated into a data format used by a relational database and vice versa. These functions allow XML data to be retrieved from or inserted into a SQL Server table.

FOR XML

Traditionally, the ActiveX Data Objects (ADO) recordset has been widely used to retrieve data from relational databases.
SQL Server 2000 extends the SELECT statement to enable the retrieval of data as an XML document through the FOR XML clause. To use the FOR XML functionality, simply append the keywords FOR XML to a SELECT statement. This indicates to the SQL Server query processor that you want the results to be returned as an XML stream instead of as a recordset. In addition to including the FOR XML keywords, you must also specify a mode to indicate the format of the XML that should be returned. This mode can be specified as RAW, AUTO, or EXPLICIT. For details about the modes, refer to SQL Server Books Online in the MSDN library.

Within the BizTalk Server context, the FOR XML clause provides a way to retrieve data as XML documents directly, without coding. This greatly simplifies the integration between BizTalk Server and SQL Server. In practice, one of the challenges in using the FOR XML clause is to create an SQL query that will return an XML document that complies with a predefined XML document schema. In the Using SQL Server XML View Mapper section, we describe an approach that simplifies the task of creating complex FOR XML queries by using the SQL Server XML View Mapper tool.

OPENXML

The OPENXML keyword is used primarily to insert data directly from an XML document into database tables. Before the OPENXML functionality can be used, the XML document provided by BizTalk Server needs to be parsed, validated as XML, and mapped into a tree structure that represents the nodes and elements of the document. This is done through the sp_xml_preparedocument stored procedure. After the document has been prepared, the SQL Server OPENXML keyword is used to create an in-memory rowset from the data tree created by sp_xml_preparedocument. This rowset can be used anywhere a table or view is used and is therefore ideal for updating or inserting data using an UPDATE or INSERT statement.
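The shredding that OPENXML performs server-side (turning an attribute-centric XML fragment into a rowset) can be mimicked client-side with Python's ElementTree, purely as an illustration of the mapping; the element and column names follow the purchase-order sample used later in this article:

```python
import xml.etree.ElementTree as ET

xml_doc = '<Orders><Order OrderID="10248" CustomerID="VINET"/></Orders>'

# Attribute-centric mapping, analogous to
# OPENXML(@iTree, 'Order', 1) WITH (OrderID INTEGER, CustomerID NCHAR(8))
columns = {"OrderID": int, "CustomerID": str}

rows = [
    {name: convert(order.get(name)) for name, convert in columns.items()}
    for order in ET.fromstring(xml_doc).iter("Order")
]
print(rows)
```

Each `Order` element becomes one row, with each attribute converted to the column type named in the WITH clause; OPENXML then exposes the result wherever a table or view could appear.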
After the XML data has been updated or inserted into the database, the sp_xml_removedocument stored procedure should be executed to reclaim the memory used by the node tree.

The OPENXML keyword offers the following benefits:
- Efficiency is increased as a result of fewer network round trips.
- The data tier is conceptually simple and easy to code. It no longer needs to be aware of the underlying database structure, because it sends XML to SQL Server as a single input parameter for a stored procedure. That knowledge now lies in the stored procedures that use OPENXML to map XML nodes to tables, rows, and columns.

Process description

Within the BizTalk Server context, an application integration component (AIC), COM component, or Windows Script Component (WSC) can be used to invoke the XML support in SQL Server. This allows an XML document to be processed through BizTalk Orchestration and BizTalk Messaging and inserted directly into a database. The following illustration shows the overall process.

Figure 3.

In the process, an XML document is passed to BizTalk Server by an AIC, a COM component, or a WSC. ADO is then used to invoke a custom stored procedure, supplying the XML document as a parameter. The custom stored procedure leverages the OPENXML functionality to insert the document into the database.

Sample: using OPENXML

This section presents a BizTalk Server implementation that inserts XML data into SQL Server 2000 by using the OPENXML keyword. The main steps are as follows:
- Create a stored procedure for generating a SQL Server database table by using the OPENXML keyword and an XML document.
- Create a stored procedure for inserting XML data into the database table by using the OPENXML keyword.
- Create a WSC, BTS2SQLOPENXML.wsc, which will be called from BizTalk Orchestration. The WSC accesses the stored procedure to insert XML data into the SQL Server database.
- Create a BizTalk Orchestration schedule, TestBTS2SQLOPENXML.skv, which passes an XML document to the BTS2SQLOPENXML.wsc component. The WSC inserts the XML data into the database table.

Note All sample files for this section are included in the Samples\OpenXML directory.

Step 1: define and test the CreatePOTableAndPopulate stored procedure

The OPENXML keyword can be used to create and populate a new database table based on an XML document. The columns of the database table correspond to the elements or attributes of the XML document. Assume that we have the following purchase order XML document:

We want to create a PurchaseOrders table in the Pubs database according to the preceding XML document. Using SQL Query Analyzer, define the Pubs database and use the following script, CreatePOTable.sql, to create the PurchaseOrders table. The script is included along with the other SQL scripts for this section in the Samples\OpenXML\SQLScripts directory.

    if exists (select * from dbo.sysobjects
        where id = object_id(N'[dbo].[PurchaseOrders]')
        and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[PurchaseOrders]
    GO
    CREATE TABLE [dbo].[PurchaseOrders] (
        [OrderID] [int] NULL ,
        [CustomerID] [nchar] (5) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
        [EmployeeID] [int] NULL ,
        [OrderDate] [datetime] NULL
    ) ON [PRIMARY]
    GO

Using SQL Query Analyzer, run the following script, CreatePOTableAndPopulate.sql, to create a new stored procedure, CreatePOTableAndPopulate.

    ...
    CREATE PROCEDURE CreatePOTableAndPopulate @xmlOrder VARCHAR(2000)
    AS
    DECLARE @iTree INTEGER
    EXEC sp_xml_preparedocument @iTree OUTPUT, @xmlOrder
    If exists (select * from dbo.sysobjects
        where id = object_id(N'[dbo].[PurchaseOrders]')
        and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[PurchaseOrders]
    SELECT * INTO PurchaseOrders
    FROM OPENXML(@iTree, 'Order', 1)
    WITH (OrderID INTEGER, CustomerID nCHAR(8), EmployeeID INTEGER, OrderDate DATETIME)
    EXEC sp_xml_removedocument @iTree
    ...
Then run the stored procedure, TestCreatePOTableAndPopulate.sql, passing in an XML document as follows:

After running the stored procedure, browse the Pubs database to verify that a new PurchaseOrders table has been created and the XML data has been inserted correctly.

Step 2: define and test the InsertPurchaseOrder stored procedure

The OPENXML keyword can also be used to insert XML data into columns of a SQL Server database table. In this case the XML data and the database table format must match each other. For example, we want to insert the following XML data into the PurchaseOrders table created in step 1:

We can create the following stored procedure (InsertPurchaseOrder.sql), InsertPurchaseOrder, to perform the task:

    ...
    CREATE PROCEDURE InsertPurchaseOrder @xmlOrder VARCHAR(2000)
    AS
    DECLARE @iTree INTEGER
    EXEC sp_xml_preparedocument @iTree OUTPUT, @xmlOrder
    INSERT PurchaseOrders (OrderID, CustomerID, EmployeeID, OrderDate)
    SELECT * FROM OPENXML(@iTree, 'Order', 1)
    WITH (
        OrderID INTEGER,
        CustomerID nCHAR(8),
        EmployeeID INTEGER,
        OrderDate DATETIME)
    EXEC sp_xml_removedocument @iTree
    ...

We then test the InsertPurchaseOrder procedure by using SQL Query Analyzer and the following script, TestInsertPurchaseOrder.sql:

After running the preceding script, verify that the XML data has been correctly inserted into the PurchaseOrders table.

Step 3: create and test the BTS2SQLOPENXML.wsc component

To access the SQL Server stored procedure from BizTalk Orchestration, we can use a WSC, a COM component, or an AIC. In this sample we use a WSC, BTS2SQLOPENXML.wsc, to run the InsertPurchaseOrder stored procedure. This WSC has one method, ExecSQL_Param, which takes three parameters: a connection string, an SQL query string, and an XML document string.
The following code implements the WSC (note that the function must assign its return value to its own name, ExecSQL_Param):

    <?xml version="1.0"?>
    <component>
    <?component error="true" debug="true"?>
    <registration
        description="BTS2SQLOPENXML"
        progid="BTS2SQLOPENXML.wsc"
        version="1.00"
        classid="{94f80128-269a-4220-bd77-c21abaca4ed3}">
    </registration>
    <public>
        <method name="ExecSQL_Param">
            <PARAMETER name="sConn"/>
            <PARAMETER name="sSQL"/>
            <PARAMETER name="sXML"/>
        </method>
    </public>
    <script language="VBScript">
    <![CDATA[
    function ExecSQL_Param(sConn, sSQL, sXML)
        Dim cn
        Dim cmd

        'Connect to the database
        Set cn = CreateObject("adodb.Connection")
        cn.Open sConn

        'Execute the stored procedure
        Set cmd = CreateObject("adodb.command")
        cmd.ActiveConnection = cn
        cmd.CommandText = sSQL & " '" & sXML & "'"
        cmd.Execute

        'Clean up
        Set cmd = nothing
        Set cn = nothing

        ExecSQL_Param = ""
    end function
    ]]>
    </script>
    </component>

After using Notepad to define the .wsc file, save it as BTS2SQLOPENXML.wsc to the Sample\OPENXML folder. Register the component by using regsvr32.exe or by right-clicking the .wsc file in Windows Explorer and clicking Register.

Now test the WSC by using the following Visual Basic Scripting Edition (VBScript) code. Open Notepad, enter the code, and save the file as TestBTSOPENXMLInsertData.vbs.

    Dim o
    Dim sXML

    Set o = CreateObject("BTS2SQLOPENXML.wsc")
    o.ExecSQL_Param "Provider=sqloledb; Data Source=(local); Initial Catalog=Pubs; Trusted_Connection=Yes;", "exec InsertPurchaseOrder", sXML
    MsgBox "Job done."
    Set o = nothing

Test the activation and execution of BTS2SQLOPENXML.wsc by running TestBTSOPENXMLInsertData.vbs, and verify that a new row has been added to the PurchaseOrders table in the Pubs database.

Step 4: define and test a BizTalk Orchestration schedule

Now we will build a simple BizTalk Orchestration schedule that passes an XML document to the WSC, which then calls the stored procedure to insert the XML data into the SQL Server database table. The BizTalk Orchestration schedule has only one action, as shown in the following illustration.
The schedule instantiates the BTS2SQLOPENXML.wsc component and calls the ExecSQL_Param method, passing in the three parameters required by the method. One of the parameters is the XML document to be inserted into the SQL Server database.

Figure 4.

The main steps for building the sample orchestration schedule are:
- Drag an Action shape onto the process and name it Test SQL OPENXML. Add an End shape to the flowchart, and then complete the flowchart as shown in the illustration.
- Bind BTS2SQLOPENXML.wsc to the Action shape by using the Windows Script Component Wizard.
- Add three constants on the orchestration data page. Details of the constants are defined in the following table. The XMLString constant contains the XML document to be inserted into the database.
- Connect each constant to the corresponding ExecSQL_Param_In message fields by drawing a line from the constant to the input parameter. The following illustration shows the data page of the sample schedule.
- Save the BizTalk Orchestration schedule as TestBTS2SQLOPENXML.skv to the Sample\OPENXML folder. Compile the schedule to generate the executable file TestBTS2SQLOPENXML.skx. Save the file to the same folder.

To test the orchestration schedule, we can create a .vbs file to launch the schedule as we did in previous sections. We can also use the XLANGMon.exe utility to launch the schedule. Here we use the latter approach for testing.

To test the orchestration schedule
- Open Windows Explorer and browse to the folder C:\Program Files\Microsoft BizTalk Server\SDK\XLANG Tools. Double-click XLANGMon.exe. The XLANG Event Monitor window appears.
- Right-click the TestBTS2SQLOPENXML.skx file and drag it onto the XLANG Scheduler node. This will run the orchestration schedule.
- Verify that a new row of data has been inserted into the PurchaseOrders table in the Pubs database.
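One caveat on the WSC used in this sample: it builds the EXEC string by plain concatenation (sSQL & " '" & sXML & "'"), so an apostrophe inside the XML would break the statement. A hedged sketch of the usual fix, doubling single quotes as T-SQL character literals require (the helper name is ours, not part of the sample):

```python
def quote_tsql_literal(value):
    """Escape a string for use as a T-SQL character literal
    by doubling embedded single quotes."""
    return "'" + value.replace("'", "''") + "'"

# An order whose XML contains an apostrophe would break naive concatenation.
xml_doc = '<Order OrderID="10250" CustomerID="O\'HARA"/>'
command = "exec InsertPurchaseOrder " + quote_tsql_literal(xml_doc)
print(command)
```

A parameterized ADO Command (appending a Parameter object instead of concatenating text) avoids the issue entirely and is generally preferable.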
Using Updategrams

SQLXML 3.0 includes a major feature, the Updategram, which allows changes described in an XML document to be applied as database inserts, updates, and deletes. An XML Updategram can be used as the source for a command against the Microsoft OLE DB Provider for SQL Server 2000.

The Updategram is a way to specify an update to a SQL Server 2000 database through XML. You specify what the XML data looks like now and what you want it to look like when the Updategram is executed. The Updategram processor automatically generates and executes the SQL statements required to produce the desired change. Inserts, updates, and deletes can all be specified with Updategrams.

The Updategram uses annotated schemas to map XML data to database tables. The annotated schema was introduced in SQL Server 2000 and is supported by BizTalk Server. Leveraging the power of annotated schemas, the Updategram is more flexible and versatile to use than the OPENXML keyword. Updategrams are submitted for processing through the same mechanisms as all XML SQL templates; that is, they are posted to ISAPI, read from a file specified in a URL, or submitted with an ADO or OLE DB command.

To perform standard SQL commands on the database, the following rules are applied to Updategrams:
- If a record instance appears only in the <before> block with no corresponding instance in the <after> block, the Updategram performs a DELETE operation.
- If a record instance appears only in the <after> block with no corresponding instance in the <before> block, the Updategram performs an INSERT operation.
- If a record instance appears in the <before> block and has a corresponding instance in the <after> block, it is an UPDATE operation. In this case, the Updategram updates the record instance to the value specified in the <after> block.

Updategrams and annotated schemas

An Updategram is based on the xml-updategram namespace and contains one or more sync elements.
Each sync element represents a transactional unit of database modifications. Updategrams describe a desired change by specifying what the relevant portion of an XML document looks like before and after the change is made. The mechanism for specifying the change is a <sync> block. Each of these update blocks defines a group of changes that are treated as an atomic unit of work. In terms of atomicity, the <sync> block defines the transaction scope for an update.

A <sync> block consists of a <before> block and an <after> block, as follows:
- The <before> block contains the data values that currently exist in an XML document, and is similar to the WHERE clause in a standard SQL statement. As in the SQL WHERE clause, the before information is used to locate which data is to be updated.
- The <after> block contains the new or updated values for the relevant data fields in the database.

The following is an example of an Updategram:

Because an Updategram is an XML document, it can easily be submitted to BizTalk Server as an interchange, and then processed to perform the required database changes.

Sample: using Updategrams

The use of Updategrams is incorporated in the example in the next section.

Using SQL Server XML View Mapper

Microsoft SQL Server XML View Mapper, or XML View Mapper, is a tool that automates the process of generating annotated schema for retrieving XML data from SQL Server. XML View Mapper, together with other SQL Server utilities, is used to create XML schemas and database queries that retrieve XML data directly from SQL Server. XML View Mapper is also a useful tool for integrating BizTalk Server and SQL Server.

Using XML View Mapper to generate an annotated schema

XML View Mapper is a mapping tool that relates XML-Data Reduced (XDR) schemas and SQL Server database schemas to generate an XML annotated schema. The XML annotated schema enables SQL Server to interact with the database based on established XDR schemas.
Within XML annotated schemas, a set of predefined annotations is used to define the links and relationships between elements and attributes in the XDR schema and tables and fields in a database. XML annotated schemas can be created by using a text editor, but the process is tedious, error prone, time consuming, and difficult to debug. XML View Mapper provides a simple, visual, declarative, and integrated environment for defining XML views on a database. No coding is required to generate annotated mapping schemas, and utilities are available to validate, test, and export the schemas that are generated.

The annotated schemas can be used to support ad hoc queries. They can also be used as the basis of an application that retrieves XML data from a SQL Server database by using the XPath navigation language. Getting XML data through an annotated schema requires IIS. This approach has a significant performance disadvantage when the volume of data exchange is high. The problem can be overcome by using a stored procedure against SQL Server to generate a targeted XML file instead of passing the annotated schema through the IIS server. This stored procedure contains the FOR XML EXPLICIT clause in the SELECT statement to obtain XML data directly from SQL Server.

Retrieving XML from SQL Server

The FOR XML EXPLICIT clause is one of the most efficient ways to retrieve XML data directly from a SQL Server database. It is, however, not an easy task to create SELECT . . . FOR XML EXPLICIT queries that return the desired XML data when the relationships between data tables become complex. XML View Mapper can be used to create annotated XML view schemas easily and visually. The XML view schemas generated in XML View Mapper are equivalent to SQL Server mapping schemas. After a schema is generated, we can test the schema by executing an XPath query against the schema in the XML View Mapper environment. We then obtain the complex FOR XML query by using the SQL Profiler tool.
The following steps summarize the process:
- Use XML View Mapper to create an annotated mapping schema for an XDR schema and a database. In practice, you usually get the XDR schema based on the target XML document from BizTalk Server.
- Test the mapping schema by using the XPath query tool in the XML View Mapper environment.
- Capture the SELECT . . . FOR XML EXPLICIT query by running a SQL Profiler trace and the XPath query.
- Extract the SELECT . . . FOR XML EXPLICIT query and insert it into a stored procedure or an ADO query.
- Incorporate the stored procedure or the ADO query into a BizTalk Server application through an AIC or WSC.

The following illustration shows the preceding steps.

Figure 6.

Sample: using XML View Mapper

Before running this sample, you need to install XML View Mapper on your computer. This sample demonstrates using the Updategram and retrieving XML data by using a complex SELECT . . . FOR XML EXPLICIT query from the Northwind database. The query is obtained by using the XML View Mapper tool and SQL Profiler. Both the Updategram and the FOR XML query are created and stored in .xml files. A WSC component and a BizTalk Orchestration XLANG schedule are also generated for testing the BizTalk Server integration with the SQL Server database.

Note All sample files for this section are included in the Samples\XMLViewMapper directory.

Step 1: create an annotated mapping schema by using XML View Mapper

If you have not used XML View Mapper, it is a good idea to go through the XML View Mapper tutorial before proceeding with this sample. Because the mapping schemas used in this sample are the same as those in the tutorial, we will bypass the process of creating the mapping schema and open the mapping provided in this sample package.

- Browse to the folder Samples\XMLViewMapper and double-click OrderForm.smp. XML View Mapper is launched with the OrderForm project loaded.
- In the Project Explorer window, expand Map Modules and double-click OrderForm-map.
A mapping screen appears.
- Expand the SQL Modules node, right-click Northwind, and then click Database Connection. The Data Link Properties dialog box appears.
- Enter the name of the SQL Server computer that you will use and the logon information, and then click Test Connection. Ensure that the test connection is successful.
- Click OK to close the dialog box.

Step 2: test the schema by using XPath Query Tester

XPath Query Tester is a utility that comes with XML View Mapper. XPath Query Tester uses the XDR view schema that is opened in XML View Mapper. The results of the query are returned using the element hierarchy specified in the XDR view schema, enclosed in a <root> element.

Use the following procedure to test the mapping schema:

- On the XML View Mapper Tools menu, click XPath Query Tester. The Schema Load Log dialog box appears.
- Click OK. The XPath Query Tester input dialog box appears. If the database has not been connected, you will be prompted with a message box. Click OK to enter the Data Link Properties dialog box, enter the SQL Server name and logon password, and then click OK. The XPath Query Tester input dialog box appears.
- Type the query Order[@OrderID="10248"] in the XPath Query box, but do not click Execute yet because we need to run SQL Profiler to capture the SQL query string.
- Click Start, point to Programs, point to Microsoft SQL Server, and then click Profiler.
- Start a new trace in the SQL Profiler screen and accept all the defaults for the new trace.
- Click Execute in XPath Query Tester to trigger the SQL trace. The XML data will be retrieved and displayed.

Step 3: get the FOR XML EXPLICIT query and create an XML query

- Stop the trace. In the SQL Profiler window, find and select the application name SQL Server XML Mapper. The trace result shows the desired SELECT . . . FOR XML EXPLICIT command that was generated by the XPath query.
- Open Notepad and enter the following text:
- Cut and paste the trace result from the SQL Profiler window to replace the BODY text in the Notepad file.
- Save the Notepad file as C:\RetrieveXMLOrder.xml. This XML query file will be used in the sample to retrieve order data (OrderID = 10248) as an XML document. By globally replacing the OrderID "10248" in the query with another OrderID, the query can be used to retrieve details of another order.
- Close all open application screens.

Step 4: create an Updategram

- Open Notepad and type the following Updategram:

<?xml version="1.0"?>
<updateorder xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <updg:sync>
    <updg:before>
      <_x005B_Order_x0020_Details_x005D_ OrderID="10249" ProductID="14" />
    </updg:before>
    <updg:after>
      <_x005B_Order_x0020_Details_x005D_ UnitPrice="$15.88" />
    </updg:after>
  </updg:sync>
</updateorder>

This Updategram updates the unit price for order item OrderID = 10249 and ProductID = 14. It is equivalent to the following Transact-SQL statement:

- Save the file as C:\UpdateOrder10249.xml.

Step 5: create a WSC component

A WSC component, XMLOrder.wsc, has been created and provided with the sample in the Samples\XMLViewMapper folder. You need to register the component by using regsvr32.exe or by right-clicking the WSC file in Windows Explorer and clicking Register. The WSC component has three methods:

- UpdateOrderDetail(UpdategramFile). Executes the Updategram contained in the file UpdateOrder10249.xml.
- GetXMLOrder(OrderID, XMLQueryFile). Retrieves the XML order for a given OrderID using the XML query contained in the file RetrieveXMLOrder.xml.
- WriteToFile(Document, FileName). Writes the retrieved data to a text file specified in the FileName parameter.

Step 6: create a BizTalk Orchestration schedule

A BizTalk Orchestration schedule will be created that performs three tasks:

- Run an Updategram to modify the unit price for order item OrderID=10249 and ProductID=14
- Retrieve all order items for OrderID=10249 by using a SELECT . . .
FOR XML EXPLICIT query
- Write the retrieved XML data into a file

The following illustration shows the BizTalk Orchestration schedule.

Figure 7.

The main steps for building the sample orchestration schedule are:

- Drag three Action shapes onto the process and name them Updategram, Retrieve XML Data, and Write Document to File. Add an End shape to the flowchart, and then connect the Action shapes to complete the business process flowchart as shown in the preceding illustration.
- Bind XMLOrder.wsc to the Action shapes by using the Windows Script Component Wizard.
- Add four constants on the orchestration data page. The following illustration shows details of the constants.

Figure 8.

- On the data page, connect the constants to the corresponding input parameters of the three data tables. The following illustration shows the completed data flow page.

Figure 9.

- Save the orchestration schedule to the Sample\XMLViewMapper folder as TestXMLOrder.skv.
- Compile the orchestration schedule to generate the executable file TestXMLOrder.skx.
- Save the file to the Sample\XMLViewMapper folder.

Step 7: run the sample

- Click Start, point to Programs, point to Microsoft SQL Server, and then click Enterprise Manager. Ensure that the item [OrderID=10249, ProductID=14] in the Order Details table in the Northwind database has a unit price different from 15.88 (this is the value to be changed by the Updategram in the orchestration). Alter the value if it already has a value of 15.88.
- Open Windows Explorer and browse to the folder C:\Program Files\Microsoft BizTalk Server\SDK\XLANG Tools\. Double-click XLANGMon.exe. The XLANG Monitor screen appears.
- Open Windows Explorer and browse to the TestXMLOrder.skx file. Right-click the file and drag it onto the XLANG Scheduler node in the XLANG Monitor screen. This runs the orchestration schedule.
- Verify that a new XML file, C:\XMLOrderDetails.xml, has been created.
This file contains XML data retrieved by using the XML query in the file RetrieveXMLOrder.xml. The XML data represents the order with OrderID=10249.
- Verify that the unit price for the order item [OrderID=10249, ProductID=14] has been modified to 15.88.

Leveraging Stored Procedures and DTS in SQL Server 2000

This section describes how to use BizTalk Server 2002 to orchestrate workflow among disparate processes. By using the BizTalk Server 2002 Orchestration Designer, you can sequence tasks implemented in SQL Server stored procedures and Data Transformation Services (DTS) packages.

Sample: calling stored procedures from BizTalk Orchestration

This section demonstrates how to call SQL Server stored procedures from BizTalk Orchestration. Because of the number of components involved, we will use a structured example. The following steps are outlined:

- Define a SQL Server database table and a stored procedure to call from BizTalk Orchestration.
- Use the Windows Script Component Wizard to create a component file, BTS2SQL.wsc, which encapsulates the script code to access SQL Server. This component will have one method, ExecSQL, which will be called from BizTalk Orchestration.
- Create a BizTalk Orchestration schedule, TestSQL.skv, which has one action. This action will be bound to an implementation that calls the ExecSQL method in the BTS2SQL.wsc component.
- Run the sample.

Note: All sample files for this section are included in the Samples\StoredProcsandDTS\BTS2SQL directory.

Step 1: define and test the LogIt stored procedure

Dependencies for the sample include a SQL Server stored procedure to call and a mechanism for verifying that the procedure was called.
Specifically this includes:

- SQL Server database table—WorkflowMessages
- SQL Server stored procedure—LogIt

The following script, CreateTableAndLogIt.sql, creates the required database table and stored procedure in Pubs, and is included in the Samples\StoredProcsandDTS\BTS2SQL directory:

use pubs
go

create table WorkflowMessages (
    message varchar(1024) not null,
    logged datetime not null default (GetDate())
)
go

Create PROCEDURE LogIt @strMsg varchar(1024)
AS
    insert into WorkFlowMessages (message) values (@strMsg)

grant select on WorkflowMessages to public
grant execute on LogIt to public
go

- Using SQL Query Analyzer, define the databases, table, and stored procedure by using the preceding script.
- Using SQL Query Analyzer, run the LogIt stored procedure, passing in a string as a parameter:
- Browse the WorkflowMessages table to verify that the procedure worked properly.

Step 2: define and test the BTS2SQL component

To access SQL Server from BizTalk Orchestration, we will define a Windows Script Component, BTS2SQL.wsc, with one method, ExecSQL. This method takes two parameters, a connection string and a SQL query. It uses ADO to connect to the database and execute the query. The following code implements the script component. The shell was generated with the Windows Script Component Wizard. The ExecSQL code was added by hand.
<?xml version="1.0"?>
<component>
<?component error="true" debug="false"?>

<registration
    description="BTS2SQL"
    progid="BTS2SQL.WSC"
    version="1.00"
    classid="{71f80b28-2695-4220-bd77-c21abaca02cb}" >
</registration>

<public>
    <method name="ExecSQL">
        <PARAMETER name="sConn"/>
        <PARAMETER name="sSQL"/>
    </method>
</public>

<script language="VBScript">
<![CDATA[
function ExecSQL(sConn, sSQL)
    Dim cn
    Dim cmd

    'Connect to the db and execute the SQL
    Set cn = CreateObject("ADOdb.connection")
    cn.Open sConn
    Set cmd = CreateObject("ADOdb.command")
    cmd.ActiveConnection = cn
    cmd.CommandText = sSQL
    cmd.Execute

    'Clean up
    Set cmd = nothing
    Set cn = nothing

    ExecSQL = ""
end function
]]>
</script>
</component>

- Define the WSC file and add the code to the ExecSQL method.
- Register the component by using regsvr32.exe or by right-clicking the WSC file in Windows Explorer and clicking Register.
- Now we can test the component to ensure that it is functioning properly. The following VBScript code will test the component. Using Notepad, enter the following code and save the file as TestBTS2SQL.vbs:
- Test the activation and execution of BTS2SQL.wsc by running TestBTS2SQL.vbs and then verifying that a row has been added to the WorkflowMessages table in the Pubs database.

Step 3: create and test the BizTalk Orchestration schedule

A simple BizTalk Orchestration that has one action is suitable. This schedule should direct the run-time environment to instantiate a BTS2SQL.wsc component and call the ExecSQL method, passing in parameter values defined in the ExecSQL message. In this test, the values defined for the parameters (sConn and sSQL) on the data page within Orchestration Designer are connected to the values of the parameters in the ExecSQL message. When creating the sample orchestration, the following points are important to making the sample functional:

- Make sure that the Implementation shape is attached to the BTS2SQL.wsc component.
This can be done by using the Script Component Binding Wizard.
- Two constants should be included on the orchestration data page: ConnectString and SQLString. These should be mapped to the ExecSQL_in message by drawing a line from the constant to the input parameter.

The result should look like the included sample in the TestSQL.skv file and the compiled form in TestSQL.skx.

Step 4: run the BizTalk Orchestration schedule

The following VBScript code is used to run the XLANG schedule. The code is provided in the sample as TestSQLSKV.vbs.

To test the component from the BizTalk Orchestration schedule, perform the following steps:

- In Windows Explorer, double-click TestSQLSKV.vbs.
- After the message box appears, click OK and then go into SQL Enterprise Manager or SQL Query Analyzer to verify that Sample Insert from BizTalk was inserted in the WorkFlowMessages database table.

Sample: calling DTS from BizTalk Orchestration

One of the more efficient ways to access DTS packages from outside SQL Server Enterprise Manager is through the SQL Server DTS package object model. This object model ships with SQL Server 2000 and is a complete set of objects to define and run DTS packages through COM interfaces. To demonstrate how to call DTS packages from BizTalk Orchestration, the following steps will be outlined in this section:

- Define a DTS package named Copy Titles that copies data from one table to another.
- Use the Windows Script Component Wizard to create a component file, BTS2DTS.wsc, which encapsulates the script code to access DTS. This component will have one method, RunPackage, which will be called from BizTalk Orchestration.
- Create a BizTalk Orchestration schedule, TestDTS.skv, which has one action. This action will be bound to an implementation that calls the RunPackage method in the BTS2DTS.wsc component.
- Create a VBScript file to run the BizTalk Orchestration schedule.
- Test the BizTalk Server to SQL Server communication.
Note: All sample files for this section are included in the Samples\StoredProcsandDTS\BTS2DTS directory.

Step 1: define and test the DTS package

For testing and demonstration purposes, a DTS package is required to run, along with a mechanism for verifying that it was called. This package will be defined as Copy Titles. Functionality provided by the Copy Titles package will duplicate rows from the Titles table of the Pubs database to the Northwind database. The package will have two connection objects, one pointing to the Pubs database and the other pointing to the Northwind database. The package will also have two tasks. First, a Transform Data task will copy the data from the Pubs database to the Northwind database, and then an Execute SQL task will log an entry into our WorkFlowMessages table in the Pubs database to indicate that the data was copied.

Perform the following steps to create the package:

- Click Start, point to Programs, point to Microsoft SQL Server, and then click Enterprise Manager.
- In the tree control in the left pane, expand Microsoft SQL Servers, expand SQL Server Group, expand [your computer name], expand Data Transformation Services, and then click Local Packages.
- In the right pane, right-click and then click New Package.
- Create a Microsoft OLE DB Provider for SQL Server and use the Connection Properties to name it Pubs and connect it to the Pubs database.
- Repeat the preceding steps to create a second Microsoft OLE DB Provider for SQL Server object, name it Northwind, and connect it to the Northwind database.
- Drag a Transform Data task to the design, where Pubs is the source and Northwind is the destination. On the property page, set the Source tab to the Titles table, the Destination tab to the Titles table (which might need to be created), and use the Transformation tab to select all records.
- Drag an Execute SQL task to the design surface.
On the property page, select the Pubs database, enter Log the Count for the description, and enter the following code as the SQL statement. This will count the number of rows in the Titles table in the Northwind database and write a message to the WorkFlowMessages table in the Pubs database.
- Click OK to close the property page.
- From the design surface, select the Northwind connection object and the Log the Count task. On the Workflow menu, click On Completion.
- Save the package in SQL Server as Copy Titles.
- For testing, run the package by clicking Execute on the Package menu. After it is complete, go into SQL Query Analyzer and verify that a message was written to the WorkFlowMessages table and that the Titles table in the Northwind database is populated. Each time you run the package, you should see another message in the WorkFlowMessages table, and the size of the Titles table in the Northwind database should grow.

Step 2: define and test the BTS2DTS component

To access DTS from BizTalk Orchestration, we will define a Windows Script Component that encapsulates the code to run DTS tasks. We will define a component named BTS2DTS.wsc with one method, RunPackage. This method takes one parameter, the name of the package to run. The following code implements the script component. The shell was generated with the Windows Script Component Wizard. The RunPackage code was added by hand.

The only part of this script you might need to change is the UseTrustedConnection property on the objExecPkg object. If you typically use a user name and password to gain access to SQL Server, it might be more convenient to set the UseTrustedConnection property to False, and set the ServerUserName and ServerPassword properties of the objExecPkg object to the appropriate values. Similarly, if the instance of SQL Server is on another computer, set the ServerName property appropriately.
<?xml version="1.0"?>
<component>
<?component error="true" debug="false"?>

<registration
    description="BTS2DTS"
    progid="BTS2DTS.WSC"
    version="1.00"
    classid="{a9a7f917-35ef-4d45-93f4-3bc935ec75d0}" >
</registration>

<public>
    <method name="RunPackage">
        <PARAMETER name="sPackageName"/>
    </method>
</public>

<script language="VBScript">
<![CDATA[
function RunPackage(sPackageName)
    Dim objPackage
    Dim objStep
    Dim objTask
    Dim objExecPkg

    'Create the step and task. Specify the package to be run, and link the step to the task.
    Set objPackage = CreateObject("DTS.Package2")
    Set objTask = objPackage.Tasks.New("DTSExecutePackageTask")
    Set objExecPkg = objTask.CustomTask
    objExecPkg.UseRepository = False
    objExecPkg.UseTrustedConnection = True
    objExecPkg.PackageName = sPackageName
    objExecPkg.Name = "ExecPkgTask"
    Set objStep = objPackage.Steps.New
    objStep.TaskName = objExecPkg.Name
    objStep.Name = "ExecPkgStep"
    objStep.ExecuteInMainThread = True
    objPackage.Steps.Add objStep
    objPackage.Tasks.Add objTask

    'Run the package
    objPackage.FailOnError = True
    objPackage.Execute

    'Release references. Releases must be done before UnInitialize.
    Set objExecPkg = Nothing
    Set objTask = Nothing
    Set objStep = Nothing
    objPackage.UnInitialize
end function
]]>
</script>
</component>

To register and add code to the component:

- Define the WSC file and add the code to the RunPackage method.
- Register the component by using regsvr32.exe or by right-clicking the WSC file in Windows Explorer and clicking Register.

Step 3: create and test a BizTalk Orchestration schedule to exercise BTS2DTS

To create this workflow in Orchestration Designer:

- Draw the business process action ("Test DTS Package") by dragging the Action shape from the left palette to the design surface. Add an End shape to the design surface and sequence the three steps: Begin, Test DTS Package, End.
- Drag the Script Component shape onto the implementation side and browse to BTS2DTS.wsc by using the Script Component Binding Wizard.
- Connect the business process action to the implementation port ("Port_1") by using the Method Communication Wizard to create a new message and to make a synchronous method call. This schedule directs the run-time environment to instantiate a BTS2DTS.wsc component and call the RunPackage method, passing in parameter values defined in the RunPackage message. For this test, we will define values for the parameter (PackageName) on the data page of the Orchestration Designer and then connect the values to the parameters in the RunPackage message.
- Go to the data page in Orchestration Designer and right-click the Constants block. From there, add one constant:
- Map this constant to the RunPackage_in message by drawing a line from the constant to the input parameter.
- This file should be saved as, and appears in the sample directory as, the TestDTS.skv file, and is compiled into the TestDTS.skx file.

Step 4: create a VBScript file to run the BizTalk Orchestration schedule

The following VBScript code will request the XLANG runtime to start the orchestration. Sample file TestDTSSKV.vbs is provided.

Step 5: test the BizTalk Server to SQL Server communication

Now you are ready to test the component from the BizTalk Orchestration schedule.

- In Windows Explorer, double-click TestDTSSKV.vbs.
- After the message box appears, click OK, and then go into SQL Server Enterprise Manager or SQL Query Analyzer to verify that a message was inserted in the WorkFlowMessages table and the number of rows in the Titles table has increased.
EGalgMCgraph_t: Graph Structure for Minimum Cut

#include <eg_min_cut.h>

Note that this structure also holds some parameters such as the epsilon to use in comparisons, the current best cut found (or a bound on it), and the cut found so far, as well as arrays containing all edges and nodes in the graph (remember that when we identify two nodes, we lose any reference to the shrunken node in the graph structure, as discussed in EGsrkIdentifyNodes).

Definition at line 198 of file eg_min_cut.h.

Field documentation:

- Array containing all edges of the graph. Definition at line 218 of file eg_min_cut.h.
- Array containing all nodes of the graph. Definition at line 217 of file eg_min_cut.h.
- Array storing the current cut; the size of this array should be at least EGsrkGraph_t::n_onodes. Definition at line 214 of file eg_min_cut.h. Referenced by EGalgMCtestNode() and main().
- Number of nodes in the current best cut; if set to zero, then no cut has been found (so far). Definition at line 210 of file eg_min_cut.h. Referenced by EGalgMCtestNode().
- If EGalgMCgraph_t::cut_sz is not zero, then this is the value of the (currently) best minimum cut found so far; otherwise it is a bound on the value of the minimum cut (note that this value should be set before actually computing the minimum cut, and can be set to the value of the trivial cut around some node v in the graph). Definition at line 202 of file eg_min_cut.h. Referenced by EGalgMCtestNode() and main().
- Error tolerance used for equality testing. Definition at line 201 of file eg_min_cut.h.
- Actual shrinking graph used. Definition at line 200 of file eg_min_cut.h. Referenced by EGalgMCbuildPRgraph(), EGalgMCtestNode(), and main().
- List of nodes in different levels of tests. Definition at line 213 of file eg_min_cut.h.
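To make the roles of the cut value, cut size, and epsilon fields described above concrete, here is a small sketch of the same bookkeeping (illustrative Java with hypothetical names; EGlib's actual API is the C structure documented here):

```java
// Illustrative mirror of the bookkeeping described in the struct docs:
// cutVal holds a bound until a cut is recorded, cutSz == 0 flags "no cut yet",
// and EPSILON guards the strict-improvement comparison.
public class MinCutSketch {
    static final double EPSILON = 1e-9; // tolerance for comparisons

    double cutVal;  // best cut value so far, or an initial upper bound
    int cutSz;      // node count of the best cut; 0 means only a bound is known
    int[] cut;      // node ids of the best cut, sized to the original node count

    MinCutSketch(int nOriginalNodes, double initialBound) {
        cut = new int[nOriginalNodes];
        cutVal = initialBound; // e.g. total weight of the edges around one node v
        cutSz = 0;             // no actual cut recorded yet
    }

    // Record a candidate cut only if it strictly improves on the best/bound.
    void offer(int[] nodes, double value) {
        if (value < cutVal - EPSILON) {
            cutVal = value;
            cutSz = nodes.length;
            System.arraycopy(nodes, 0, cut, 0, nodes.length);
        }
    }

    public static void main(String[] args) {
        MinCutSketch s = new MinCutSketch(6, 9.0); // bound: trivial cut of some node
        s.offer(new int[]{0, 3}, 5.0);             // better than the bound: recorded
        s.offer(new int[]{2}, 7.5);                // worse than 5.0: ignored
        System.out.println(s.cutSz + " " + s.cutVal); // prints: 2 5.0
    }
}
```

Until offer() records a cut, the structure carries only the initial upper bound; the first candidate that improves on it by more than epsilon overwrites both the value and the node list.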
Today Couchbase is happy to announce the GA release of the official LINQ provider for Couchbase Server and the hot query language for JSON documents, N1QL! The goal of the provider is to provide a simple, easy-to-use ORM/ODM that is closer to Linq2SQL than EntityFramework or NHibernate, which are verbose and can be complex. While simplicity is the goal, don't underestimate the power of Linq2Couchbase; it's a fully functional LINQ implementation with extended support for all of N1QL's awesome features! In this post we will go over the basics of getting started with Linq2Couchbase, the major actors in the API, and integration with ASP.NET and Owin/Katana. In later posts we will go into more detail about the specifics of Linq2Couchbase!

The Architecture

The provider is really just another layer over the SDK; the provider handles query parsing and query generation, and the SDK handles the request and the mapping of the results. The provider uses Re-linq internally to create an abstract syntax tree (AST) from the LINQ query, which is then used to emit the N1QL statement. Note that Re-linq is used by both NHibernate and the EntityFramework, so you're in good hands!

Getting Started

The source is available on Github and the package is available on NuGet; if you use the NuGet package manager to install Linq2Couchbase, all dependencies, including the Couchbase .NET SDK, will be handled for you. To install Linq2Couchbase using the NuGet package manager (assuming you have already created a Visual Studio project), open the package manager by right-clicking the project and selecting "Manage NuGet Packages", then searching for Linq2Couchbase, or use the Package Manager command line:

Once you have done this the project will have all the necessary dependencies. Next you'll need to install Couchbase Server either locally or via VMs. The download link for Couchbase Server is here. For the VMs, use the Couchbase vagrants, which use Puppet and Vagrant to install a cluster of Couchbase servers.
Make sure you install Couchbase 4.0! If you are using the vagrants, then provision the cluster:

Once you have a Couchbase Server or cluster, set up the server or cluster and make sure at least one node is an Index node and one node is a Query node. You will do this on the first step of the "Server Setup", or when you add an additional server to your cluster. Also, add the "beer-sample" data set to the cluster during setup, or from the Settings > Samples tab after you have set up the cluster or instance.

Now that your Couchbase instance or cluster is set up, you will need to create a primary index on the "beer-sample" bucket. To do this, navigate to C:\Program Files\Couchbase\Server\bin or, if using vagrants (or Linux), /opt/couchbase/bin using a command prompt. Then type either cbq or ./cbq (on Linux) to start the query CLI, and then:

CREATE PRIMARY INDEX ON `beer-sample`;

This will create a primary index on the beer-sample bucket. Note the backticks "`": these are required to escape the "-" in beer-sample. Now you are ready to write some code!

Creating the BucketContext

The main object for working with the bucket is the BucketContext. The BucketContext is analogous to the DataContext in Linq2SQL and the DbContext in the EntityFramework. Its primary purpose is to provide an interface for building queries and submitting them to the Couchbase server. You can use the BucketContext as a stand-alone object or you can derive from it and create a strongly typed object that maps properties to the types in your bucket and domain model.
In this example I will do the latter:

public class BeerSample : BucketContext
{
    public BeerSample() : this(ClusterHelper.GetBucket("beer-sample"))
    {
    }

    public BeerSample(IBucket bucket) : base(bucket)
    {
        DocumentFilterManager.SetFilter(new BreweryFilter());
    }

    public IQueryable<Beer> Beers
    {
        get { return Query<Beer>(); }
    }

    public IQueryable<Brewery> Breweries
    {
        get { return Query<Brewery>(); }
    }
}

The beer-sample bucket (a bucket is similar to a database in an RDBMS system) contains documents that are "typed" as a "brewery" or a "beer"; this informal type system will allow us to query the bucket and return either brewery documents or beer documents via a predicate (WHERE type="beer", for example). In the code above we have defined explicit properties which return IQueryable<Beer> and IQueryable<Brewery>, one for each document type.

An Example Query

You will find that using Linq2Couchbase is pretty much identical to Linq2SQL or the EF:

Once you have a BucketContext reference, all you do is query it like you would any other LINQ provider. All LINQ keywords are supported, as well as N1QL constructs like ON KEYS, NEST and UNNEST! In a later post, we will go over all of this in much greater detail!

The Document Model

In the BeerSample context above, the Beer and Brewery objects will be the targets for our LINQ projections, and they map to equivalent JSON documents in our bucket (beer-sample). Here is a listing for each (note that this is a partial listing; the classes in their entirety can be found here):

This of course maps to "beer" documents; note the DocumentTypeFilter attribute. This will "auto-magically" add a predicate, a WHERE clause that filters by the type "beer", to every query that targets that document type. The DocumentTypeFilter attribute is one of two ways to apply such a filter, short of manually adding the predicate to each query. This is the object that "brewery" documents will be mapped to.
Note that there is no DocumentTypeFilter attribute explicitly defined; that is because the constructor for the BeerSample context adds the filter to the DocumentFilterManager. This is purely a different approach to the same problem: adding a predicate to a query to filter by type.

Integrating with ASP.NET or Owin/Katana

There is a very distinct pattern for using the Couchbase .NET SDK in ASP.NET or Katana/Owin projects. Since the BucketContext uses the Couchbase .NET SDK, you will need to follow this pattern so that you take advantage of object caching within the SDK and shared TCP connections. Fortunately, it's a very simple pattern:

Using Global.asax in ASP.NET

In an ASP.NET application using Global.asax, you will take advantage of the Application_Start and Application_End event handlers to create and destroy the Cluster and Bucket objects that the BucketContext depends on. Here we are creating the configuration (this could come from the App.config as well) and then initializing the ClusterHelper object. Finally, when the application shuts down, we destroy the long-lived Cluster and Bucket objects in the Application_End handler. This is a graceful shutdown, and OS-level constructs are returned to the OS in a timely manner.

Using Startup.cs in Owin/Katana

In Owin/Katana hosted applications, we follow a similar pattern; we just use a different entry point, the Startup.cs class. Here we create and initialize the ClusterHelper when the Configuration method runs at startup, and then we register a delegate that fires when the application shuts down, closing our ClusterHelper and freeing up resources.

Injecting into your Controllers

The BucketContext itself takes on the characteristics of the Unit of Work pattern; you can create one for each request, and since the ClusterHelper manages the references (assuming you are following the advice above), the instance will simply be GC'd at the end of the request.
The simplest way to do this is to use Dependency Injection (the pattern) within your Controllers to create the instance when the controller is created, for example:

Now, you simply use the BucketContext within your Action methods:

Once again, since the context is a short-lived, lightweight object, you could scope it to the request and inject it there, reusing it for all controllers invoked within that request.

What's coming up?

Very quickly, the next major feature will be Change Tracking using proxies. Additionally, expect bug fixes, performance enhancements and other features aimed at making Linq2Couchbase a fully featured, lightweight ODM/ORM! If there is a feature you want, a bug you need fixed, or perhaps you want to contribute, we welcome all kinds of feedback, both good and bad. Linq2Couchbase is a community-driven, open source project, so please take a look at it and if you would like to contribute, please do!

Special Thanks

A special thanks to all of those who contributed to the project (it is Open Source after all!), especially to Brant Burnett of Centeredge Software, who contributed significantly to the project and the documentation on NuGet!
manasee patel (Greenhorn, member since Nov 17, 2010)

Recent posts by manasee patel

HTTP method GET is not supported error when the JSP is making POST request to the servlet

I have this JSP. The same JSP is used both to ask for data and to display all the previously entered data. This JSP makes a POST request to a servlet, which puts the data coming in the parameters of each request into the ServletContext. The first time, when I enter data and hit the Submit button, it works. But for the second request there is an "HTTP Status 405 - HTTP method GET is not supported by this URL" error. When I override doGet() or redirect the response instead of doing a forward, it works. In the JSP the method is POST, i.e., every time the JSP makes a POST request to the servlet. So why is there that 405 error?
The servlet Sample.java:

//necessary imports
public class Sample extends HttpServlet {

    public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
        HashMap roomList = null;
        synchronized(getServletContext()) {
            roomList = (HashMap) getServletContext().getAttribute("roomList");
            if(roomList == null) {
                roomList = new HashMap();
                getServletContext().setAttribute("roomList", roomList);
            }
        }
        String roomName = request.getParameter("name");
        String roomDescr = request.getParameter("description");
        if(roomName != null && roomName.length() > 0) {
            synchronized(roomList) {
                roomList.put(roomName, new ChatRoom(roomName, roomDescr));
            }
        }
        //response.sendRedirect("");
        RequestDispatcher dispatcher = request.getRequestDispatcher("/ChatAdmin.jsp");
        dispatcher.forward(request, response);
    }

    /*public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
        doPost(request, response);
    }*/
}

-------------------------

ChatAdmin.jsp Code Fragment

<form method="POST" action="sample.do">
<% //Java code to extract the data from ServletContext (application) and display it %>
</br></br>
Name: <input type="text" name="name"/></br>
Description: <input type="text" name="description"/></br>
<input type="submit" value="Submit"/>

-------------------------------------------------

web.xml:

<servlet>
<servlet-name>sample</servlet-name>
<servlet-class>Sample</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>sample</servlet-name>
<url-pattern>/sample.do</url-pattern>
</servlet-mapping>

8 years ago Servlets

Interface variables

I think I found the answer. It's here.

9 years ago Beginning Java

Interface variables

A member is marked static so that it will behave consistently regardless of the instance. Anyhow, we can't instantiate an interface. The variable of an interface will be meaningful only when some class implements it. And that class can be instantiated. So why make it static?
What if the classes implementing the interface want to change the value of an interface variable? Or why should they not want to? As in an abstract class, the subclass can change the values of variables declared in the abstract superclass. Why is it a problem for interfaces?

9 years ago Beginning Java

j2ee download

Thank you. But then from where, i.e. from which directory, will I compile my j2ee classes? I don't know the GlassFish server. When I installed it (what I considered j2ee but found to be a server) there was only a directory named glassfishv3 and in that there is no j2ee. Yes, in the start menu there are options, but all for that server.

9 years ago EJB and other Jakarta /Java EE Technologies

j2ee download

I want to download j2ee, the latest. I don't know, is it 1.5 or 1.6? I want to compile my servlet, jdbc, ... files. I have Tomcat configured and it is displaying the welcome page. So I want the link to download the thing required to compile j2ee files. What is it called? j2ee runtime? j2ee SDK? I already have downloaded java_ee_sdk-6-web-windows. But I guess it has only that GlassFish server and not the thing required to compile j2ee files. I don't want any server or IDE coming along with it. I already have many of those, plus my internet connection is slow. So what should I download?

9 years ago EJB and other Jakarta /Java EE Technologies
https://coderanch.com/u/235819/manasee-patel
Many of the WCF services out there follow the simple request-response mechanism to exchange data, which works well for many applications. However, in addition to standard HTTP bindings, WCF also supports several others, including a polling duplex binding made specifically for Silverlight which allows a service to push data down to a client as the data changes. This type of binding isn't as "pure" as the push model available with sockets, since the Silverlight client does poll the server to check for any queued messages, but it provides an efficient way to push data to a client without being restricted to a specific port range. Once a communication channel is opened, messages can be sent in either direction. The Silverlight SDK states the following about how communication works between a Silverlight client and a duplex service:

"The Silverlight client periodically polls the service on the network layer, and checks for any new messages that the service wants to send on the callback channel. The service queues all messages sent on the client callback channel and delivers them to the client when the client polls the service."

When creating a WCF duplex service for Silverlight, the server creates a standard interface with operations. However, because the server must communicate with the client, it also defines a client callback interface. An example of defining a server interface named IGameStreamService that includes a single service operation is shown next:

[ServiceContract(Namespace = "Silverlight", CallbackContract = typeof(IGameStreamClient))]
public interface IGameStreamService
{
    [OperationContract(IsOneWay = true)]
    void GetGameData(Message receivedMessage);
}

This interface is a little different from the standard WCF interfaces you may have seen or created. First, it includes a CallbackContract property that points to the client interface. Second, the GetGameData() operation is defined as a one-way operation.
Client calls are not immediately returned as a result of setting IsOneWay to true and are pushed to the client instead. The IGameStreamClient interface assigned to the CallbackContract is shown next. It allows a message to be sent back to the client by calling the ReceiveGameData() method.

[ServiceContract]
public interface IGameStreamClient
{
    [OperationContract(IsOneWay = true)]
    void ReceiveGameData(Message returnMessage);
}

Once the server and client contracts are defined, a service class can be created that implements the IGameStreamService interface. The following code creates a service that simulates a basketball game (similar to the one I demonstrated for using Sockets with Silverlight) and sends game updates to a Silverlight client on a timed basis.

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Threading;

namespace WCFPushService
{
    public class GameStreamService : IGameStreamService
    {
        IGameStreamClient _Client;
        Game _Game = null;
        Timer _Timer = null;
        Random _Random = new Random();

        public GameStreamService()
        {
            _Game = new Game();
        }

        public void GetGameData(Message receivedMessage)
        {
            //Get client callback channel
            _Client = OperationContext.Current.GetCallbackChannel<IGameStreamClient>();
            SendData(_Game.GetTeamData());
            //Start timer which when fired sends updated score information to client
            _Timer = new Timer(new TimerCallback(_Timer_Elapsed), null, 5000, Timeout.Infinite);
        }

        private void _Timer_Elapsed(object data)
        {
            SendData(_Game.GetScoreData());
            int interval = _Random.Next(3000, 7000);
            _Timer.Change(interval, Timeout.Infinite);
        }

        private void SendData(object data)
        {
            Message gameDataMsg = Message.CreateMessage(
                MessageVersion.Soap11,
                "Silverlight/IGameStreamService/ReceiveGameData",
                data);
            //Send data to the client
            _Client.ReceiveGameData(gameDataMsg);
        }
    }
}

The service first creates an instance of a Game class in the constructor which handles simulating a basketball game and creating new data that can be sent to the
client. Once the client calls the service's GetGameData() operation (a one-way operation), access to the client's callback interface is retrieved by going through the OperationContext object and calling the GetCallbackChannel() method. The teams involved in the game are then created on the server and pushed to the client by calling the SendData() method. This method calls the Game object's GetTeamData() method. Although not shown here (but included in the sample code), the GetTeamData() method generates an XML message and returns it as a string. The SendData() method then creates a WCF Message object, defines that SOAP 1.1 will be used (required for this type of communication) and defines the proper action to be used to send the XML data to the client. The client's ReceiveGameData() operation is then called and the message is ultimately sent to the client. Once the client receives the team data, the server will start sending simulated score data on a random basis. When the Timer object created in the initial call to GetGameData() fires, the _Timer_Elapsed() method is called, which gets updated score information and pushes it to the Silverlight client by calling the SendData() method.

Just for the record...this type of HTTP push technique is often referred to as Comet. The real question is, how scalable is the server going to be? From everything I've been able to determine, it's pretty much impossible to build a scalable Comet server for IIS. This is due to the way connections and threads are handled. I've tried to get some information from Microsoft about this, but they don't have answers yet. Only time/experimentation will tell if the server is scalable enough to build a multi-player game or chat application. I'm hopeful, because the amount of work it takes to create a truly scalable socket server is tremendous.
Rob, It's difficult to say how scalable this approach is since it's brand new and the server-side .dll is still in evaluation mode (the polling duplex feature doesn't have a go-live license at this point). As you mention, time will tell.

Good Stuff, Dan. I can feel a live Stock Charting SL app coming on...
http://weblogs.asp.net/dwahlin/archive/2008/06/16/pushing-data-to-a-silverlight-client-with-wcf-duplex-service-part-i.aspx
The author selected the Open Internet/Free Speech fund to receive a donation as part of the Write for DOnations program.

Introduction

In this tutorial, you will prepare a dataset of sample tweets from the NLTK package for NLP with different data cleaning methods. Once the dataset is ready for processing, you will train a model on pre-classified tweets and use the model to classify the sample tweets into negative and positive sentiments. This article assumes that you are familiar with the basics of Python (see our How To Code in Python 3 series), primarily the use of data structures, classes, and methods. The tutorial assumes that you have no background in NLP and nltk, although some knowledge of them is an added advantage.

Prerequisites

Step 1 — Installing NLTK and Downloading the Data

You will use the NLTK package in Python for all NLP tasks in this tutorial. In this step you will install NLTK and download the sample tweets that you will use to train and test your model. First, install the NLTK package with the pip package manager: This tutorial will use sample tweets that are part of the NLTK package. First, start a Python interactive session by running the following command: Then, import the nltk module in the Python interpreter. Download the sample tweets from the NLTK package:

- nltk.download('twitter_samples')

Running this command from the Python interpreter downloads and stores the tweets locally. Once the samples are downloaded, they are available for your use. You will use the negative and positive tweets to train your model on sentiment analysis later in the tutorial. The tweets with no sentiments will be used to test your model. If you would like to use your own dataset, you can gather tweets from a specific time period, user, or hashtag by using the Twitter API. Now that you've imported NLTK and downloaded the sample tweets, exit the interactive session by entering exit(). You are ready to import the tweets and begin processing the data.
Step 2 — Tokenizing the Data Language in its original form cannot be accurately processed by a machine, so you need to process the language to make it easier for the machine to understand. The first part of making sense of the data is through a process called tokenization, or splitting strings into smaller parts called tokens. A token is a sequence of characters in text that serves as a unit. Based on how you create the tokens, they may consist of words, emoticons, hashtags, links, or even individual characters. A basic way of breaking language into tokens is by splitting the text based on whitespace and punctuation. To get started, create a new .py file to hold your script. This tutorial will use nlp_test.py: In this file, you will first import the twitter_samples so you can work with that data: nlp_test.py from nltk.corpus import twitter_samples This will import three datasets from NLTK that contain various tweets to train and test the model: negative_tweets.json: 5000 tweets with negative sentiments positive_tweets.json: 5000 tweets with positive sentiments tweets.20150430-223406.json: 20000 tweets with no sentiments Next, create variables for positive_tweets, negative_tweets, and text: nlp_test.py from nltk.corpus import twitter_samples positive_tweets = twitter_samples.strings('positive_tweets.json') negative_tweets = twitter_samples.strings('negative_tweets.json') text = twitter_samples.strings('tweets.20150430-223406.json') The strings() method of twitter_samples will print all of the tweets within a dataset as strings. Setting the different tweet collections as a variable will make processing and testing easier. Before using a tokenizer in NLTK, you need to download an additional resource, punkt. The punkt module is a pre-trained model that helps you tokenize words and sentences. For instance, this model knows that a name may contain a period (like “S. Daityari”) and the presence of this period in a sentence does not necessarily end it. 
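As a contrast to NLTK's trained tokenizers, the "basic way" of splitting on whitespace and punctuation described above can be sketched with Python's standard library alone (naive_tokenize is a hypothetical helper, not part of nlp_test.py):

```python
import re

def naive_tokenize(text):
    # Runs of word characters, or runs of anything that is neither a
    # word character nor whitespace (punctuation, emoticon pieces).
    return re.findall(r"\w+|[^\w\s]+", text)

print(naive_tokenize("Hello, world :)"))  # ['Hello', ',', 'world', ':)']
```

A splitter like this breaks "S. Daityari" into three tokens, which is exactly the kind of case the punkt model is trained to handle.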
First, start a Python interactive session: Run the following commands in the session to download the punkt resource:

- import nltk
- nltk.download('punkt')

Once the download is complete, you are ready to use NLTK's tokenizers. NLTK provides a default tokenizer for tweets with the .tokenized() method. Add a line to create an object that tokenizes the positive_tweets.json dataset:

nlp_test.py

...
tweet_tokens = twitter_samples.tokenized('positive_tweets.json')

If you'd like to test the script to see the .tokenized method in action, add the highlighted content to your nlp_test.py script. This will tokenize a single tweet from the positive_tweets.json dataset:

nlp_test.py

...
print(tweet_tokens[0])

Save and close the file, and run the script: The process of tokenization takes some time because it's not a simple split on white space. After a few moments of processing, you'll see the following:

Output['#FollowFriday', '@France_Inte', '@PKuchly57', '@Milipol_Paris', 'for', 'being', 'top', 'engaged', 'members', 'in', 'my', 'community', 'this', 'week', ':)']

Here, the .tokenized() method returns special characters such as @ and _. These characters will be removed through regular expressions later in this tutorial. Now that you've seen how the .tokenized() method works, make sure to comment out or remove the last line to print the tokenized tweet from the script by adding a # to the start of the line:

nlp_test.py

...
#print(tweet_tokens[0])

Your script is now configured to tokenize data. In the next step you will update the script to normalize the data.

Step 3 — Normalizing the Data

Words have different forms—for instance, "ran", "runs", and "running" are various forms of the same verb, "run". Depending on the requirement of your analysis, all of these versions may need to be converted to the same form, "run". Normalization in NLP is the process of converting a word to its canonical form. Normalization helps group together words with the same meaning but different forms. Without normalization, "ran", "runs", and "running" would be treated as different words, even though you may want them to be treated as the same word.
In this section, you explore stemming and lemmatization, which are two popular techniques of normalization. Stemming is a process of removing affixes from a word. Stemming, working with only simple verb forms, is a heuristic process that removes the ends of words. In this tutorial you will use the process of lemmatization, which normalizes a word with the context of vocabulary and morphological analysis of words in text. The lemmatization algorithm analyzes the structure of the word and its context to convert it to a normalized form. Therefore, it comes at a cost of speed. A comparison of stemming and lemmatization ultimately comes down to a trade-off between speed and accuracy.

Before you proceed to use lemmatization, download the necessary resources by entering the following into a Python interactive session: Run the following commands in the session to download the resources:

- import nltk
- nltk.download('wordnet')
- nltk.download('averaged_perceptron_tagger')

wordnet is a lexical database for the English language that helps the script determine the base word. You need the averaged_perceptron_tagger resource to determine the context of a word in a sentence. Once downloaded, you are almost ready to use the lemmatizer. Before running a lemmatizer, you need to determine the context for each word in your text. This is achieved by a tagging algorithm, which assesses the relative position of a word in a sentence. In a Python session, import the pos_tag function, and provide a list of tokens as an argument to get the tags.
Let us try this out in Python:

- from nltk.tag import pos_tag
- from nltk.corpus import twitter_samples
-
- tweet_tokens = twitter_samples.tokenized('positive_tweets.json')
- print(pos_tag(tweet_tokens[0]))

Here is the output of the pos_tag function:

Output[('#FollowFriday', 'JJ'), ('@France_Inte', 'NNP'), ('@PKuchly57', 'NNP'), ('@Milipol_Paris', 'NNP'), ('for', 'IN'), ('being', 'VBG'), ('top', 'JJ'), ('engaged', 'VBN'), ('members', 'NNS'), ('in', 'IN'), ('my', 'PRP$'), ('community', 'NN'), ('this', 'DT'), ('week', 'NN'), (':)', 'NN')]

From the list of tags, here is the list of the most common items and their meaning:

NNP: Noun, proper, singular
NN: Noun, common, singular or mass
IN: Preposition or conjunction, subordinating
VBG: Verb, gerund or present participle
VBN: Verb, past participle

Here is a full list of the dataset. In general, if a tag starts with NN, the word is a noun and if it starts with VB, the word is a verb. After reviewing the tags, exit the Python session by entering exit().

To incorporate this into a function that normalizes a sentence, you should first generate the tags for each token in the text, and then lemmatize each word using the tag. Update the nlp_test.py file with the following function that lemmatizes a sentence:

nlp_test.py

...
from nltk.tag import pos_tag
from nltk.stem.wordnet import WordNetLemmatizer

def lemmatize_sentence(tokens):
    lemmatizer = WordNetLemmatizer()
    lemmatized_sentence = []
    for word, tag in pos_tag(tokens):
        if tag.startswith('NN'):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'
        lemmatized_sentence.append(lemmatizer.lemmatize(word, pos))
    return lemmatized_sentence

print(lemmatize_sentence(tweet_tokens[0]))

This code imports the WordNetLemmatizer class and initializes it to a variable, lemmatizer. The function lemmatize_sentence first gets the position tag of each token of a tweet. Within the if statement, if the tag starts with NN, the token is assigned as a noun.
Similarly, if the tag starts with VB, the token is assigned as a verb. Save and close the file, and run the script: Here is the output: Output['#FollowFriday', '@France_Inte', '@PKuchly57', '@Milipol_Paris', 'for', 'be', 'top', 'engage', 'member', 'in', 'my', 'community', 'this', 'week', ':)'] You will notice that the verb being changes to its root form, be, and the noun members changes to member. Before you proceed, comment out the last line that prints the sample tweet from the script. Now that you have successfully created a function to normalize words, you are ready to move on to remove noise. Step 4 — Removing Noise from the Data In this step, you will remove noise from the dataset. Noise is any part of the text that does not add meaning or information to data. Noise is specific to each project, so what constitutes noise in one project may not be in a different project. For instance, the most common words in a language are called stop words. Some examples of stop words are “is”, “the”, and “a”. They are generally irrelevant when processing language, unless a specific use case warrants their inclusion. In this tutorial, you will use regular expressions in Python to search for and remove these items: - Hyperlinks - All hyperlinks in Twitter are converted to the URL shortener t.co. Therefore, keeping them in the text processing would not add any value to the analysis. - Twitter handles in replies - These Twitter usernames are preceded by a @symbol, which does not convey any meaning. - Punctuation and special characters - While these often provide context to textual data, this context is often difficult to process. For simplicity, you will remove all punctuation and special characters from tweets. To remove hyperlinks, you need to first search for a substring that matches a URL starting with http:// or https://, followed by letters, numbers, or special characters. Once a pattern is matched, the .sub() method replaces it with an empty string. 
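The hyperlink substitution just described can be tried in isolation. The pattern below is a deliberately short stand-in (the tutorial's actual pattern is longer and more precise):

```python
import re

tweet = "Check the docs at https://t.co/abc123 today"
# Match an http:// or https:// link up to the next whitespace and
# replace it with an empty string via .sub().
no_links = re.sub(r"https?://\S+", "", tweet)
print(no_links)  # "Check the docs at  today"
```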
Since we will normalize word forms within the remove_noise() function, you can comment out the lemmatize_sentence() function from the script. Add the following code to your nlp_test.py file to remove noise from the dataset:

nlp_test.py

...
import re, string
...

This code creates a remove_noise() function that removes noise and incorporates the normalization and lemmatization mentioned in the previous section. The code takes two arguments: the tweet tokens and the tuple of stop words. The code then uses a loop to remove the noise from the dataset. To remove hyperlinks, the code first searches for a substring that matches a URL starting with http:// or https://, followed by letters, numbers, or special characters. Once a pattern is matched, the .sub() method replaces it with an empty string, or ''. Similarly, to remove @ mentions, the code substitutes the relevant part of text using regular expressions. The code uses the re library to search @ symbols, followed by numbers, letters, or _, and replaces them with an empty string. Finally, you can remove punctuation using the library string. In addition to this, you will also remove stop words using a built-in set of stop words in NLTK, which needs to be downloaded separately. Execute the following command from a Python interactive session to download this resource:

- nltk.download('stopwords')

Once the resource is downloaded, exit the interactive session. You can use the .words() method to get a list of stop words in English. To test the function, let us run it on our sample tweet. Add the following lines to the end of the nlp_test.py file:

nlp_test.py
...
from nltk.corpus import stopwords

stop_words = stopwords.words('english')

print(remove_noise(tweet_tokens[0], stop_words))

After saving and closing the file, run the script again to receive output similar to the following:

Output['#followfriday', 'top', 'engage', 'member', 'community', 'week', ':)']

Notice that the function removes all @ mentions and stop words, and converts the words to lowercase. Before proceeding to the modeling exercise in the next step, use the remove_noise() function to clean the positive and negative tweets. Comment out the line to print the output of remove_noise() on the sample tweet and add the following to the nlp_test.py script:

nlp_test.py

...
from nltk.corpus import stopwords

stop_words = stopwords.words('english')

#print(remove_noise(tweet_tokens[0], stop_words))

Now that you've added the code to clean the sample tweets, you may want to compare the original tokens to the cleaned tokens for a sample tweet. If you'd like to test this, add the following code to the file to compare both versions of the 500th tweet in the list:

nlp_test.py

...
print(positive_tweet_tokens[500])
print(positive_cleaned_tokens_list[500])

Save and close the file and run the script. From the output you will see that the punctuation and links have been removed, and the words have been converted to lowercase.

Output['Dang', 'that', 'is', 'some', 'rad', '@AbzuGame', '#fanart', '!', ':D', '']
['dang', 'rad', '#fanart', ':d']

There are certain issues that might arise during the preprocessing of text. For instance, words without spaces ("iLoveYou") will be treated as one and it can be difficult to separate such words. Furthermore, "Hi", "Hii", and "Hiiiii" will be treated differently by the script unless you write something specific to tackle the issue. It's common to fine-tune the noise removal process for your specific data.
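Because the body of remove_noise() is elided in the listing above, here is a simplified standard-library sketch of its filtering logic. It is an approximation: it applies the two substitutions plus the punctuation and stop-word filtering described in this step, but deliberately skips the POS tagging and lemmatization that the tutorial's function also performs:

```python
import re
import string

def strip_noise(tokens, stop_words=()):
    # Approximate remove_noise(): drop hyperlinks and @mentions with
    # re.sub, then filter punctuation and stop words, lowercasing the
    # tokens that survive. (No lemmatization in this sketch.)
    cleaned_tokens = []
    for token in tokens:
        token = re.sub(r"https?://\S+", "", token)
        token = re.sub(r"@[A-Za-z0-9_]+", "", token)
        if token and token not in string.punctuation and token.lower() not in stop_words:
            cleaned_tokens.append(token.lower())
    return cleaned_tokens

print(strip_noise(["@user", "Loving", "the", "new", "docs", "https://t.co/abc", "!"],
                  stop_words=("the",)))  # ['loving', 'new', 'docs']
```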
Now that you’ve seen the remove_noise() function in action, be sure to comment out or remove the last two lines from the script so you can add more to it: nlp_test.py ... #print(positive_tweet_tokens[500]) #print(positive_cleaned_tokens_list[500]) In this step you removed noise from the data to make the analysis more effective. In the next step you will analyze the data to find the most common words in your sample dataset. Step 5 — Determining Word Density The most basic form of analysis on textual data is to take out the word frequency. A single tweet is too small of an entity to find out the distribution of words, hence, the analysis of the frequency of words would be done on all positive tweets. The following snippet defines a generator function, named get_all_words, that takes a list of tweets as an argument to provide a list of words in all of the tweet tokens joined. Add the following code to your nlp_test.py file: nlp_test.py ... def get_all_words(cleaned_tokens_list): for tokens in cleaned_tokens_list: for token in tokens: yield token all_pos_words = get_all_words(positive_cleaned_tokens_list) Now that you have compiled all words in the sample of tweets, you can find out which are the most common words using the FreqDist class of NLTK. Adding the following code to the nlp_test.py file: nlp_test.py from nltk import FreqDist freq_dist_pos = FreqDist(all_pos_words) print(freq_dist_pos.most_common(10)) The .most_common() method lists the words which occur most frequently in the data. Save and close the file after making these changes. When you run the file now, you will find the most common terms in the data: Output[(':)', 3691), (':-)', 701), (':d', 658), ('thanks', 388), ('follow', 357), ('love', 333), ('...', 290), ('good', 283), ('get', 263), ('thank', 253)] From this data, you can see that emoticon entities form some of the most common parts of positive tweets. 
Before proceeding to the next step, make sure you comment out the last line of the script that prints the top ten tokens. To summarize, you extracted the tweets from nltk, tokenized, normalized, and cleaned up the tweets for using in the model. Finally, you also looked at the frequencies of tokens in the data and checked the frequencies of the top ten tokens. In the next step you will prepare data for sentiment analysis. Step 6 — Preparing Data for the Model Sentiment analysis is a process of identifying an attitude of the author on a topic that is being written about. You will create a training data set to train a model. It is a supervised learning machine learning process, which requires you to associate each dataset with a “sentiment” for training. In this tutorial, your model will use the “positive” and “negative” sentiments. Sentiment analysis can be used to categorize text into a variety of sentiments. For simplicity and availability of the training dataset, this tutorial helps you train your model in only two categories, positive and negative. A model is a description of a system using rules and equations. It may be as simple as an equation which predicts the weight of a person, given their height. A sentiment analysis model that you will build would associate tweets with a positive or a negative sentiment. You will need to split your dataset into two parts. The purpose of the first part is to build the model, whereas the next part tests the performance of the model. In the data preparation step, you will prepare the data for sentiment analysis by converting tokens to the dictionary form and then split the data for training and testing purposes. Converting Tokens to a Dictionary First, you will prepare the data to be fed into the model. You will use the Naive Bayes classifier in NLTK to perform the modeling exercise. Notice that the model requires not just a list of words in a tweet, but a Python dictionary with words as keys and True as values. 
The following function makes a generator function to change the format of the cleaned data. Add the following code to convert the tweets from a list of cleaned tokens to dictionaries with keys as the tokens and True as values. The corresponding dictionaries are stored in positive_tokens_for_model and negative_tokens_for_model. nlp_test.py ... def get_tweets_for_model(cleaned_tokens_list): for tweet_tokens in cleaned_tokens_list: yield dict([token, True] for token in tweet_tokens) positive_tokens_for_model = get_tweets_for_model(positive_cleaned_tokens_list) negative_tokens_for_model = get_tweets_for_model(negative_cleaned_tokens_list) Splitting the Dataset for Training and Testing the Model Next, you need to prepare the data for training the NaiveBayesClassifier class. Add the following code to the file to prepare the data: nlp_test.py ... import random positive_dataset = [(tweet_dict, "Positive") for tweet_dict in positive_tokens_for_model] negative_dataset = [(tweet_dict, "Negative") for tweet_dict in negative_tokens_for_model] dataset = positive_dataset + negative_dataset random.shuffle(dataset) train_data = dataset[:7000] test_data = dataset[7000:] This code attaches a Positive or Negative label to each tweet. It then creates a dataset by joining the positive and negative tweets. By default, the data contains all positive tweets followed by all negative tweets in sequence. When training the model, you should provide a sample of your data that does not contain any bias. To avoid bias, you’ve added code to randomly arrange the data using the .shuffle() method of random. Finally, the code splits the shuffled data into a ratio of 70:30 for training and testing, respectively. Since the number of tweets is 10000, you can use the first 7000 tweets from the shuffled dataset for training the model and the final 3000 for testing the model. 
In this step, you converted the cleaned tokens to a dictionary form, randomly shuffled the dataset, and split it into training and testing data.

Step 7 — Building and Testing the Model

Finally, you can use the NaiveBayesClassifier class to build the model. Use the .train() method to train the model and the .accuracy() method to test the model on the testing data.

nlp_test.py

...
from nltk import classify
from nltk import NaiveBayesClassifier

classifier = NaiveBayesClassifier.train(train_data)

print("Accuracy is:", classify.accuracy(classifier, test_data))

print(classifier.show_most_informative_features(10))

Save, close, and execute the file after adding the code. The output of the code will be as follows:

OutputAccuracy is: 0.9956666666666667
Most Informative Features
:( = True           Negati : Positi = 2085.6 : 1.0
:) = True           Positi : Negati = 986.0 : 1.0
welcome = True      Positi : Negati = 37.2 : 1.0
arrive = True       Positi : Negati = 31.3 : 1.0
sad = True          Negati : Positi = 25.9 : 1.0
follower = True     Positi : Negati = 21.1 : 1.0
bam = True          Positi : Negati = 20.7 : 1.0
glad = True         Positi : Negati = 18.1 : 1.0
x15 = True          Negati : Positi = 15.9 : 1.0
community = True    Positi : Negati = 14.1 : 1.0

Accuracy is defined as the percentage of tweets in the testing dataset for which the model was correctly able to predict the sentiment. A 99.5% accuracy on the test set is pretty good. In the table that shows the most informative features, every row in the output shows the ratio of occurrence of a token in positive and negative tagged tweets in the training dataset. The first row in the data signifies that in all tweets containing the token :(, the ratio of negative to positive tweets was 2085.6 to 1. Interestingly, it seems that there was one token with :( in the positive dataset. You can see that the top two discriminating items in the text are the emoticons. Further, words such as sad lead to negative sentiments, whereas welcome and glad are associated with positive sentiments.
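The accuracy figure reported by classify.accuracy() is simply the fraction of correct predictions; as a sanity check, the computation looks like this (a hypothetical helper, not NLTK's implementation):

```python
def accuracy(predicted, actual):
    # Share of test tweets whose predicted sentiment matches the label.
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

print(accuracy(["Positive", "Negative", "Positive", "Positive"],
               ["Positive", "Negative", "Negative", "Positive"]))  # 0.75
```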
Next, you can check how the model performs on random tweets from Twitter. Add this code to the file:

nlp_test.py

```python
...
from nltk.tokenize import word_tokenize

custom_tweet = "I ordered just once from TerribleCo, they screwed up, never used the app again."

custom_tokens = remove_noise(word_tokenize(custom_tweet))

print(classifier.classify(dict([token, True] for token in custom_tokens)))
```

This code will allow you to test custom tweets by updating the string associated with the custom_tweet variable. Save and close the file after making these changes.

Run the script to analyze the custom text. Here is the output for the custom text in the example:

```
'Negative'
```

You can also check if it characterizes positive tweets correctly:

nlp_test.py

```python
...
custom_tweet = 'Congrats #SportStar on your 7th best goal from last season winning goal of the year :) #Baller #Topbin #oneofmanyworldies'
```

Here is the output:

```
'Positive'
```

Now that you've tested both positive and negative sentiments, update the variable to test a more complex sentiment like sarcasm.

nlp_test.py

```python
...
custom_tweet = 'Thank you for sending my baggage to CityX and flying me to CityY at the same time. Brilliant service. #thanksGenericAirline'
```

Here is the output:

```
'Positive'
```

The model classified this example as positive. This is because the training data wasn't comprehensive enough to classify sarcastic tweets as negative. If you want your model to predict sarcasm, you would need to provide a sufficient amount of training data to train it accordingly.

In this step you built and tested the model. You also explored some of its limitations, such as not detecting sarcasm in particular examples. Your completed code still has artifacts left over from following the tutorial, so the next step will guide you through aligning the code to Python's best practices.
Step 8 — Cleaning Up the Code (Optional)

Though you have completed the tutorial, it is recommended to reorganize the code in the nlp_test.py file to follow best programming practices. Per best practice, your code should meet these criteria:

- All imports should be at the top of the file. Imports from the same library should be grouped together in a single statement.
- All functions should be defined after the imports.
- All the statements in the file should be housed under an if __name__ == "__main__": condition. This ensures that the statements are not executed if you are importing the functions of the file in another file.

We will also remove the code that was commented out by following the tutorial, along with the lemmatize_sentence function, as the lemmatization is completed by the new remove_noise function.

Here is the cleaned version of nlp_test.py:

```python
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import twitter_samples, stopwords
from nltk.tag import pos_tag
from nltk.tokenize import word_tokenize
from nltk import FreqDist, classify, NaiveBayesClassifier

import re, string, random

def remove_noise(tweet_tokens, stop_words = ()):
    # (Restored here: this function was defined earlier in the tutorial.
    # It strips hyperlinks and @-mentions, lemmatizes by POS tag, and
    # drops punctuation and stop words.)
    cleaned_tokens = []
    for token, tag in pos_tag(tweet_tokens):
        token = re.sub('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+#]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', '', token)
        token = re.sub("(@[A-Za-z0-9_]+)", "", token)

        if tag.startswith("NN"):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'

        lemmatizer = WordNetLemmatizer()
        token = lemmatizer.lemmatize(token, pos)

        if len(token) > 0 and token not in string.punctuation and token.lower() not in stop_words:
            cleaned_tokens.append(token.lower())
    return cleaned_tokens

def get_all_words(cleaned_tokens_list):
    for tokens in cleaned_tokens_list:
        for token in tokens:
            yield token

def get_tweets_for_model(cleaned_tokens_list):
    for tweet_tokens in cleaned_tokens_list:
        yield dict([token, True] for token in tweet_tokens)

if __name__ == "__main__":

    positive_tweets = twitter_samples.strings('positive_tweets.json')
    negative_tweets = twitter_samples.strings('negative_tweets.json')
    text = twitter_samples.strings('tweets.20150430-223406.json')
    tweet_tokens = twitter_samples.tokenized('positive_tweets.json')[0]

    stop_words = stopwords.words('english')

    positive_tweet_tokens = twitter_samples.tokenized('positive_tweets.json')
    negative_tweet_tokens = twitter_samples.tokenized('negative_tweets.json')

    positive_cleaned_tokens_list = []
    negative_cleaned_tokens_list = []

    for tokens in positive_tweet_tokens:
        positive_cleaned_tokens_list.append(remove_noise(tokens, stop_words))

    for tokens in negative_tweet_tokens:
        negative_cleaned_tokens_list.append(remove_noise(tokens, stop_words))

    all_pos_words = get_all_words(positive_cleaned_tokens_list)

    freq_dist_pos = FreqDist(all_pos_words)
    print(freq_dist_pos.most_common(10))

    positive_tokens_for_model = get_tweets_for_model(positive_cleaned_tokens_list)
    negative_tokens_for_model = get_tweets_for_model(negative_cleaned_tokens_list)
```
```python
    positive_dataset = [(tweet_dict, "Positive")
                        for tweet_dict in positive_tokens_for_model]

    negative_dataset = [(tweet_dict, "Negative")
                        for tweet_dict in negative_tokens_for_model]

    dataset = positive_dataset + negative_dataset

    random.shuffle(dataset)

    train_data = dataset[:7000]
    test_data = dataset[7000:]

    classifier = NaiveBayesClassifier.train(train_data)

    print("Accuracy is:", classify.accuracy(classifier, test_data))

    print(classifier.show_most_informative_features(10))

    custom_tweet = "I ordered just once from TerribleCo, they screwed up, never used the app again."

    custom_tokens = remove_noise(word_tokenize(custom_tweet))

    print(custom_tweet, classifier.classify(dict([token, True] for token in custom_tokens)))
```

Conclusion

This tutorial introduced you to a basic sentiment analysis model using the nltk library in Python 3. First, you performed pre-processing on tweets by tokenizing a tweet, normalizing the words, and removing noise. Next, you visualized frequently occurring items in the data. Finally, you built a model to associate tweets to a particular sentiment.

A supervised learning model is only as good as its training data. To further strengthen the model, you could consider adding more categories like excitement and anger. In this tutorial, you have only scratched the surface by building a rudimentary model. Here's a detailed guide on various considerations that one must take care of while performing sentiment analysis.
Though the majority of folks will use TurboGears with SQLAlchemy, there are those who have an interest in running the full stack of TG with a non-relational database like MongoDB or CouchDB. There are a few settings that allow this; the most pertinent is:

base_config.use_sqlalchemy — Set to False to turn off SQLAlchemy support.

TurboGears takes advantage of repoze's transaction manager software. Basically, the transaction manager wraps each of your controller methods, and should a method fail, the transaction will roll back. If you utilize the transaction manager, then a successful method call results in a commit to the database. If the controller method does not utilize the database, no database interaction is performed. What this means is that you never have to worry about committing, or rolling back when controller code fails; TG handles this for you automatically.

base_config.use_transaction_manager — Set to False to turn off the transaction manager and handle transactions yourself.

Setup the SQLAlchemy database engine. The most common reason for modifying this method is to add multiple database support. To do this you might modify your app_cfg.py file so that it pulls the config settings from your .ini files to create the necessary engines for use within your application. Make sure you have a look at Using Multiple Databases In TurboGears for more information.

Set up the transaction management middleware. To abort a transaction inside a TG2 app:

```python
import transaction
transaction.doom()
```

By default HTTP error responses also roll back transactions, but this behavior can be overridden by overriding base_config.commit_veto.
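A commit_veto hook is simply a callable that inspects the response and returns a true value to force a rollback. The sketch below is an assumption about how such a hook might look (repoze's transaction middleware calls it with the WSGI environ, the response status, and the headers); it is not taken verbatim from the TurboGears docs, and the x-tm-abort header name is hypothetical:

```python
def commit_veto(environ, status, headers):
    """Return True to veto the commit, i.e. roll the transaction back."""
    # Honor an explicit abort header first, then veto on 4xx/5xx statuses.
    for name, value in headers:
        if name.lower() == 'x-tm-abort':  # hypothetical header name
            return True
    return status.startswith('4') or status.startswith('5')
```

In app_cfg.py you would then assign it with base_config.commit_veto = commit_veto.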
[Marcos], I think I'm beginning to see what your issue is. You're only informing the jython class loader about rome.jar, by adding it to sys.path. And you're not adding rome.jar to the Tomcat web application class loader, because you don't want to put rome.jar in WEB-INF/lib. But you're trying to call getResourceAsStream() on the web application class loader, not the jython class loader?

Have you tried to call getResourceAsStream() directly on the jython class loader? E.g.

```python
import sys
sys.classLoader.getResourceAsStream()
```

Alan.

Of course, but... suppose if tomcat is the application container, mine is a sub-application container as well.

On Mon, 2009-03-23 at 16:56 +0000, Alan Kennedy wrote:
> [Marcos]
> > Thank you very much for taking interest.
> >
> > The application accepts user plugins (both .py files and java .jars). It
> > usually works fine until the library in the jar tries to use resources.
> > It would really be useful if I could use this kind of libraries. If I
> > find no other solution, I'll have to modify rome.jar from the sources
> > (which is not very complex for what I want I guess).
> >
> > Any other idea?
>
> Yes, copy the rome.jar file, containing the properties file, into
> WEB-INF/lib, and your problems should be at an end.
>
> Alan.
>
> P.S. I think Apache Abdera is a far superior RSS and Atom library,
> although it's been a year since I did an extensive comparison.
A declaration is the all-encompassing term for anything that tells the compiler about an identifier. In order to use an identifier, the compiler must know what it means: is it a type name, a variable name, a function name, or something else? Therefore, a source file must contain a declaration (directly or in an #include file) for every name it uses. A definition defines the storage, value, body, or contents of a declaration. The difference between a declaration and a definition is that a declaration tells you an entity's name and the external view of the entity, such as an object's type or a function's parameters, and a definition provides the internal workings of the entity: the storage and initial value of an object, a function body, and so on. In a single source file, there can be at most one definition of an entity. In an entire program, there must be exactly one definition of each function or object used in the program, except for inline functions; an inline function must be defined in every source file that uses the function, and the definitions must all be identical. A program can have more than one definition of a given class, enumeration, inline function, or template, provided the definitions are in separate source files, and each source file has the same definition. These rules are known as the One Definition Rules, or ODR. Before you can use an entity (e.g., calling a function or referring to an object), the compiler needs the entity's declaration, but not necessarily its definition. You can use a class that has an incomplete declaration in some contexts, but usually you need a complete definition. (See Chapter 6 for details about incomplete classes.) The complete program needs definitions for all the declared entities, but those definitions can often reside in separate source files. 
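The distinction can be made concrete with a short example (the names here are invented for illustration): each entity below is declared once, telling the compiler its external view, and defined once, supplying storage or a body:

```cpp
#include <string>

extern int counter;    // declaration: announces the name and its type only
int counter = 0;       // definition: allocates storage and gives a value

int twice(int value);  // function declaration (prototype)

int twice(int value)   // function definition: supplies the body
{
    return 2 * value;
}

// A class definition; the member function is declared inside the class
// and defined separately below.
class Greeter {
public:
    std::string greet() const;
};

std::string Greeter::greet() const
{
    return "hello";
}
```

In a real project the declarations would live in a header and the definitions in a source file, per the convention described next.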
The convention is to place the declarations for classes, functions, and global objects in a header file (whose name typically ends with .h or .hpp), and their definitions in a source file (whose name typically ends with .cpp, .c, or .C). Any source file that needs to use those entities must #include the header file. Templates have additional complications concerning declarations and definitions. (See Chapter 7 for details.) In this and subsequent chapters, the description of each entity states whether the entity (type, variable, class, function, etc.) has separate definitions and declarations, states when definitions are required, and outlines any other rules pertaining to declarations and definitions.

Some language constructs can look like a declaration or an expression. Such ambiguities are always resolved in favor of declarations. A related rule is that a declaration that is a type specifier followed by a name and empty parentheses is a declaration of a function that takes no arguments, not a declaration of an object with an empty initializer. (See Section 2.6.3 later in this chapter for more information about empty initializers.) Example 2-1 shows some examples of how declarations are interpreted.

Example 2-1.

```cpp
#include <iostream>
#include <ostream>

class T {
public:
    T( )    { std::cout << "T( )\n"; }
    T(int)  { std::cout << "T(int)\n"; }
};

int a, x;

int main( )
{
    T(a);      // Variable named a of type T, not an invocation of the T(int)
               // constructor
    T b( );    // Function named b of no arguments, not a variable named b of
               // type T
    T c(T(x)); // Declaration of a function named c, with one argument of
               // type T
}
```

The last item in Example 2-1 deserves further explanation. The function parameter T(x) could be interpreted as an expression: constructing an instance of T with the argument x. Or it could be interpreted as a declaration of a function parameter of type T named x, with a redundant set of parentheses around the parameter name.
According to the disambiguation rule, it must be a declaration, not an expression. This means that the entire declaration cannot be the declaration of an object named c, whose initializer is the expression T(x). Instead, it must be the declaration of a function named c, whose parameter is of type T, named x. If your intention is to declare an object, not a function, the simplest way to do this is not to use the function-call style of type cast. Instead, use a keyword cast expression, such as static_cast<>. (See Chapter 3 for more information about type casts.) For example:

```cpp
T c(static_cast<T>(x)); // Declares an object named c whose initial value is
                        // x, cast to type T
```

This problem can crop up when you least expect it. For example, suppose you want to construct a vector of integers by reading a series of numbers from the standard input. Your first attempt might be to use an istream_iterator:

```cpp
using namespace std;
vector<int> data(istream_iterator<int>(cin), istream_iterator<int>( ));
```

This declaration actually declares a function named data, which takes two parameters of type istream_iterator<int>. The first parameter is named cin, and the second is nameless. You can force the compiler to interpret the declaration as an object definition by enclosing one or more arguments in parentheses:

```cpp
using namespace std;
vector<int> data((istream_iterator<int>(cin)), (istream_iterator<int>( )));
```

or by using additional objects for the iterators:

```cpp
std::istream_iterator<int> start(std::cin), end;
std::vector<int> data(start, end);
```
Program Linker Hall Sensor on PcDuino With Python

Introduction: Program Linker Hall Sensor on PcDuino With Python

The Linker Hall Sensor Module is a Hall sensor, used as a switch in this configuration. Commonly seen in industrial applications such as the pictured pneumatic cylinder, they are also used in consumer equipment; for example some computer printers use them to detect missing paper and open covers. When high reliability is required, they are used in keyboards.

When there is no magnet near the Linker Hall Sensor, the RX side will continue to output a high level (logic 1); if a magnet is near it, the RX terminal will output a low level (logic 0). So we can use a pcDuino GPIO pin to read the RX level and determine whether there is a magnet near the Hall Sensor. The following is a magnetic switch routine.

Step 1: Part List

1. pcDuino V2 x1
2. Linker Hall Sensor x1
3. Linker LED x1
4. Linker Base x1
5. Linker cable x2
6. Magnet x1

Step 2: Wiring Diagram

Step 3: Test Code

```python
import gpio
from time import sleep

led_pin = "gpio2"
sensor_pin = "gpio4"

def delay(ms):
    sleep(1.0*ms/1000)

def setup():
    gpio.pinMode(led_pin, gpio.OUTPUT)
    gpio.pinMode(sensor_pin, gpio.INPUT)
    print " Linker LED Pin : D2 \n Hall Sensor Pin : D4"

def loop():
    while(1):
        if(gpio.digitalRead(sensor_pin)):
            gpio.digitalWrite(led_pin, gpio.HIGH)
        else:
            gpio.digitalWrite(led_pin, gpio.LOW)

setup()
loop()
```

Step 4: Test Result

1. Place the magnet far from the Hall Sensor: RX outputs a high level, and the Linker LED is on.

2. Move the magnet near the Hall Sensor: RX outputs a low level, the red LED on the Hall Sensor turns on, and the Linker LED is off.

Follow my twitter @pcDuino_NO1
lstat(), lstat64() — Get information about a file or directory

Synopsis:

```c
#include <sys/stat.h>

int lstat( const char* path,
           struct stat* buf );

int lstat64( const char* path,
             struct stat64* buf );
```

Arguments:

- path — The path of the file or directory that you want information about.
- buf — A pointer to a buffer where the function can store the information.

Library:

libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The lstat() and lstat64() functions obtain information about the file or directory referenced in path. This information is placed in the structure located at the address indicated by buf. The lstat64() function is a 64-bit version of lstat().

The results of the lstat() function are the same as the results of stat() when used on a file that isn't a symbolic link. If the file is a symbolic link, lstat() returns information about the symbolic link, while stat() continues to resolve the pathname using the contents of the symbolic link, and returns information about the resulting file.

Examples:

```c
/*
 * Iterate through a list of files, and report
 * for each if it is a symbolic link
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main( int argc, char **argv )
{
    int ecode = 0;
    int n;
    struct stat sbuf;

    for( n = 1; n < argc; ++n ) {
        if( lstat( argv[n], &sbuf ) == -1 ) {
            perror( argv[n] );
            ecode++;
        } else if( S_ISLNK( sbuf.st_mode ) ) {
            printf( "%s is a symbolic link\n", argv[n] );
        } else {
            printf( "%s is not a symbolic link\n", argv[n] );
        }
    }
    return( ecode );
}
```

Classification:

lstat() is POSIX 1003.1; lstat64() is Large-file support.

Last modified: 2013-12-23
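Although this page is from the QNX library reference, lstat() and stat() are POSIX calls, so their differing treatment of symbolic links can be checked on any Unix-like system. The helper below (file names are arbitrary) creates a regular file plus a symlink to it, and reports whether lstat() saw the link itself while stat() followed it to the target:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns 1 when lstat() identifies the link and stat() the target. */
int lstat_sees_link_itself(void)
{
    struct stat sb;

    FILE *f = fopen("target.tmp", "w");  /* create the target file */
    if (f == NULL)
        return 0;
    fclose(f);

    unlink("link.tmp");                  /* clear any leftover link */
    if (symlink("target.tmp", "link.tmp") == -1)
        return 0;

    /* lstat() reports on the link; stat() resolves it to the target. */
    int ok = lstat("link.tmp", &sb) == 0 && S_ISLNK(sb.st_mode)
          && stat("link.tmp", &sb) == 0 && S_ISREG(sb.st_mode);

    unlink("link.tmp");
    unlink("target.tmp");
    return ok;
}
```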
gpio — General Purpose Input/Output

```
gpio* at ath?
gpio* at bcmgpio?   (arm64, armv7)
gpio* at elansc?    (i386)
gpio* at glxpcib?   (i386)
gpio* at gscpcib?   (i386)
gpio* at isagpio?
gpio* at nsclpcsio?
gpio* at omgpio?    (armv7)
gpio* at pcagpio?
gpio* at pcaled?
gpio* at skgpio?    (amd64, i386)
gpio* at sxipio?    (arm64, armv7)
gpio0 at voyager?   (loongson)

#include <sys/types.h>
#include <sys/gpio.h>
#include <sys/ioctl.h>
```

The gpio device attaches to the GPIO controller and provides a uniform programming interface to its pins. Each GPIO controller with an attached gpio device has an associated device file under the /dev directory, e.g. /dev/gpio0. Access from userland is performed through ioctl(2) calls on these devices.

The layout of the GPIO device is defined at securelevel 0, i.e. typically during system boot, and cannot be changed later. GPIO pins can be configured and given a symbolic name and device drivers that use GPIO pins can be attached to the gpio device at securelevel 0. All other pins will not be accessible once the runlevel has been raised.

The following structures and constants are defined in the <sys/gpio.h> header file:

GPIOINFO (struct gpio_info)

```c
struct gpio_info {
        int gpio_npins;                 /* total number of pins available */
};
```

GPIOPINREAD (struct gpio_pin_op)

```c
#define GPIOPINMAXNAME 64

struct gpio_pin_op {
        char gp_name[GPIOPINMAXNAME];   /* pin name */
        int gp_pin;                     /* pin number */
        int gp_value;                   /* value */
};
```

The gp_name or gp_pin field must be set before calling.

GPIOPINWRITE (struct gpio_pin_op)

The value written is GPIO_PIN_LOW (logical 0) or GPIO_PIN_HIGH (logical 1). On return, the gp_value field contains the old pin state.
GPIOPINTOGGLE (struct gpio_pin_op)

GPIOPINSET (struct gpio_pin_set)

```c
#define GPIOPINMAXNAME 64

struct gpio_pin_set {
        char gp_name[GPIOPINMAXNAME];   /* pin name */
        int gp_pin;                     /* pin number */
        int gp_caps;                    /* pin capabilities (ro) */
        int gp_flags;                   /* pin configuration flags */
        char gp_name2[GPIOPINMAXNAME];  /* new name */
};
```

The gp_flags field is a combination of the following flags:

GPIO_PIN_INPUT
GPIO_PIN_OUTPUT
GPIO_PIN_INOUT
GPIO_PIN_OPENDRAIN
GPIO_PIN_PUSHPULL
GPIO_PIN_TRISTATE
GPIO_PIN_PULLUP
GPIO_PIN_PULLDOWN
GPIO_PIN_INVIN
GPIO_PIN_INVOUT

Note that the GPIO controller may not support all of these flags. On return the gp_caps field contains flags that are supported. If no flags are specified, the pin configuration stays unchanged.

Only GPIO pins that have been set using GPIOPINSET will be accessible at securelevels greater than 0.

GPIOPINUNSET (struct gpio_pin_set)

GPIOATTACH (struct gpio_attach)

```c
struct gpio_attach {
        char ga_dvname[16];     /* device name */
        int ga_offset;          /* pin number */
        u_int32_t ga_mask;      /* binary mask */
};
```

GPIODETACH (struct gpio_attach)

Detaches a device that was attached with the GPIOATTACH ioctl(2). The ga_offset and ga_mask fields of the gpio_attach structure are ignored.

The gpio device first appeared in OpenBSD 3.6. The gpio driver was written by Alexander Yurchenko <grange@openbsd.org>. Runtime device attachment was added by Marc Balmer <mbalmer@openbsd.org>. Event capabilities are not supported.
Created on 2009-04-28 14:24 by della, last changed 2010-08-17 00:53 by benjamin.peterson. This issue is now closed.

Is there a way to define an abstract classmethod? The two obvious ways don't seem to work properly.

```
Python 3.0.1+ (r301:69556, Apr 15 2009, 17:25:52)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import abc
>>> class C(metaclass=abc.ABCMeta):
...     @abc.abstractmethod
...     @classmethod
...     def f(cls): print(42)
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in C
  File "/usr/lib/python3.0/abc.py", line 24, in abstractmethod
    funcobj.__isabstractmethod__ = True
AttributeError: 'classmethod' object has no attribute '__isabstractmethod__'
>>> class C(metaclass=abc.ABCMeta):
...     @classmethod
...     @abc.abstractmethod
...     def f(cls): print(42)
...
>>> class D(C): pass
...
>>> D.f()
42
```

Please ask questions like this first on python-list or the c.l.p or gmane mirrors.

The order

```python
@abstractmethod
@classmethod
def ...
```

doesn't work because classmethod objects don't have a __dict__, so setting arbitrary attributes doesn't work, and abstractmethod tries to set the __isabstractmethod__ attribute to True.

The other order:

```python
@classmethod
@abstractmethod
def ...
```

doesn't work, because the abstractmethod decorator sets the function's __isabstractmethod__ attribute to True, but when ABCMeta.__new__ checks the object in the namespace of the class, it won't find it, because the classmethod object won't have an __isabstractmethod__ attribute.

The situation is the same with staticmethod. One possible solution would be adding a descriptor to classmethod (and staticmethod), with the name "__isabstractmethod__", which on __get__ would check its underlying callable for this attribute, and on __set__ would set this attribute on that callable. I think this way both orders should work.

Here is a patch, which adds a descriptor to classmethod and staticmethod.
Pseudocode:

```
__get__(self, inst, owner):
    if getattr(inst.callable, '__isabstractmethod__', False):
        return True
    return False

__set__(self, inst, value):
    inst.callable.__isabstractmethod__ = bool(value)
```

The patch doesn't check that instantiating these methods works at all. If I understand correctly, some tests are needed for the instantiation of classes with abstract static/classmethods. I added them in issue5867a.diff.

Thank you. I'm not an ABC expert but it looks ok. Guido, what do you think?

As you figured out it is not yet supported. I object to making changes to the classmethod implementation. I expect the best thing to do is to add a new @abc.abstractclassmethod decorator defined in pure Python (maybe the definition of abstractproperty provides a hint on how to do this). You may want to define @abc.abstractstaticmethod as well.

I'm attaching a new patch adding the abc.abstractclassmethod and abc.abstractstaticmethod decorators.

The patch looks fine code-wise, but it also needs a doc addition in Doc/library/abc.rst.

I'm attaching a new patch containing also some documentation for the two new decorators. The doc is rather terse, and English is not my first language, so please let me know if some corrections are needed.

Looks good. Applied in r84124. Thanks for the patch.
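The decorators this issue added can be exercised as follows. This is a small example written for this note, not code from the tracker; note that since Python 3.3, stacking @classmethod over @abc.abstractmethod also works, and the two dedicated decorators remain for compatibility:

```python
import abc

class Base(abc.ABC):
    @abc.abstractclassmethod
    def create(cls):
        ...

    @abc.abstractstaticmethod
    def validate(value):
        ...

class Concrete(Base):
    @classmethod
    def create(cls):
        return cls()

    @staticmethod
    def validate(value):
        return bool(value)

# Concrete overrides both abstract methods, so it can be instantiated;
# Base itself still cannot.
obj = Concrete.create()
try:
    Base()
except TypeError as exc:
    print(exc)
```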
Getting started with Visual Studio development for ASP.NET Core RC2 connected to a PostgreSQL database using Entity Framework code first, all deployed to an Ubuntu Linux server*

*While being developed in a Windows environment using Visual Studio

Wow! That's a mouthful of a heading!

Alright, so Microsoft just released ASP.NET Core RC2 a week ago. ASP.NET Core RC2 is a cross platform web framework which allows a traditional C# web developer, who is used to using tools like Visual Studio to develop applications for Windows servers, to now target much cheaper Linux servers. Hurray! That's great. But what about databases? Well, Microsoft has announced they will be releasing MS-SQL Server 2016 on Linux, which is pretty incredible. But it's still going to cost money. PostgreSQL is a free open source SQL database, and there is an open source package (Npgsql) which provides Entity Framework support for PostgreSQL.

Basically what this means is that you can develop with Visual Studio using the traditional ASP.NET MVC/Web API with Entity Framework and controller and view generation goodness you are used to, and now deploy to a free server with a free database. Oh and did I mention, Visual Studio Community 2015 is free as well! It's never been a better time to be a C# developer. And in fact, anyone can now use this toolset, so I expect it will grow in popularity.

Getting started with ASP.NET Core RC2 from a Windows and Visual Studio perspective

This is part 2 of a 2 part blog series for getting started with ASP.NET RC2 with Entity Framework connected to PostgreSQL deployed to an Ubuntu Linux server (in part 1 of this series the Ubuntu Linux server is created, set up and configured). This blog entry will go over getting started on the Windows development experience, and that will include creating a basic app and deploying it to the Linux server.
You will either need to run Windows 7+ in a virtual machine (which is what I do on my MacBook using VirtualBox running Windows 7) or you will need to be running it natively.

Let's get started with downloading and installing Visual Studio Community (it's free). Then install the Microsoft ASP.NET and Web Tools Preview 1 tooling for Microsoft .NET Core 1.0.0.0 RC2.

Creating our example ASP.NET Core RC2 Application in Visual Studio

As soon as you hit OK the app will be created and the packages will need to be restored. This will add folders and stuff that aren't immediately available until all the packages have been restored. This takes about 15-45 seconds or so, although it may be dependent on internet connection speed. The restoration process is indicated by an icon next to the references in the solution explorer.

Add the PostgreSQL Entity Framework data provider Npgsql

In order to connect Entity Framework to PostgreSQL, a package named Npgsql must be added to the project. It's an open source project, check it out. Inside Visual Studio open up the Package Manager Console and run the following command:

```
Install-Package Npgsql.EntityFrameworkCore.PostgreSQL -Pre
```

This command will cause Visual Studio to add the appropriate references to the project.json file and subsequently restore (download) the references for Npgsql. The -Pre is for RC2. The process of this restoration will again be indicated by the references icon in the solution explorer.

Change the database connection string to point to PostgreSQL

Next, the default connection information inside the appsettings.json file needs to be changed. Inside the ConnectionStrings json object change the value for the DefaultConnection property.
Change the default connection from:

```
"Server=(localdb)\\mssqllocaldb;Database=aspnet-POengine-bb847c65-bea9-4d5e-9bd6-627efd36bfc7;Trusted_Connection=True;MultipleActiveResultSets=true"
```

which will target a local MS-SQL Server database. Replace this value with one that will target our PostgreSQL database.

Replace the values for *pgsql-user* and *pgsql-user-pass* with the user name and password combination that was created for the PostgreSQL database in Step 3 of the previous blog post when we were configuring the server. The value *Ubuntu-IP-Address* should also be changed to the appropriate IP address for the Ubuntu server. This was also acquired in the previous blog post.

Create a simple example Model class named 'Item'

Now create a plain old object in the Models folder, and call it 'Item.cs'. This will be our simple model for this example app. To anyone who has done Entity Framework code first, this should look very familiar, as it is the exact same type of model file. Type or copy the code below into Item.cs:

```csharp
namespace unape.Models
{
    public class Item
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public float Price { get; set; }
    }
}
```

Create a database context and add the 'Item' Model to the context

Now we need to create a database context so we can connect our Item object model to the actual database tables. This is done the same way as traditional Entity Framework code first, however we have to add an additional constructor to the database context file that accepts options and passes them down to a base constructor.
Create a new file inside the Data folder, name it 'unapeDbContext.cs', and write this code in it:

```csharp
using Microsoft.EntityFrameworkCore;
using unape.Models;

namespace unape.Data
{
    public class unapeDbContext : DbContext
    {
        public unapeDbContext(DbContextOptions<unapeDbContext> options)
            : base(options)
        {
        }

        public DbSet<Item> Items { get; set; }
    }
}
```

Update the method in Startup.cs to use Npgsql and add the database context

Now we must modify the configuration of the application to use PostgreSQL instead of the default MS-SQL. This is done inside the Startup.cs file's ConfigureServices method: change the line that registers the individual account authentication database context so that it uses PostgreSQL with the connection string we modified earlier. We need to add an additional line right underneath this one to add the database context for the 'unapeDbContext'; this is done by the following line:

```csharp
services.AddDbContext<unapeDbContext>(options =>
    options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")));
```

Use the dotnet CLI for migrations and updating the database

At this point the application is ready for database migrations. In previous versions of Visual Studio the Package Manager Console was used to do Entity Framework database migrations. In ASP.NET Core the dotnet command line interface (CLI) is used. Open 'cmd.exe' and navigate to the directory of the unape project's src folder. (This is done by using the command 'cd' and the directory's name you want to enter... but I expect you know how to do this; if not, Google will help.)

Once in the unape project's src folder, type the command:

```
dotnet ef migrations add firstMig -c unapeDbContext
```

This will create a folder named migrations inside the project, along with an initial migration file and a 'firstMig' file.
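Pulling the two registrations together, the relevant portion of ConfigureServices would look roughly like this. This is a sketch for orientation, not the blog's verbatim code; everything besides the two AddDbContext calls comes from the standard project template:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Template's identity context, switched from UseSqlServer to UseNpgsql:
    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")));

    // Our own context, added right underneath:
    services.AddDbContext<unapeDbContext>(options =>
        options.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")));

    // ... the template's identity and MVC registrations stay as generated ...
}
```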
These migrations will be used to update the database with the following command:

```
dotnet ef database update -c unapeDbContext
```

The migrations will be executed on the PostgreSQL database, which will cause the Item table to be created with the columns corresponding to the Item.cs class's properties. Now we need to update the database for the individual account authentication database context; type the following command:

```
dotnet ef database update -c ApplicationDbContext
```

Now the migrations for the tables associated with user names, passwords and roles will be added to the PostgreSQL database. As you can probably tell, the '-c' flag is for declaring the database context name.

Scaffold the controller and views for the 'Item' Model

At this point we have the database ready to roll, and now we need to add a way to create, read, update, and delete Items. Visual Studio has had a wonderful tool for doing this rapidly for quite a while now; it's called scaffolding. Right click inside of the 'Controllers' folder and select Add -> New Controller. Select "MVC Controller with views, using Entity Framework" and click 'Add'. The Model class should be entered as 'Item' (it will auto populate as you begin to type), and the Data context class should be 'unapeDbContext'. Leave all the defaults selected, and click 'Add'. This will generate five views and a controller with corresponding actions.

Add a link and test run the app

In order to more easily access the scaffolded Item views, let's add a link inside our 'Views/Shared/_Layout.cshtml' file. Inside the navbar unordered list, add an additional list item (the tag helper attributes were mangled in this copy of the post; the standard form for linking to the scaffolded index action is):

```html
<li><a asp-</li>
```

At this point we are ready to test run the app inside Visual Studio. Press F5 to build and launch the application; this will automatically bring up a browser and navigate to the locally running site. Once the app is launched, click the 'Items' link in the navbar toward the top right of the page.
To ensure we have everything set up correctly, create a new item. Everything should work correctly, and the new item will show up on the index page. At this point we are ready to bundle everything up and publish our example app onto our Ubuntu Linux server.

Publish the app to Ubuntu using FTP

Stop the app running in Visual Studio. Right click the project name in the Solution Explorer, and select Publish. We are planning on using the FTP server we set up on the Ubuntu server, but there isn't a publish option for FTP, so what we will do is deploy to the file system and then copy the files over to the FTP server. Select the file system as the publish target, and give it a profile name. The default target location is inside the bin\release\publishoutput\ folder; this is fine. Click Publish. Now we will FTP into our Linux box via Windows Explorer: inside the Windows Explorer address bar, type in ftp://<IP-address-of-Ubuntu>. With another Windows Explorer window, navigate to the unape project's bin\release\publishoutput\ folder and copy the contents to the FTP server.

Run the published ASP.NET RC2 web application on Ubuntu

Once the files have finished copying to the Ubuntu server, hop back onto the terminal and navigate to the publish output folder which was copied over via FTP; the files will be in the Ubuntu user's home directory. Run the following command to start the web application:

```
dotnet unape.dll
```

Once the application launches it will begin listening on localhost port 5000. We previously configured nginx to take external requests on port 80 and route them to localhost port 5000. Now we can open up a web browser and type in 'http://*ip-address-of-Ubuntu*', and bam baby! ASP.NET Core with Entity Framework connected to a PostgreSQL database, all hosted on Ubuntu Linux! How awesome is that!?
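For reference, the nginx configuration mentioned above (it was set up earlier in the series, so it is not shown in this post) is typically a simple reverse proxy along these lines; treat this as a sketch rather than the exact file used:

```
server {
    listen 80;

    location / {
        # Forward external requests on port 80 to the Kestrel
        # server listening on localhost:5000.
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Connection keep-alive;
    }
}
```

After editing the configuration, reload nginx (e.g. 'sudo service nginx reload') for the change to take effect.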
http://totaltechware.blogspot.com/2016/05/aspnet-core-rc2-web-app-with-postgresql.html
Introducing Inject, an open source JavaScript dependency management library for the browser

We're pleased to announce the result of that effort: an open source library called Inject. Inject is a CommonJS-compliant loader that runs in the browser and makes dependency management ridiculously easy. Read on to find out how.

The dependency management problem

Imagine you had three JavaScript modules with simple inter-dependencies: program.js depends on hello.js, which, in turn, depends on name.js. Each developer that wants to use program.js has to dig through its code, figure out its dependencies, and manually include them.

Unfortunately, with a large JavaScript codebase, the dependency graphs are much more complicated, and it can be very difficult to figure out what you need to include for any given module. Worse yet, if a module's dependencies change, you have to update the script tags on every page that imports it. JSON manifests and "wrapper" build scripts help somewhat, but eventually become unmanageable themselves.

The Inject solution

With Inject, adding program.js is a no-brainer: you just load Inject on your page, specify the root path for your modules, and then call run. And that's it! The dependency management downstream from program.js is totally transparent. This is because the modules themselves specify their dependencies using the require keyword: program.js specifies its dependency on hello.js with a require call, and hello.js, in turn, specifies its dependency on name.js the same way.

Inject finds the dependencies automatically and loads them asynchronously. If a developer changes some downstream dependency - for example, changes hello.js to depend on new-name-module.js instead of name.js - your code will keep working, because Inject will automatically find and download the new dependencies on the next page load.
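As a sketch of the pattern, here is how the three modules and the loader's require() behavior could look. The function names (getName, sayHello, main) and the tiny stand-in loader named load() are illustrative assumptions, not Inject's actual API:

```javascript
// name.js -- no dependencies
function nameModule(module, require) {
  module.exports.getName = function () { return "world"; };
}

// hello.js -- depends on name.js via require("name")
function helloModule(module, require) {
  var name = require("name");
  module.exports.sayHello = function () {
    return "hello, " + name.getName();
  };
}

// program.js -- depends on hello.js via require("hello")
function programModule(module, require) {
  var hello = require("hello");
  module.exports.main = function () { return hello.sayHello(); };
}

// A tiny stand-in for the loader: resolves an id, runs the module once,
// and caches its exports. (Inject does this asynchronously over HTTP.)
var registry = { name: nameModule, hello: helloModule, program: programModule };
var cache = {};
function load(id) {
  if (!cache[id]) {
    cache[id] = { exports: {} };
    registry[id](cache[id], load);
  }
  return cache[id].exports;
}

console.log(load("program").main()); // prints "hello, world"
```

On a real page, Inject downloads each file, wraps it so that require and module.exports behave like this, and caches the results (e.g. in localStorage where available).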
Features

Inject is a young library, but it already supports a significant feature list:

- Simple interface
- Modular code that doesn't pollute the global namespace
- CommonJS compliant
- Supports the AMD API
- Supports "bundled" files (multiple files concatenated)
- Supports single-file retrieval during development
- Works cross-domain
- Takes advantage of modern browser features where possible (e.g. postMessage and localStorage)

Modularity and reusability were at the heart of our decision to create Inject. LinkedIn has several dozen front-facing "properties" (pages, sites, products, modules, etc), some with overlapping JavaScript needs and some with unique ones. Before Inject, we would create one "global" JS payload with resources common to all pages, plus a page-specific payload for each property. While this kept the total number of HTTP requests down, it was difficult to manage the dependencies. Moreover, we couldn't load things on demand, which is increasing in importance as we move more logic to the client. Using Inject, we have a much more maintainable solution, and we're excited to share it with the open source community.

Try it out!

Here's everything you need to take Inject for a test drive.

Get involved

Our goal with Inject is to build a dependency management library that presents a very simple interface but can still support the variety of complex situations that we face with a site like LinkedIn. It's a balance we're going to strive to maintain as we continue to grow the project, and we'd love some help! Feel free to fork Inject on GitHub and send us some pull requests. Or better yet, apply to join our Web Development Group, and get involved with projects like Inject from day one!
http://engineering.linkedin.com/comment/115
One interesting aspect of each new version of Microsoft's flagship IDE Visual Studio is how projects are created — that is, what is included and how resources and files are organized. You can learn a lot about the product as well as the underlying platform by examining what is offered. Here's a look at what is included in a new ASP.NET Web Forms site created with Visual Studio 2013.

Starting from scratch

There are a variety of ways to create a new site with Visual Studio 2013, but I will follow the most straightforward path for this article. Figure A shows choosing New Web Site from the File menu, which presents the New Web Site window (Figure B). For the purposes of this article, I choose ASP.NET Web Forms Site from the templates presented in Figure B. Once the selection is made, the disk drive churns while the ASP.NET 4.5 site is created, with the results shown in Figure C.

Figure A: Creating a new website in Visual Studio 2013
Figure B: The various options for creating a new website
Figure C: The ASP.NET Web Forms site created by Visual Studio 2013

A quick review of the source code for Default.aspx displayed in Figure C is revealing if you've worked with Bootstrap, as CSS classes like jumbotron are clearly from that framework; this should not be a surprise, as Microsoft has been touting Bootstrap integration for some time. In addition to Bootstrap, it uses a number of other standard or popular JavaScript frameworks such as jQuery and Modernizr. Now that the project is created, let's take a closer look at what is included.

What are all of these files?

A quick review of the project's files shows the Bootstrap CSS files located in the Content directory. It includes both the full (bootstrap.css) and minified (bootstrap.min.css) versions; the associated JavaScript files are in the Scripts directory. It is worth noting that it utilizes Bootstrap 3.0 — it was a close call, as the latest version of Bootstrap became available not long before Visual Studio 2013 was released.
The Bootstrap files are only a fraction of the files included in the project. The base directory's files and subdirectories are shown in the right-hand portion of Figure C. The following list provides more details on the folders.

- Account: This directory contains the web pages used to provide authorization: the security and logon files. There are files for user registration (Register.aspx), managing an account (Manage.aspx), logging in (Logon.aspx), and more.
- App_Code: This is where shared source code (things like shared classes or business objects) should be created. By default, there are classes for working with Friendly URLs (routes) and ASP.NET Identity features, as well as others.
- App_Data: This folder contains application data files like XML or other data stores. It is empty when a project is created by Visual Studio 2013.
- bin: This contains all of the compiled assemblies referenced by the application, and the application itself. When the project is created, it is populated by all of the DLLs used by the application. This includes ASP.NET Identity, Entity Framework, OWIN, WebGrease, and many more (Figure D).
- Content: This is where the Bootstrap and other CSS files are stored. You will most likely need to create your own CSS for new features or to override default settings, and that CSS should be placed in this directory.
- fonts: Any special fonts to be used by the application are here. By default, the Bootstrap offerings are included.
- Scripts: This directory contains all of the JavaScript used in the application. Figure E shows what is included when the site is created. The base directory has the necessary files for Bootstrap, jQuery, Modernizr, and the respond libraries. In addition, the WebForms subdirectory has JavaScript files for its features. You should place all of your custom JavaScript code in the base Scripts directory.

Figure D: The contents of the bin subdirectory for a newly created project
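The Friendly URLs classes mentioned under App_Code are wired up through a RegisterRoutes call; the template's route configuration looks roughly like the sketch below (check the generated file for the exact contents):

```csharp
// App_Code/RouteConfig.cs -- approximately what the VS 2013 template
// generates; shown here as a sketch, not a verbatim copy.
using System.Web.Routing;
using Microsoft.AspNet.FriendlyUrls;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Lets About.aspx be reached as /About, with a permanent
        // redirect away from the .aspx form of the URL.
        var settings = new FriendlyUrlSettings();
        settings.AutoRedirectMode = RedirectMode.Permanent;
        routes.EnableFriendlyUrls(settings);
    }
}
```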
Figure E: The contents of the Scripts subdirectory for the website

As for the files on the site, there are web pages for content (About.aspx, Contact.aspx, and Default.aspx for the home page) and the standard ASP.NET application file (Global.asax). Figure F shows the site loaded (the default page), which clearly demonstrates a standard Bootstrap layout. The following list provides more details on the remaining files in the base project directory.

- Bundle.config: This file is used to bundle resource files (like CSS and JavaScript) to reduce load time. This configuration file allows you to specify how resource types are bundled.
- favicon.ico: A standard website feature that allows you to associate an icon with the site.
- packages.config: This file is used to track installed packages and their respective versions.
- Site.master: The master page for the site.
- Site.Mobile.master: The mobile master page for the site.
- ViewSwitcher.ascx: This control can be used on pages to allow users to switch between the desktop and mobile versions of a page. The control can be placed on a page, or in the master page for all pages, and a user viewing on a mobile device will have links for switching between the mobile and desktop versions.
- Web.config: The configuration file for the site. A review of this file shows references to Entity Framework, WebGrease, and ASP.NET Identity included by default.

Figure F: The default page of the standard ASP.NET Web Forms site loaded in Chrome

What does it mean?

A review of what is included with our project gives us a glimpse of the new .NET technologies, as well as Microsoft's vision of web development. It may not be a surprise that new or enhanced features like the ASP.NET membership system (ASP.NET Identity), OWIN, and the Entity Framework are included by default, but I found it interesting to see DLL files for ANTLR and WebGrease, and the Web.Optimization namespace, as well.
It seems Microsoft is serious about improving the performance of its platform and the user experience, and the fact that the company is embracing web standards and frameworks is wonderful. While this article only scratches the surface of ASP.NET web development with an overview of a basic website, I hope it provides a starting point for asking lots of questions and learning more.
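As a concrete example of the bundling support described above, the default Bundle.config is a small XML file along these lines (the exact include list may differ):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<bundles version="1.0">
  <!-- Serves ~/Content/css as one combined, minified stylesheet. -->
  <styleBundle path="~/Content/css">
    <include path="~/Content/bootstrap.css" />
    <include path="~/Content/Site.css" />
  </styleBundle>
</bundles>
```

Pages then reference the bundle's virtual path ("~/Content/css") instead of the individual files, cutting the number of HTTP requests.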
http://www.techrepublic.com/blog/software-engineer/create-an-aspnet-web-forms-website-with-visual-studio-2013/