[SOLVED] QT and boost : cannot find -llibboost_filesystem...
- zarachbaal last edited by zarachbaal
Hi,
I have to work with the following (and I cannot use another version of Qt):
- Qt Creator 2.4.1 and Qt 4.8.4 (with the MinGW compiler)
- Boost 1.52
To build boost :
- I first added "C:/Qt/qtcreator-2.4.1/mingw/bin" to my PATH variable
- then opened a command prompt
- went to "C:/boost_1_52"
- typed "bootstrap.bat mingw"
- and then "b2 toolset=gcc build-type=complete stage" (this took a long time (about 2h))
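Putting those steps together, the build session would look roughly like this in a Windows command prompt (paths are the ones from this thread; adjust them for your installation):

```shell
:: Sketch of the Boost build steps described above (not a verbatim log)
set PATH=C:\Qt\qtcreator-2.4.1\mingw\bin;%PATH%
cd C:\boost_1_52
bootstrap.bat mingw
b2 toolset=gcc build-type=complete stage
```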
I would like to use Qt Creator with the Boost library, but I have run into difficulties.
My project is very simple; it's only basic Qt code:

    QCoreApplication a(argc, argv);
    return a.exec();
I just added some includes:

    #include <boost/thread.hpp>
    #include <boost/property_tree/json_parser.hpp>
    #include <boost/asio.hpp>
    #include <boost/thread.hpp>
In my .pro file I have added:

    INCLUDEPATH += C:/boost_1_52
    LIBS += -L$$quote(C:/boost_1_52/stage/lib) \
        -llibboost_filesystem-mgw48-mt-d-1_52 \
        -llibboost_system-mgw48-mt-d-1_52
When I run qmake and then try to build my project, it fails with the error:

    cannot find -llibboost_filesystem-mgw48-mt-d-1_52
    collect2: ld returned 1 exit status
I tried replacing "-llibboost_filesystem..." with "-lboost_filesystem..." but I get the same error.
I checked, and those files are in C:/boost_1_52/stage/lib.
(Actually I have 3 files named "libboost_filesystem-mgw48-mt-d-1_52": one .a, one .dll and one .dll.a. It's the same for "libboost_system-mgw48-mt-d-1_52".)
If I remove the "-llibboost_filesystem..." and "-llibboost_system..." I get multiple errors like:

    undefined reference to `WSAStartup@8'
    undefined reference to `boost::system::generic_category()'
Thank you in advance for your help.
On a general note, I would always discourage the simultaneous usage of boost and Qt. It makes the code inconsistent, boost in general is more error-prone than the corresponding Qt classes and it is duplicating functionality. Qt is an application development framework, not a GUI library.
While it is technically possible, it is really awkward in design. Many beginners make that mistake.
Unfortunately I am working on a project that was started by someone else, so I have no choice but to use Qt and Boost.
I forgot to say that when I built Boost, it reported:

    failed updating 32 targets
    skipped 64 targets
    updated 4599 targets

I do not know if this is normal or not.
Hi zarachbaal,
for your includes

    #include <boost/thread.hpp>
    #include <boost/property_tree/json_parser.hpp>
    #include <boost/asio.hpp>
    #include <boost/thread.hpp>
you don't need boost_filesystem. It's Asio that generates these errors:

    undefined reference to `WSAStartup@8'
    undefined reference to `boost::system::generic_category()'
Asio needs to be linked against two additional libraries. Add them to your LIBS:

1.) boost_system:

    -LC:/path/to/boost/libs
    -llibboost_system-mgw48-mt-1_52

2.) On Windows, the Winsock library WS2_32.lib:

    -LC:/path/to/winsock2/lib
    -lWS2_32
This is tested with Boost 1.57 and MinGW 4.9 on my Win7 system.
Hope it helps.
The errors about WSAStartup@8, WSACleanup@0... disappeared, that's a start!
But I still have a bunch of undefined reference errors.
A lot have "thread" in their name, like:

    undefined reference to `_imp___ZN5boost6threadC1Ev'
    undefined reference to `_imp___ZN5boost6thread4joinEv'
So I guess I also have to add the 'libboost_thread-mgw48-mt-1_52' library?
But I only have '.a' files in 'stage/lib' (no '.dll' or '.dll.a' files), and if I add '-llibboost_thread-mgw48-mt-1_52' it says it cannot find the library.
Maybe the lack of '.dll' and '.dll.a' files is due to the 'failed updating 32 targets' when I built Boost?
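If the shared libraries really were among the failed targets, one option might be to rebuild just the libraries you need and explicitly ask for both variants. Something like the following b2 invocation could do that (the library selection and options here are an assumption, not taken from this thread):

```shell
:: Rebuild only the needed Boost libraries, in both shared and static variants
b2 toolset=gcc link=shared,static --with-system --with-thread --with-filesystem stage
```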
Also, I still have some:

    undefined reference to `boost::system::generic_category()'
    undefined reference to `boost::system::system_category()'
I forgot to say that I had to reorder the header includes to get a build:

    #include <boost/asio.hpp>
    #include <boost/property_tree/json_parser.hpp>
    #include <boost/thread.hpp>
- cybercatalyst last edited by
I can only suggest you talk to the person who introduced Boost (if you have to) and see whether you can get rid of it, for the obvious reasons. You will save yourself a lot of unnecessary work. Before trying that, I wouldn't investigate too much; this is lost time.
Other than that, I am out here.
@cybercatalyst
I agree, but this won't solve the problem.
- cybercatalyst last edited by
It does solve the problem, because the errors you see are linker errors caused by Boost. If you just stick to the Qt classes as intended, you will not have those.
@zarachbaal said:
    undefined reference to `_imp___ZN5boost6threadC1Ev'
Hi,
You're not linking to the boost_thread library.
You're missing something like:

    LIBS += -lboost_thread-mgw48-mt-1_52

or similar.
In fact I'm using the EyeTribe eye tracking device and the SDK uses the boost library.
So I have no choice but to use it.
No offense, but in that case you should contact their support instead of asking here. I hope you understand; this is not something the community could/should help you with, as the source of the problems clearly is the usage of Boost.
(In the sense of making them aware of their fault, you should request that they fix it.)
Using Boost and Qt together is not uncommon. There are indeed some libraries in Boost that have no equivalent replacement in Qt. Sure, this seems not to be a Qt issue, but if you do not want to help, just leave it. I, for one, as part of the community, have saved a lot of time by reading posts like this. And by the way, the valuable answers have never been something like "don't do that" or "ask someone else".
I added "-lboost_thread-mgw48-mt-1_52" at the end of my "LIBS += ..." line.
It finds the library, but I still have those errors...
I also changed the order of my includes as sneubert suggested, but that did not help.
I will contact them about this, but I have to make it work in Qt by next week...
Using boost and qt together is not uncommon.
Yes, you are right! And it's the source of a whole bunch of unnecessary issues. That's why I find it important to state that it's the wrong way to go. The fact that Boost and Qt are being used side-by-side comes from the misconception that people should somehow avoid Qt in lower-level, non-GUI code because Qt is GUI-only. This is very bad, and it's important to remember that Qt is an application framework.
This is the only reason I am writing this, and if it makes people rethink, then it has been very useful imo.
Did you check that the library name is correct? I based it on the ones you're already using, so I might have missed something.
Yes, I checked and the library is there; I even copy/pasted the name to be sure.
Qt does find the library: if I enter a wrong library name it gives an error (error: cannot find "false_library_name").
As I said before, I only have the "libboost_thread-mgw48-mt-1_52.a" file, whereas I had '.a', '.dll' and '.dll.a' files for "libboost_system...".
Maybe it has something to do with that?
You can try adding:

    DEFINES += BOOST_THREAD_USE_LIB
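Putting the suggestions from this thread together, the .pro additions might end up looking something like this (library names and paths are the ones used above; this is a sketch, not a guaranteed working configuration):

```
# Sketch of the combined .pro settings suggested in this thread
INCLUDEPATH += C:/boost_1_52
DEFINES += BOOST_THREAD_USE_LIB    # link Boost.Thread statically (only .a available)
LIBS += -L$$quote(C:/boost_1_52/stage/lib) \
    -llibboost_system-mgw48-mt-1_52 \
    -llibboost_thread-mgw48-mt-1_52 \
    -lWS2_32
```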
- zarachbaal last edited by zarachbaal
I checked my Boost lib dir and I do have a libboost_thread-mgw... .dll and .lib, so maybe you should investigate the build of your Boost libraries.
I downloaded and built Boost 1.58.
This time I do have '.a', '.dll' and '.dll.a' files for libboost_thread...
But it does not change anything.
I don't recall doing anything in particular, but the errors about 'thread' are not the same anymore.
They were like:

    undefined reference to `_imp___ZN5boost6threadC1Ev'
    undefined reference to `_imp___ZN5boost6thread4joinEv'

And now:

    undefined reference to `boost::thread::thread()'
    undefined reference to `boost::thread::joinable() const'
I just noticed that I have another error:

    cc1plus.exe:-1: error: note: initialized from here
    file not found: cc1plus.exe
cc1plus.exe is the C++ compiler invoked by gcc.
Do you have another MinGW installation on your system? Maybe you have some references in your local or global PATH environment variable pointing to the wrong MinGW bin dir?
I have MinGW installed in C:/MinGW
And also the one that comes with Qt Creator.
In my PATH I have set "C:\Qt\qtcreator-2.4.1\mingw\bin"
Never mind, this error appeared because I changed which MinGW to use in the project options.
If I select the MinGW that comes with Qt Creator, this error disappears.
I may have an idea why it's not working.
I had not noticed, but I have Strawberry Perl installed, and it ships with its own MinGW.
My PATH included "C:\Strawberry\c\bin", so I built Boost using that MinGW (v4.8.1), while the MinGW that comes with Qt Creator is v4.4.0.
I removed Strawberry from the PATH.
I am currently rebuilding Boost with the correct MinGW; I will update tomorrow.
I'm curious about it.
Never modify PATH when you are developing (it should also be avoided as much as possible the rest of the time); it's in the same category as developing as root on Linux ;)
More seriously, having MinGW or the Qt bin path in your PATH opens the door to a world of problems, since you may think you are using one version of a library while in fact using another. Also, MinGW isn't always compatible between two versions of its compiler. That's probably why you had problems in the first place, since you were using two different versions.
In any case, one rule you should apply on Windows: use the same compiler for all your code and dependencies. It's not always possible, but it will greatly simplify your life.
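A quick way to see which compiler actually gets picked up from PATH is the following (Windows commands; with two MinGW installs on PATH you would see which one wins):

```shell
:: Show which g++ the shell finds first on PATH, and its version
where g++
g++ --version
```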
That was it, it works fine now!
@SGaist: I did not notice at first that I was using two versions of MinGW.
When I installed Strawberry some weeks ago, it added its own path to the PATH environment variable.
So I thought I was compiling Boost with the MinGW from Qt Creator; I did not pay enough attention to the 'mgw48' in the libraries' names, which should have been 'mgw44'.
How do I make this subject "solved" ?
Windows can be tricky for that.
If you don't have the option in the Topic Tools menu, just edit the thread title and prepend [solved].
OK, thank you all for your help :)
Revised: December 2, 1998

PREFACE

This book is based on courses MA381 and EC3080, taught at Trinity College Dublin since 1992. The book is not intended as a substitute for students' own lecture notes. In particular, many examples and diagrams are omitted and some material may be presented in a different sequence from year to year. The present version lacks supporting materials in Mathematica or Maple, such as are provided with competing works like ?. Comments on content and presentation in the present draft are welcome for the benefit of future generations of students.

In recent years, mathematics graduates have been increasingly expected to have additional skills in practical subjects such as economics and finance, while economics graduates have been expected to have an increasingly strong grounding in mathematics. The increasing need for those working in economics and finance to have a strong grounding in mathematics has been highlighted by such layman's guides as ?, ? (adapted from ?) and ?. In the light of these trends, the present book is aimed at advanced undergraduate students of either mathematics or economics who wish to branch out into the other subject.

Before starting to work through this book, mathematics students should think about the nature, subject matter and scientific methodology of economics, while economics students should think about the nature, subject matter and scientific methodology of mathematics. The following sections briefly address these questions from the perspective of the outsider.

An electronic version of this book (in LaTeX) is available on the World Wide Web, although it may not always be the current version.

What Is Economics?

This section will consist of a brief verbal introduction to economics for mathematicians and an outline of the course.

1. Basic microeconomics is about the allocation of wealth or expenditure among different physical goods. This gives us relative prices. What do consumers do? They maximise 'utility' given a budget constraint, based on prices and income. What do firms do? They maximise profits, given technological constraints (and input and output prices). Microeconomics is ultimately the theory of the determination of prices by the interaction of all these decisions: all agents simultaneously maximise their objective functions subject to market clearing conditions.
2. Basic finance is about the allocation of expenditure across two or more time periods. This gives us the term structure of interest rates.
3. The next step is the allocation of expenditure across (a finite number or a continuum of) states of nature. This gives us rates of return on risky assets, which are random variables.

Then we can try to combine 2 and 3. Finally we can try to combine 1, 2 and 3. Thus finance is just a subset of microeconomics.

What Is Mathematics?

This section will have all the stuff about logic and proof and so on moved into it. There is a book on proofs by Solow which should be referred to here.

NOTATION

Throughout the book, x etc. will denote points of R^N for N > 1 and x etc. will denote points of R or of an arbitrary vector or metric space X. X will generally denote a matrix. R^N_+ ≡ {x ∈ R^N : x_i ≥ 0, i = 1, ..., N} is used to denote the non-negative orthant of R^N, and R^N_++ ≡ {x ∈ R^N : x_i > 0, i = 1, ..., N} the positive orthant. ' is the symbol which will be used to denote the transpose of a vector or a matrix. Readers should be familiar with the symbols ∀ and ∃ and with the expressions 'such that' and 'subject to', and also with their meaning and use, in particular with the importance of presenting the parts of a definition in the correct order and with the process of proving a theorem by arguing from the assumptions to the conclusions. Proof by contradiction and proof by contrapositive are also assumed.
Part I

MATHEMATICS
for example when finding equilibrium price and quantity given supply and demand functions.] 1. the point (Q.1 Introduction [To be written.. • We will usually have many relationships between many economic variables defining equilibrium.CHAPTER 1. The first approach to simultaneous equations is the equation counting approach: Revised: December 2. 1998 .2 Systems of Linear Equations and Matrices Why are we interested in solving simultaneous equations? We often have to find a point which satisfies more than one equation simultaneously. • To be an equilibrium. LINEAR ALGEBRA 3 Chapter 1 LINEAR ALGEBRA 1.
1 or more points • a more precise theory is needed There are three types of elementary row operations which can be performed on a system of simultaneous equations without changing the solution(s): 1. in both the generic and linear cases: • two curves in the coordinate plane can intersect in 0. Interchange two equations Revised: December 2. unique solution: x2 + y 2 = 0 ⇒ x = 0. y = 0 – same number of equations and unknowns but no solution (dependent equations): x+y = 1 x+y = 2 – more equations than unknowns.4 1. y=1 Now consider the geometric representation of the simultaneous equation problem. 1998 ..g. 1 or more points • two surfaces in 3D coordinate space typically intersect in a curve • three surfaces in 3D coordinate space can intersect in 0. – fewer equations than unknowns. unique solution: x = y x+y = 2 x − 2y + 1 = 0 ⇒ x = 1. Add or subtract a multiple of one equation to or from another equation 2. e.2.
LINEAR ALGEBRA 5 Note that each of these operations is reversible (invertible). We end up with only one variable in the last equation. 1998 (1. 2. Our strategy. and so. roughly equating to Gaussian elimination involves using elementary row operations to perform the following steps: 1.1) ) . 4.2. 3. which is easily solved. Then we can substitute this solution in the second last equation and solve for the second last variable.CHAPTER 1.2. Check your solution!! Now..3) (1.4) (1.2. .5) 17 11 − z 6 6 . • Example: x + 2y + 3z = 6 4x + 5y + 6z = 15 7x + 8y + 10z = 25 • Solve one equation (1. Revised: December 2. 1998 (1. SYSTEMS OF LINEAR EQUATIONS AND MATRICES SIMULTANEOUS LINEAR EQUATIONS (3 × 3 EXAMPLE) • Consider the general 3D picture .6 1.2.2.2.
such as those considered above.e.) • Now the rules are – Working column by column from left to right.CHAPTER 1.4 Matrix Arithmetic • Two n × m matrices can be added and subtracted element by element. 1998 . 1.— a rectangular array of numbers. • We use the concept of the elementary matrix to summarise the elementary row operations carried out in solving the original equations: (Go through the whole solution step by step again.. • Or we can reorder the steps to give the Gaussian elimination method: column by column everywhere. • The steps taken to solve simultaneous linear equations involve only the coefficients so we can use the following shorthand to represent the system of equations used in our example: This is called a matrix.3 Matrix Operations We motivate the need for matrix algebra by using it as a shorthand for writing systems of linear equations. change the right of diagonal elements to 0 and the diagonal elements to 1 – Read off the solution from the last column. • There are three notations for the general 3×3 system of simultaneous linear equations: 1. i. LINEAR ALGEBRA 7 1. change all the below diagonal elements of the matrix to zeroes – Working row by row from bottom to top.
Subtraction is neither. MATRIX ARITHMETIC. Note that multiplication is associative but not commutative.4. Other binary matrix operations are addition and subtraction. Addition is associative and commutative. Revised: December 2. Two matrices can only be multiplied if the number of columns (i. • The scalar product of two vectors in n is the matrix product of one written as a row vector (1×n matrix) and the other written as a column vector (n×1 matrix). their product is the sum of the products of corresponding elements.. the row lengths) in the first equals the number of rows (i.’ In that case.e. So we have C = AB if and only if cij = k = 1n aik bkj . Matrices can also be multiplied by scalars.e.8 1. A row and column can only be multiplied if they are the same ‘length. • This is independent of which is written as a row and which is written as a column. 1998 . the column lengths) in the second.
we can interpret matrices in terms of linear transformations. which maps n−dimensional vectors to m−dimensional vectors. as it maps both x1 and x2 to the same image. i The additive and multiplicative identity matrices are respectively 0 and In ≡ δj . 1998 .CHAPTER 1. A matrix has an inverse if and only the corresponding linear transformation is an invertible function: • Suppose Ax = b0 does not have a unique solution. • The product of an m × n matrix and an n × 1 matrix (vector) is an m × 1 matrix (vector). A. Revised: December 2. • The product of an m × n matrix and an n × p matrix is an m × p matrix.. • Then whenever x is a solution of Ax = b: A (x + x1 − x2 ) = Ax + Ax1 − Ax2 = b + b0 − b0 = b. x1 and x2 (x1 = x2 ): Ax1 = b0 Ax2 = b0 This is the same thing as saying that the linear transformation TA is not injective. −A and A−1 are the corresponding inverse. Finally. Say it has two distinct solutions. an n×n square matrix defines a linear transformation mapping n−dimensional vectors to n−dimensional vectors. defines a function. known as a linear transformation. LINEAR ALGEBRA 9 We now move on to unary operations. different. • In particular. so x + x1 − x2 is another. solution to Ax = b. • So every m × n matrix. TA : n → m : x → Ax. Only non-singular matrices have multiplicative inverses.
sometimes denoted A or At . We can have AB = 0 even if A = 0 and B = 0.5 determinants Definition 1. • If A is not invertible.4.10 1.4. triangular and scalar matrices 1 (1. then there will be multiple solutions for some values of b and no solutions for other values of b.4 partitioned matrices Definition 1.3) And we can use Gaussian elimination in turn to solve for each of the columns of the inverse.A7?).3 orthogonal1 matrix A = A−1 .4.4. So far. say E6 E5 E4 E3 E2 E1 (A b) = (I x) . We applied the method to scalar equations (in x.4. 2.4. both using elementary row operations. We then applied it to the augmented matrix (A b) which was reduced to the augmented matrix (I x). or to solve for the whole thing at once. The transpose is A . Lots of properties of inverses are listed in MJH’s notes (p.4. 1.4.4.1) Picking out the first 3 columns on each side: E6 E5 E4 E3 E2 E1 A = I. Definition 1. (1.4. 3. skewsymmetric if A = −A. We define A−1 ≡ E6 E5 E4 E3 E2 E1 . MATRIX ARITHMETIC • So uniqueness of solution is determined by invertibility of the coefficient matrix A independent of the right hand side vector b. Each step above (about six of them depending on how things simplify) amounted to premultiplying the augmented matrix by an elementary matrix. (1. A matrix is symmetric if it is its own transpose. y and z).6 diagonal.1 orthogonal rows/columns Definition 1.2 idempotent matrix A2 = A Definition 1. we have seen two notations for solving a system of simultaneous linear equations. 1998 . Revised: December 2. −1 Note that A = (A−1 ) .2) This is what ? calls something that it seems more natural to call an orthonormal matrix. Now we introduce a third notation. Lots of strange things can happen in matrix arithmetic. Definition 1.
5. . .u ≡ u . x2 . The Cartesian product of n sets is just the set of ordered n-tuples where the ith component of each n-tuple is an element of the ith set. while a scalar is a quantity that has magnitude only. . There are lots of interesting properties of the dot product (MJH’s theorem 2). we also have the notion of a dot product or scalar product: u. xn Look at pictures of points in 2 and 3 and think about extensions to n .5. Revised: December 2. by real numbers) are defined and satisfy the following axioms: 1. Definition 1.5.5. .1 A vector is just an n × 1 matrix. such as the complex numbers.5. The distance between two vectors is just u − v . (v − u) = v 2 + u 2 −2v.CHAPTER 1.2 A real (or Euclidean) vector space is a set (of vectors) in which addition and scalar multiplication (i. matrix spaces. . .3) Two vectors are orthogonal if and only if the angle between them is zero.u = v 2 + u 2 −2 v u cos θ (1. A unit vector is defined in the obvious way .2) (1. unit norm. The ordered n-tuple (x1 .1 There are vector spaces over other fields.v ≡ u v The Euclidean norm of u is √ u. xn ) is identified with the n × 1 column vector x1 x2 . We can calculate the angle between two vectors using a geometric proof based on the cosine rule. 1998 . Another geometric interpretation is to say that a vector is an entity which has both magnitude and direction. v−u 2 = (v − u) . . LINEAR ALGEBRA 11 1. copy axioms from simms 131 notes p.1) (1. Other examples are function spaces. . On some vector spaces.e.5 Vectors and Vector Spaces Definition 1. .
7. Proof of the next result requires stuff that has not yet been covered. If a basis has n elements then any set of more than n elements is linearly dependent and any set of less than n elements doesn’t span. plus the standard basis. . 1998 . Any two non-collinear vectors in 2 form a basis. . they are linearly dependent. Otherwise. solution space. A linearly independent spanning set is a basis for the subspace which it generates. then the vectors must be linearly dependent. .1 The vectors x1 . xr ∈ and only if r i=1 n are linearly independent if αi xi = 0 ⇒ αi = 0∀i.2 Orthogonal complement Decomposition into subspace and its orthogonal complement. then they must be linearly independent.6.1 The dimension of a vector space is the (unique) number of vectors in a basis. LINEAR INDEPENDENCE A subspace is a subset of a vector space which is closed under addition and scalar multiplication. For example.7 Bases and Dimension A basis for a vector space is a set of vectors which are linearly independent and which span or generate the entire space.6 Linear Independence Definition 1. . consider row space. Give examples of each. 1. x3 . Or something like that. If r > n. If the vectors are orthonormal. Revised: December 2. Definition 1. The dimension of the vector space {0} is zero. column space. Definition 1. orthogonal complement.7.6. 1. Consider the standard bases in 2 and n . x2 .12 1.
Similarly. then the equations Ax = 0 and Bx = 0 have the same solution space. then the solution space does not contain a vector in which the corresponding entries are nonzero and all other entries are zero. 1998 m n generated by generated . Theorem 1. If a subset of columns of A are linearly dependent. In fact. Proof The idea of the proof is that performing elementary row operations on a matrix does not change either the row rank or the column rank of the matrix. Using a procedure similar to Gaussian elimination. The column rank of a matrix is the dimension of its column space.1 The row space and the column space of any matrix have the same dimension.8 Rank Definition 1. Revised: December 2. it is clear that the row rank and column rank of such a matrix are equal to each other and to the dimension of the identity matrix in the top left corner.8. then the solution space does contain a vector in which the corresponding entries are nonzero and all other entries are zero. Q. They clearly do change the column space of a matrix. every matrix can be reduced to a matrix in reduced row echelon form (a partitioned matrix with an identity matrix in the top left corner. LINEAR ALGEBRA 13 1. If A and B are row equivalent matrices. elementary row operations do not even change the row space of the matrix. By inspection. The column space of an m × n matrix A is the vector subspace of by the n columns of A.E.8. It follows that the dimension of the column space is the same for both matrices.1 The row space of an m × n matrix A is the vector subspace of the m rows of A. and zeroes in the bottom left and bottom right corner). if a subset of columns of A are linearly independent. but not the column rank as we shall now see. anything in the top right corner.D.CHAPTER 1. The first result implies that the corresponding columns or B are also linearly dependent. The second result implies that the corresponding columns of B are also linearly independent. 
The row rank of a matrix is the dimension of its row space.
1. and λ is the matrix with the corresponding eigenvalues along its leading diagonal. But eigenvectors are different. Revised: December 2. P−1 AP and A are said to be similar matrices.9 Eigenvalues and Eigenvectors Definition 1.9.14 Definition 1. Often it is useful to specify unit eigenvectors. Previously.3 solution space. System is consistent iff rhs is in column space of A and there is a solution. Such a solution is called a particular solution.2 dimension of row space + dimension of null space = number of columns The solution space of the system means the solution space of the homogenous equation Ax = 0.1 eigenvalues and eigenvectors and λ-eigenspaces Compute eigenvalues using det (A − λI) = 0. Prove using complex conjugate argument. So some matrices with real entries can have complex eigenvalues. then AP = Pλ so P−1 AP = λ = P AP as P is an orthogonal matrix. 1998 . Easy to show this. A general solution is obtained by adding to some particular solution a generic element of the solution space. Two similar matrices share lots of properties: determinants and eigenvalues in particular.8. Given an eigenvalue.9. we can solve any system by describing the solution space. In fact.2 rank 1.8.8. solving a system of linear equations was something we only did with non-singular square systems. the corresponding eigenvector is the solution to a singular matrix equation. So we can diagonalize a symmetric matrix in the following sense: If the columns of P are orthonormal eigenvectors of A. null space or kernel Theorem 1. all we need to be able to diagonalise in this way is for A to have n linearly independent eigenvectors. Eigenvectors of a real symmetric matrix corresponding to different eigenvalues are orthogonal (orthonormal if we normalise them). Real symmetric matrix has real eigenvalues. EIGENVALUES AND EIGENVECTORS Definition 1. Now. so one free parameter (at least). The non-homogenous equation Ax = b may or may not have solutions.
CHAPTER 1. In particular.12 Definite Matrices Definition 1.4) wi ri ] ≥ 0 ˜ i=1 a variance-covariance matrix must be real. ˜ i=1 j=1 wj rj ˜ (1.3) (1. Since vij = Cov [˜i . but this is not essential and sometimes looking at the definiteness of non-symmetric matrices is relevant.12.12. the definiteness of a symmetric matrix can be determined by checking the signs of its eigenvalues. rj ] = Cov [˜j . x = 0 positive semi-definite ⇐⇒ x Ax ≥ 0 ∀x ∈ n negative definite ⇐⇒ x Ax < 0 ∀x ∈ n .1) w Vw = i=1 j=1 wi wj Cov [˜i .11 Symmetric Matrices Symmetric matrices have a number of special properties 1. then A is positive/negative (semi-)definite if and only if P−1 AP is. LINEAR ALGEBRA 15 1. The commonest use of positive definite matrices is as the variance-covariance matrices of random variables. Other checks involve looking at the signs of the elements on the leading diagonal. Revised: December 2.2) = Cov = Var[ wi ri . rj ] r ˜ N N N (1.12. If P is an invertible n × n square matrix and A is any n × n square matrix. 1998 . Definite matrices are non-singular and singular matrices can not be definite. symmetric and positive semi-definite.12. ri ] r ˜ r ˜ and N N (1. x = 0 negative semi-definite ⇐⇒ x Ax ≤ 0 ∀x ∈ n Some texts may require that the matrix also be symmetric.10 Quadratic Forms A quadratic form is 1.12.1 An n × n square matrix A is said to be positive definite ⇐⇒ x Ax > 0 ∀x ∈ n .
4. Semi-definite matrices which are not definite have a zero eigenvalue and therefore are singular.2. it will be seen that the definiteness of a matrix is also an essential idea in the theory of convex functions. Revised: December 2.16 1.12. We will also need later the fact that the inverse of a positive (negative) definite matrix (in particular. 1998 . DEFINITE MATRICES In Theorem 3. of a variance-covariance matrix) is positive (negative) definite.
x) < }. X. x) ∀x.] 2. but no more. d(x. 1998 . y) ≥ d(x. z ∈ X. • An open ball is a subset of a metric space. VECTOR CALCULUS 17 Chapter 2 VECTOR CALCULUS 2.1 Introduction [To be written. i. • A metric space is a non-empty set X equipped with a metric. ∞) such that 1. y) = 0 ⇐⇒ x = y. Revised: December 2. The triangular inequality: d(x. 3. • A subset A of a metric space is open ⇐⇒ ∀x ∈ A. z) + d(z.CHAPTER 2. 2. y.2 Basic Topology The aim of this section is to provide sufficient introduction to topology to motivate the definitions of continuity of functions and correspondences in the next section.e. y) = d(y. y) ∀x. ∃ > 0 such that B (x) ⊆ A. a function d : X × X → [0. of the form B (x) = {y ∈ X : d(y. y ∈ X. d(x.
1 Let X = n . 1998 . Revised: December 2.3.2 A correspondence f : X → Y from a domain X to a co-domain Y is a rule which assigns to each element of X a non-empty subset of Y .2.3.2.6 The function f : X → Y is bijective (or invertible) ⇐⇒ it is both injective and surjective. such that A ⊆ B (x)).5 The function f : X → Y is surjective (onto) ⇐⇒ f (X) = Y Definition 2. We need to formally define the interior of a set before stating the separating theorem: Definition 2.2 If Z is a subset of a metric space X. Definition 2. A ⊆ X is compact ⇐⇒ A is both closed and bounded (i. Definition 2.3. denoted int Z. (Note that many sets are neither open nor closed.3. Definition 2.3 Vector-valued Functions and Functions of Several Variables Definition 2. Definition 2.1 A function (or map) f : X → Y from a domain X to a codomain Y is a rule which assigns to each element of X a unique element of Y .3 The range of the function f : X → Y is the set f (X) = {f (x) ∈ Y : x ∈ X}. then the interior of Z.4 The function f : X → Y is injective (one-to-one) ⇐⇒ f (x) = f (x ) ⇒ x = x . VECTOR-VALUED FUNCTIONS AND FUNCTIONS OF SEVERAL 18 VARIABLES • A is closed ⇐⇒ X − A is open.3.3. is defined by z ∈ int Z ⇐⇒ B (z) ⊆ Z for some > 0. Definition 2.2. 2.e. ∃x.) • A neighbourhood of x ∈ X is an open set containing x.3.
Note that if f : X → Y and A ⊆ X and B ⊆ Y, then f(A) ≡ {f(x) : x ∈ A} ⊆ Y and f⁻¹(B) ≡ {x ∈ X : f(x) ∈ B} ⊆ X.

Definition 2.3.7 A vector-valued function is a function whose co-domain is a subset of a vector space. Such a function has some number of component functions, say N.

Definition 2.3.8 A function of several variables is a function whose domain is a subset of a vector space.

Definition 2.3.9 The function f : X → Y (X ⊆ ℝⁿ, Y ⊆ ℝ) approaches the limit y* as x → x* ⇐⇒ ∀ε > 0, ∃δ > 0 s.t. ‖x − x*‖ < δ ⟹ |f(x) − y*| < ε. This is usually denoted lim_{x→x*} f(x) = y*.

Definition 2.3.10 The function f : X → Y (X ⊆ ℝⁿ, Y ⊆ ℝ) is continuous at x* ⇐⇒ ∀ε > 0, ∃δ > 0 s.t. ‖x − x*‖ < δ ⟹ |f(x) − f(x*)| < ε. This definition just says that f is continuous provided that lim_{x→x*} f(x) = f(x*).

Definition 2.3.11 The function f : X → Y is continuous ⇐⇒ it is continuous at every point of its domain.

We will say that a vector-valued function is continuous if and only if each of its component functions is continuous.

The notion of continuity of a function described above is probably familiar from earlier courses. Its extension to the notion of continuity of a correspondence, while fundamental to consumer theory, general equilibrium theory and much of microeconomics, is probably not. In particular, we will meet it again in Chapter 3. ? discusses various alternative but equivalent definitions of continuity. The interested reader is referred to ? for further details.
Definition 2.3.12
1. The correspondence f : X → Y (X ⊆ ℝⁿ, Y ⊆ ℝᵐ) is upper hemi-continuous (u.h.c.) at x* ⇐⇒ for every open set N containing the set f(x*), ∃δ > 0 s.t. ‖x − x*‖ < δ ⟹ f(x) ⊆ N.
2. The correspondence f : X → Y is lower hemi-continuous (l.h.c.) at x* ⇐⇒ for every open set N intersecting the set f(x*), ∃δ > 0 s.t. ‖x − x*‖ < δ ⟹ f(x) intersects N.
3. The correspondence f : X → Y is continuous (at x*) ⇐⇒ it is both upper hemi-continuous and lower hemi-continuous (at x*).

(Upper hemi-continuity basically means that the graph of the correspondence is a closed and connected set. There are a couple of pictures from ? to illustrate these definitions.)

2.4 Partial and Total Derivatives

Definition 2.4.1 The (total) derivative or Jacobean of a real-valued function of N variables is the N-dimensional row vector of its partial derivatives.

Definition 2.4.2 The gradient of a real-valued function is the transpose of its Jacobean.

Definition 2.4.3 A function is said to be differentiable at x if all its partial derivatives exist at x.

Definition 2.4.4 The function f : X → Y is differentiable ⇐⇒ it is differentiable at every point of its domain.

Definition 2.4.5 The Hessian matrix of a real-valued function is the (usually symmetric) square matrix of its second order partial derivatives.
Note that if f : ℝⁿ → ℝ, then, strictly speaking, the second derivative (Hessian) of f is the derivative of the vector-valued function ℝⁿ → ℝⁿ : x ↦ (f′(x))ᵀ. Students always need to be warned about the differences in notation between the case of n = 1 and the case of n > 1. Statements and shorthands that make sense in univariate calculus must be modified for multivariate calculus.

2.5 The Chain Rule and Product Rule

Theorem 2.5.1 (The Chain Rule) Let g : ℝⁿ → ℝᵐ and f : ℝᵐ → ℝᵖ be continuously differentiable functions and let h : ℝⁿ → ℝᵖ be defined by h(x) ≡ f(g(x)). Then

h′(x) = f′(g(x)) g′(x),

where h′(x) is p × n, f′(g(x)) is p × m and g′(x) is m × n.

Proof This is easily shown using the Chain Rule for partial derivatives. Q.E.D.

One of the most common applications of the Chain Rule is the following: Let g : ℝⁿ → ℝᵐ and f : ℝᵐ⁺ⁿ → ℝᵖ be continuously differentiable functions, let x ∈ ℝⁿ, and define h : ℝⁿ → ℝᵖ by

h(x) ≡ f(g(x), x).

The univariate Chain Rule can then be used to calculate ∂hⁱ/∂x_j(x) in terms of the partial derivatives of f and g for i = 1, ..., p and j = 1, ..., n:

∂hⁱ/∂x_j(x) = Σ_{k=1}^{m} ∂fⁱ/∂x_k(g(x), x) ∂gᵏ/∂x_j(x) + Σ_{k=1}^{n} ∂fⁱ/∂x_k(g(x), x) ∂x_k/∂x_j(x),   (2.5.1)

where the first summation runs over the first m arguments of f and the second over the last n. But ∂x_k/∂x_j = 1 if k = j and 0 otherwise, which is known as the Kronecker Delta. Thus all but one of the terms in the second summation in (2.5.1) vanishes, giving:

∂hⁱ/∂x_j(x) = Σ_{k=1}^{m} ∂fⁱ/∂x_k(g(x), x) ∂gᵏ/∂x_j(x) + ∂fⁱ/∂x_j(g(x), x).
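The Kronecker-delta simplification of (2.5.1) can be checked against a finite-difference derivative. The sketch below (the particular functions f and g are arbitrary choices, not from the notes) uses m = 1, n = 2, p = 1.

```python
# Numeric check of the special case h(x) = f(g(x), x): the partials of f
# with respect to its direct x-arguments enter once, not summed over k.
# f and g are arbitrary illustrative choices (m = 1, n = 2, p = 1).

def g(x1, x2):            # g : R^2 -> R
    return x1 * x2

def f(y, x1, x2):         # f : R^(1+2) -> R
    return y ** 2 + 3.0 * x1 + x2

def h(x1, x2):
    return f(g(x1, x2), x1, x2)

def dh_dx1(x1, x2):
    y = g(x1, x2)
    df_dy = 2.0 * y          # partial of f w.r.t. its g-argument
    dg_dx1 = x2              # partial of g w.r.t. x1
    df_dx1 = 3.0             # direct partial of f w.r.t. x1
    return df_dy * dg_dx1 + df_dx1

def dh_dx1_numeric(x1, x2, eps=1e-6):
    # central finite difference for comparison
    return (h(x1 + eps, x2) - h(x1 - eps, x2)) / (2.0 * eps)
```

The analytic formula and the finite difference should agree to several decimal places at any point.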
Stacking these scalar equations in matrix form and factoring yields:

[∂hⁱ/∂x_j(x)] = [∂fⁱ/∂x_k(g(x), x)] [∂gᵏ/∂x_j(x)] + [∂fⁱ/∂x_{m+j}(g(x), x)],   (2.5.2)

where the four matrices are p × n, p × m, m × n and p × n respectively. Now, by partitioning the total derivative of f as

f′(·) = [D_g f(·)  D_x f(·)],   (2.5.3)

where f′(·) is p × (m + n), D_g f(·) is p × m and D_x f(·) is p × n, we can use (2.5.2) to write out the total derivative h′(x) as a product of partitioned matrices:

h′(x) = D_g f(g(x), x) g′(x) + D_x f(g(x), x).   (2.5.4)

Theorem 2.5.2 (Product Rule for Vector Calculus) The multivariate Product Rule comes in two versions:

1. Let f, g : ℝᵐ → ℝⁿ and define h : ℝᵐ → ℝ by h(x) ≡ (f(x))ᵀ g(x), where f(x) and g(x) are n × 1 and h(x) is 1 × 1. Then

h′(x) = (g(x))ᵀ f′(x) + (f(x))ᵀ g′(x),

where each term on the right is a 1 × n vector times an n × m matrix, so that h′(x) is 1 × m.

2. Let f : ℝᵐ → ℝ and g : ℝᵐ → ℝⁿ and define h : ℝᵐ → ℝⁿ by h(x) ≡ f(x) g(x), where f(x) is 1 × 1 and g(x) is n × 1. Then

h′(x) = g(x) f′(x) + f(x) g′(x),

where the terms on the right are an n × 1 vector times a 1 × m vector and a scalar times an n × m matrix respectively, so that h′(x) is n × m.

Proof This is easily shown using the Product Rule from univariate calculus to calculate the relevant partial derivatives and then stacking the results in matrix form. Q.E.D.
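Version 1 of the Product Rule can be illustrated numerically. In the sketch below (the maps f and g are arbitrary choices, not from the notes), the row vector g(x)ᵀf′(x) + f(x)ᵀg′(x) is compared with a finite-difference derivative of h.

```python
# Numeric check of Product Rule version 1:
# h(x) = f(x)^T g(x)  =>  h'(x) = g(x)^T f'(x) + f(x)^T g'(x).
# f and g are arbitrary maps R^2 -> R^2 chosen for illustration.

def f(x1, x2):
    return [x1 ** 2, x2]

def g(x1, x2):
    return [x2, x1 * x2]

def h(x1, x2):
    fv, gv = f(x1, x2), g(x1, x2)
    return sum(a * b for a, b in zip(fv, gv))

def h_prime(x1, x2):
    # Jacobeans: rows = component functions, columns = (x1, x2).
    fj = [[2.0 * x1, 0.0], [0.0, 1.0]]
    gj = [[0.0, 1.0], [x2, x1]]
    fv, gv = f(x1, x2), g(x1, x2)
    # g(x)^T f'(x) + f(x)^T g'(x), a 1 x 2 row vector.
    return [sum(gv[i] * fj[i][j] for i in range(2)) +
            sum(fv[i] * gj[i][j] for i in range(2)) for j in range(2)]

def h_prime_numeric(x1, x2, eps=1e-6):
    return [(h(x1 + eps, x2) - h(x1 - eps, x2)) / (2 * eps),
            (h(x1, x2 + eps) - h(x1, x2 - eps)) / (2 * eps)]
```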
2.6 The Implicit Function Theorem

Theorem 2.6.1 (Implicit Function Theorem) Let g : ℝⁿ → ℝᵐ, where m < n. Partition the n-dimensional vector x as (y, z), where y = (x_1, ..., x_m) is m-dimensional and z = (x_{m+1}, x_{m+2}, ..., x_n) is (n − m)-dimensional. Consider the system of m scalar equations in n variables, g(y, z) = 0_m. We aim to solve these equations for the first m variables, which will then be written as functions, y = h(z), of the last n − m variables. Suppose that g is continuously differentiable in a neighbourhood of x*, that g(x*) = 0_m, and that the m × m matrix

D_y g ≡ [∂gⁱ/∂x_j(x*)], i = 1, ..., m, j = 1, ..., m,

formed by the first m columns of the total derivative of g at x* is non-singular. Then ∃ neighbourhoods Y of y* and Z of z*, and a continuously differentiable function h : Z → Y, such that
1. y* = h(z*),
2. g(h(z), z) = 0_m ∀z ∈ Z, and
3. h′(z*) = −(D_y g)⁻¹ D_z g.

Proof The full proof of this theorem, like that of Brouwer's Fixed Point Theorem later, is beyond the scope of this course. However, part 3 follows easily from material in Section 2.5, using the Chain Rule. The aim is to derive an expression for the total derivative h′(z*) in terms of the partial derivatives of g. Partition the total derivative of g at x* as

g′(x*) = [D_y g  D_z g],   (2.6.1)

where g′(x*) is m × n, D_y g is m × m and D_z g is m × (n − m). We know from part 2 that f(z) ≡ g(h(z), z) = 0_m ∀z ∈ Z. Thus f′(z) ≡ 0_{m×(n−m)} ∀z ∈ Z. But we know from (2.5.4) that f′(z) = D_y g h′(z) + D_z g, in particular at z*.
Hence

D_y g h′(z*) + D_z g = 0_{m×(n−m)}

and, since the statement of the theorem requires that D_y g is invertible,

h′(z*) = −(D_y g)⁻¹ D_z g,

as required. Q.E.D.

To conclude this section, consider the following two examples:

1. Consider the system of linear equations g(x) ≡ Bx = 0, where B is an m × n matrix. We have g′(x) = B ∀x, so the implicit function theorem applies provided the equations are linearly independent.

2. Consider the equation g(x, y) ≡ x² + y² − 1 = 0. We have h(y) = √(1 − y²) or h(y) = −√(1 − y²), each of which describes a single-valued, differentiable function on (−1, 1). Note that g′(x, y) = (2x 2y). At (x, y) = (0, 1), ∂g/∂x = 0 and h(y) is undefined (for y > 1) or multi-valued (for y < 1) in any neighbourhood of y = 1.

2.7 Directional Derivatives

Definition 2.7.1 Let X be a vector space and x ≠ x′ ∈ X. Then
1. λx + (1 − λ)x′, for λ ∈ ℝ and particularly for λ ∈ [0, 1], is called a convex combination of x and x′.
2. L ≡ {λx + (1 − λ)x′ : λ ∈ ℝ} is the line from x′, where λ = 0, to x, where λ = 1.
3. The restriction of the function f : X → ℝ to the line L is the function f|_L : ℝ → ℝ : λ ↦ f(λx + (1 − λ)x′).
4. If f is a differentiable function, then the directional derivative of f at x′ in the direction from x′ to x is f|_L′(0).
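Definition 2.7.1 can be illustrated numerically. The sketch below (the function f and the points are arbitrary choices, not from the notes) differentiates the one-variable restriction f|_L at λ = 0 and compares the result with f′(x′)(x − x′), which the Chain Rule shows to be equal to it.

```python
# Directional derivative as the derivative of the restriction to a line:
# f|_L(l) = f(l*x + (1-l)*x'), and f|_L'(0) = f'(x')(x - x').
# f(x1, x2) = x1^2 + 3*x1*x2 is an arbitrary illustrative choice.

def f(x1, x2):
    return x1 ** 2 + 3.0 * x1 * x2

def grad_f(x1, x2):
    return (2.0 * x1 + 3.0 * x2, 3.0 * x1)

def restriction_derivative_at_0(xp, x, eps=1e-6):
    # numerical derivative of f|_L at lambda = 0
    def f_L(l):
        return f(l * x[0] + (1 - l) * xp[0], l * x[1] + (1 - l) * xp[1])
    return (f_L(eps) - f_L(-eps)) / (2 * eps)

def gradient_form(xp, x):
    # f'(x')(x - x'): row vector of partials times the direction
    g = grad_f(*xp)
    return g[0] * (x[0] - xp[0]) + g[1] * (x[1] - xp[1])
```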
• We will endeavour, wherever possible, to stick to the convention that x′ denotes the point at which the derivative is to be evaluated and x denotes the point in the direction of which it is measured. In other words, returning to first principles,

f|_L′(0) = lim_{λ→0} [f(x′ + λ(x − x′)) − f(x′)] / λ.   (2.7.1)

• Note that, using the Chain Rule,

f|_L′(λ) = f′(λx + (1 − λ)x′)(x − x′)   (2.7.2)

and hence the directional derivative is

f|_L′(0) = f′(x′)(x − x′).   (2.7.3)

• Sometimes it is neater to write x − x′ ≡ h.
• The ith partial derivative of f at x′ is the directional derivative of f at x′ in the direction from x′ to x′ + e_i, where e_i is the ith standard basis vector. In other words, partial derivatives are a special case of directional derivatives, or directional derivatives a generalisation of partial derivatives.
• Note also that, by the Chain Rule, it is easily shown that the second derivative of f|_L is f|_L″(λ) = hᵀ f″(x′ + λh) h and f|_L″(0) = hᵀ f″(x′) h.
• As an exercise, consider the interpretation of the directional derivatives at a point in terms of the rescaling of the parameterisation of the line L.

2.8 Taylor's Theorem: Deterministic Version

This should be fleshed out following ?.¹ Readers are presumed to be familiar with single variable versions of Taylor's Theorem. In particular recall both the second order exact and infinite versions. An interesting example is to approximate the discount factor using powers of the interest rate:

1/(1 + i) = 1 − i + i² − i³ + i⁴ − ...   (2.8.1)

¹There may be some lapses in this version.
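The expansion (2.8.1) is easily checked numerically. In the sketch below (the interest rate value is an arbitrary choice), truncating the alternating series leaves an error no larger than the first omitted term.

```python
# The geometric-series approximation of the discount factor (2.8.1):
# 1/(1+i) = 1 - i + i^2 - i^3 + ...  (converges for |i| < 1).

def discount_series(i, terms):
    # partial sum of the first `terms` powers of -i
    return sum((-i) ** k for k in range(terms))

i = 0.05                          # arbitrary small interest rate
exact = 1.0 / (1.0 + i)
approx2 = discount_series(i, 3)   # second order: 1 - i + i^2
```

For i = 5%, the second order approximation is already accurate to about one part in ten thousand.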
We will also use two multivariate versions of Taylor's theorem, which can be obtained by applying the univariate versions to the restriction to a line of a function of n variables. In particular recall both the second order exact and infinite versions.

Theorem 2.8.1 (Taylor's Theorem) Let f : X → ℝ be twice differentiable, X ⊆ ℝⁿ. Then for any x, x′ ∈ X, ∃λ ∈ (0, 1)² such that

f(x) = f(x′) + f′(x′)(x − x′) + ½ (x − x′)ᵀ f″(x′ + λ(x − x′))(x − x′).   (2.8.2)

Proof Let L be the line from x′ to x. Then the univariate version tells us that there exists λ ∈ (0, 1) such that

f|_L(1) = f|_L(0) + f|_L′(0) + ½ f|_L″(λ).   (2.8.3)

Making the appropriate substitutions gives the multivariate version in the theorem. Q.E.D.

The (infinite) Taylor series expansion does not necessarily converge at all, or to f(x). Functions for which it does are called analytic. ? is an example of a function which is not analytic.

2.9 The Fundamental Theorem of Calculus

This theorem sets out the precise rules for cancelling integration and differentiation operations.

Theorem 2.9.1 (Fundamental Theorem of Calculus) The integration and differentiation operators are inverses in the following senses:

1. d/db ∫_a^b f(x) dx = f(b)
2. ∫_a^b f′(x) dx = f(b) − f(a)

This can be illustrated graphically using a picture illustrating the use of integration to compute the area under a curve.

²Should this not be the closed interval?
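Both parts of Theorem 2.9.1 can be illustrated numerically. The sketch below (not from the notes; f(x) = x² and the interval are arbitrary choices) approximates the integral by a midpoint Riemann sum and then differences it with respect to the upper limit.

```python
# Numeric illustration of the Fundamental Theorem of Calculus with
# f(x) = x^2 on [0, 2]: part 2 against the antiderivative x^3/3, and
# part 1 by differencing the computed integral with respect to b.

def f(x):
    return x ** 2

def integral(a, b, n=20000):
    # simple midpoint Riemann sum
    w = (b - a) / n
    return sum(f(a + (k + 0.5) * w) for k in range(n)) * w

a, b = 0.0, 2.0
part2_lhs = integral(a, b)      # should be F(b) - F(a) = 8/3
eps = 1e-3
part1_lhs = (integral(a, b + eps) - integral(a, b - eps)) / (2 * eps)  # ~ f(b)
```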
Chapter 3

CONVEXITY AND OPTIMISATION

3.1 Introduction

[To be written.]

3.2 Convexity and Concavity

3.2.1 Definitions

Definition 3.2.1 A subset X of a vector space is a convex set ⇐⇒ ∀x, x′ ∈ X, λ ∈ [0, 1], λx + (1 − λ)x′ ∈ X.

Theorem 3.2.1 A sum of convex sets, such as X + Y ≡ {x + y : x ∈ X, y ∈ Y}, is also a convex set.

Proof The proof of this result is left as an exercise. Q.E.D.

Definition 3.2.2 Let f : X → Y where X is a convex subset of a real vector space and Y ⊆ ℝ. Then
1. f is a convex function ⇐⇒ ∀x ≠ x′ ∈ X, λ ∈ (0, 1),

f(λx + (1 − λ)x′) ≤ λf(x) + (1 − λ)f(x′).   (3.2.1)

(This just says that a function of several variables is convex if its restriction to every line segment in its domain is a convex function of one variable in the familiar sense.)

2. f is a concave function ⇐⇒ ∀x ≠ x′ ∈ X, λ ∈ (0, 1),

f(λx + (1 − λ)x′) ≥ λf(x) + (1 − λ)f(x′).   (3.2.2)

3. f is affine ⇐⇒ f is both convex and concave.

Note that f is convex ⇐⇒ −f is concave. A linear function is an affine function which also satisfies f(0) = 0.

Note that the conditions (3.2.1) and (3.2.2) could also have been required to hold (equivalently) ∀x, x′ ∈ X, λ ∈ [0, 1], since they are satisfied as equalities ∀f when x = x′, when λ = 0 and when λ = 1.

Definition 3.2.3 Again let f : X → Y where X is a convex subset of a real vector space and Y ⊆ ℝ. Then

1. f is a strictly convex function ⇐⇒ ∀x ≠ x′ ∈ X, λ ∈ (0, 1),

f(λx + (1 − λ)x′) < λf(x) + (1 − λ)f(x′).
2. f is a strictly concave function ⇐⇒ ∀x ≠ x′ ∈ X, λ ∈ (0, 1),

f(λx + (1 − λ)x′) > λf(x) + (1 − λ)f(x′).

Note that there is no longer any flexibility as regards allowing x = x′ or λ = 0 or λ = 1 in these definitions.

Definition 3.2.4 Consider the real-valued function f : X → Y where Y ⊆ ℝ.
1. The upper contour sets of f are the sets {x ∈ X : f(x) ≥ α} (α ∈ ℝ).
2. The level sets or indifference curves of f are the sets {x ∈ X : f(x) = α} (α ∈ ℝ).
3. The lower contour sets of f are the sets {x ∈ X : f(x) ≤ α} (α ∈ ℝ).

Note that in Definition 3.2.4, X does not have to be a (real) vector space.

3.2.2 Properties of concave functions

Note the connection between convexity of a function of several variables and convexity of the restrictions of that function to any line in its domain: a function on a multidimensional vector space X is convex if and only if the restriction of the function to the line L is convex for every line L in X; the former is convex if and only if all the latter are. Similarly for concave, strictly convex, and strictly concave functions.

In general, since every convex function is the mirror image of a concave function, and vice versa, every result derived for one has an obvious corollary for the other. We will consider only concave functions, and leave the derivation of the corollaries for convex functions as exercises.

Let f : X → ℝ and g : X → ℝ be concave functions. Then
1. If a, b > 0, then af + bg is concave. If a < 0, then af is convex.
2. min{f, g} is concave.
The proofs of the above properties are left as exercises.
Theorem 3.2.2 The upper contour sets {x ∈ X : f(x) ≥ α} of a concave function are convex.

Proof This proof is probably in a problem set somewhere. Q.E.D.

Consider as an aside the two-good consumer problem. Note in particular the implications of Theorem 3.2.2 for the shape of the indifference curves corresponding to a concave utility function. Concave u is a sufficient but not a necessary condition for convex upper contour sets.

3.2.3 Convexity and differentiability

In this section, we show that there are a total of three ways of characterising concave functions: namely the definition above, a theorem in terms of the first derivative (Theorem 3.2.3) and a theorem in terms of the second derivative or Hessian (Theorem 3.2.4).

Theorem 3.2.3 [Convexity criterion for differentiable functions.] Let f : X → ℝ be differentiable, X ⊆ ℝⁿ an open, convex set. Then:

f is (strictly) concave ⇐⇒ ∀x ≠ x′ ∈ X,

f(x) ≤ (<) f(x′) + f′(x′)(x − x′).   (3.2.3)

(See Section 2.7 for the definition of a directional derivative.) Theorem 3.2.3 says that a function is concave if and only if the tangent hyperplane at any point lies completely above the graph of the function, or that a function is concave if and only if, for any two distinct points in the domain, the directional derivative at one point in the direction of the other exceeds the jump in the value of the function between the two points.

Proof (See ?.)

1. We first prove that the weak version of inequality (3.2.3) is necessary for concavity, and then that the strict version is necessary for strict concavity. Choose x, x′ ∈ X.
(a) Suppose that f is concave. Then, for λ ∈ (0, 1),

f(x′ + λ(x − x′)) ≥ f(x′) + λ(f(x) − f(x′)).   (3.2.4)

Subtract f(x′) from both sides and divide by λ:

[f(x′ + λ(x − x′)) − f(x′)] / λ ≥ f(x) − f(x′).   (3.2.5)

Now consider the limits of both sides of this inequality as λ → 0. The LHS tends to f′(x′)(x − x′) by definition of a directional derivative (see (2.7.2) and (2.7.3) above). The RHS is independent of λ and does not change. Hence f′(x′)(x − x′) ≥ f(x) − f(x′), as required. Note that inequality (3.2.5) remains a weak inequality even if f is a strictly concave function.

(b) Now suppose that f is strictly concave and x ≠ x′. Since f is also concave, we can apply the result that we have just proved to x and x″ ≡ ½(x + x′) to show that

f′(x′)(x″ − x′) ≥ f(x″) − f(x′).   (3.2.6)

Using the definition of strict concavity (or the strict version of inequality (3.2.4)) gives:

f(x″) − f(x′) > ½ (f(x) − f(x′)).   (3.2.7)

Combining these two inequalities and multiplying across by 2 gives the desired result.

2. Conversely, suppose that the derivative satisfies inequality (3.2.3). We will deal with concavity; to prove the theorem for strict concavity, just replace all the weak inequalities (≥) with strict inequalities (>), as indicated. Set x″ = λx + (1 − λ)x′. Then, applying the hypothesis of the proof in turn to x and x″ and to x′ and x″ yields:

f(x) ≤ f(x″) + f′(x″)(x − x″)   (3.2.8)

and

f(x′) ≤ f(x″) + f′(x″)(x′ − x″).   (3.2.9)

A convex combination of (3.2.8) and (3.2.9) gives:

λf(x) + (1 − λ)f(x′)
≤ f(x″) + f′(x″)(λ(x − x″) + (1 − λ)(x′ − x″)) = f(x″),   (3.2.10)

since λ(x − x″) + (1 − λ)(x′ − x″) = λx + (1 − λ)x′ − x″ = 0_n. (3.2.10) is just the definition of concavity as required. Q.E.D.

Theorem 3.2.4 [Concavity criterion for twice differentiable functions.] Let f : X → ℝ be twice continuously differentiable (C²), X ⊆ ℝⁿ open and convex. Then:

1. f is concave ⇐⇒ ∀x ∈ X, the Hessian matrix f″(x) is negative semidefinite.
2. f″(x) negative definite ∀x ∈ X ⇒ f is strictly concave.

The fact that the condition in the second part of this theorem is sufficient but not necessary for strict concavity inspires the search for a counter-example, in other words for a function which is strictly concave but has a second derivative which is only negative semi-definite and not strictly negative definite. The standard counterexample is given by f(x) = x^{2n} for any integer n > 1 (strictly convex, with second derivative vanishing at x = 0; its mirror image −x^{2n} gives the strictly concave case).

Proof We first use Taylor's theorem to demonstrate the sufficiency of the condition on the Hessian matrices. Then we use the Fundamental Theorem of Calculus (Theorem 2.9.1) and a proof by contrapositive to demonstrate the necessity of this condition in the concave case for n = 1. Then we use this result and the Chain Rule to demonstrate necessity for n > 1. Finally, we show how these arguments can be modified to give an alternative proof of sufficiency for functions of one variable.

1. Suppose first that f″(x) is negative semi-definite ∀x ∈ X. Recall Taylor's Theorem above (Theorem 2.8.1): for any x, x′ ∈ X, ∃s ∈ (0, 1) such that

f(x) = f(x′) + f′(x′)(x − x′) + ½ (x − x′)ᵀ f″(x′ + s(x − x′))(x − x′).

Since the quadratic form term is non-positive, it follows that

f(x) ≤ f(x′) + f′(x′)(x − x′).   (3.2.11)

Theorem 3.2.3 shows that f is then concave. A similar proof will work for negative definite Hessian and strictly concave function.
2. To demonstrate necessity, we will prove the contrapositive. As in the proof of Theorem 3.2.3, we must consider separately first functions of a single variable and then functions of several variables.

(a) First consider a function of a single variable. Instead of trying to show that concavity of f implies a negative semi-definite (i.e. non-positive) second derivative ∀x ∈ X, we will show that if there is any point x* ∈ X where the second derivative is positive, then f is locally strictly convex around x* and so cannot be concave. So suppose f″(x*) > 0. Then, since f is twice continuously differentiable, f″(x) > 0 for all x in some neighbourhood of x*, say (a, b). Then f′ is an increasing function on (a, b). Consider two points in (a, b), x < x′, and let x″ = λx + (1 − λ)x′ ∈ (a, b), where λ ∈ (0, 1). Using the fundamental theorem of calculus and the fact that f′ is increasing,

f(x″) − f(x) = ∫_x^{x″} f′(t) dt < f′(x″)(x″ − x)

and

f(x′) − f(x″) = ∫_{x″}^{x′} f′(t) dt > f′(x″)(x′ − x″).

Rearranging each inequality gives:

f(x) > f(x″) + f′(x″)(x − x″)

and

f(x′) > f(x″) + f′(x″)(x′ − x″),

which are just the single variable versions of (3.2.8) and (3.2.9). Then, as in the proof of Theorem 3.2.3, a convex combination of these inequalities reduces to f(x″) < λf(x) + (1 − λ)f(x′), and hence f is locally strictly convex on (a, b). Thus f cannot be concave, so concavity implies that f″(x) ≤ 0 ∀x ∈ X, i.e. that f″(x) is negative semi-definite.

(b) Now consider a function of several variables. Suppose that f is concave and fix x ∈ X and h ∈ ℝⁿ. Then g(λ) ≡ f(x + λh) also defines a concave function (of one variable), namely the restriction of f to the line segment from x in the direction from x to x + h. (We use an x, x + h argument instead of an x, x′ argument to tie in with the definition of a negative definite matrix.) Then, using the result we have just proven for functions of one variable, g has non-positive second derivative. But we know from p. 25 above that g″(0) = hᵀ f″(x) h. Thus hᵀ f″(x) h ≤ 0 for every h, so f″(x) is negative semi-definite.
In fact, for functions of one variable, the above arguments can give an alternative proof of sufficiency which does not require Taylor's Theorem. We have something like the following:

f″(x) < 0 on (a, b) ⇒ f locally strictly concave on (a, b)
f″(x) ≤ 0 on (a, b) ⇒ f locally concave on (a, b)
f″(x) > 0 on (a, b) ⇒ f locally strictly convex on (a, b)
f″(x) ≥ 0 on (a, b) ⇒ f locally convex on (a, b)

The same results which we have demonstrated for the interval (a, b) also hold for the entire domain X (which of course is also just an open interval, as it is an open convex subset of ℝ). Q.E.D.

Note finally the implied hierarchy among different classes of functions:

negative definite Hessian ⊂ strictly concave ⊂ concave = negative semidefinite Hessian.

As an exercise, draw a Venn diagram to illustrate these relationships (and add other classes of functions to it later on as they are introduced).

Theorem 3.2.5 A non-decreasing twice differentiable concave transformation of a twice differentiable concave function (of several variables) is also concave.

Proof The details are left as an exercise. Q.E.D.

3.2.4 Variations on the convexity theme

The second order condition above is reminiscent of that for optimisation and suggests that concave or convex functions will prove useful in developing theories of optimising behaviour. In fact, there is a wider class of useful functions, leading us to now introduce further definitions.

Let X ⊆ ℝⁿ be a convex set and f : X → ℝ a real-valued function defined on X. In order (for reasons which shall become clear in due course) to maintain consistency with earlier notation, we adopt the convention when labelling vectors x and x′ that f(x′) ≤ f(x).²

²There may again be some lapses in this version.
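The counter-example behind the hierarchy above — a strictly concave function whose second derivative is only negative semi-definite — can be checked numerically. The sketch below (not from the notes) uses f(x) = −x⁴, the concave mirror image of the x^{2n} example with n = 2, and tests the strict chord inequality on arbitrary sample points.

```python
# f(x) = -x^4 is strictly concave, yet f''(0) = 0, so its second
# derivative is only negative SEMI-definite at the origin.  Sample
# points and weights below are arbitrary choices.

def f(x):
    return -x ** 4

def f2(x):
    # second derivative: f''(x) = -12 x^2
    return -12.0 * x ** 2

def strictly_concave_on(samples, lambdas):
    # strict chord inequality: f(l*x + (1-l)*x') > l*f(x) + (1-l)*f(x')
    for x in samples:
        for xp in samples:
            if x == xp:
                continue
            for l in lambdas:
                mid = f(l * x + (1 - l) * xp)
                chord = l * f(x) + (1 - l) * f(xp)
                if not mid > chord:
                    return False
    return True
```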
Definition 3.2.5 Let C(α) = {x ∈ X : f(x) ≥ α}. Then f : X → ℝ is quasiconcave ⇐⇒ ∀α ∈ ℝ, C(α) is a convex set.

Theorem 3.2.6 The following statements are equivalent to the definition of quasiconcavity:

1. ∀x, x′ ∈ X, λ ∈ [0, 1], f(λx + (1 − λ)x′) ≥ min{f(x), f(x′)}.
2. ∀x ≠ x′ ∈ X such that f(x′) ≤ f(x) and ∀λ ∈ (0, 1), f(λx + (1 − λ)x′) ≥ f(x′).
3. (If f is differentiable.) ∀x, x′ ∈ X such that f(x) − f(x′) ≥ 0, f′(x′)(x − x′) ≥ 0.

(Statement 1 is true for x = x′ or λ = 0 or λ = 1 even if f is not quasiconcave.)

Proof³

1. We begin by showing the equivalence between the definition and condition 1.

(a) First suppose that the upper contour sets are convex. Let x and x′ ∈ X and let α = min{f(x), f(x′)}. Then x and x′ are in C(α). By the hypothesis of convexity, λx + (1 − λ)x′ ∈ C(α). In other words, f(λx + (1 − λ)x′) ≥ α = min{f(x), f(x′)}.

(b) Now suppose that condition 1 holds. To show that C(α) is a convex set, we just take x and x′ ∈ C(α) and investigate whether λx + (1 − λ)x′ ∈ C(α). But, by our previous result, for any λ ∈ [0, 1],

f(λx + (1 − λ)x′) ≥ min{f(x), f(x′)} ≥ α,

where the final inequality holds because x and x′ are in C(α). The desired result now follows.

2. It is almost trivial to show the equivalence of conditions 1 and 2. In the case where f(x) ≥ f(x′), or f(x′) = min{f(x), f(x′)}, there is nothing to prove. Otherwise, we can just reverse the labels x and x′.

³This proof may need to be rearranged to reflect the choice of a different equivalent characterisation to act as definition.
(b) The proof of the converse is even more straightforward and is left as an exercise.

3. Proving that condition 3 is equivalent to quasiconcavity for a differentiable function (by proving that it is equivalent to conditions 1 and 2) is much the trickiest part of the proof. Proving that condition 3 is necessary for quasiconcavity is the easier part of the proof (and appears as an exercise on one of the problem sets).

(a) Begin by supposing that f satisfies conditions 1 and 2. Suppose, without loss of generality, that f(x′) ≤ f(x). We want to show that the directional derivative f|_L′(0) = f′(x′)(x − x′) ≥ 0. Consider

f|_L(λ) = f(λx + (1 − λ)x′) = f(x′ + λ(x − x′)).   (3.2.12)

Pick any λ ∈ (0, 1). By quasiconcavity,

f(λx + (1 − λ)x′) ≥ f(x′).   (3.2.13)

Now,

f|_L′(0) = lim_{λ→0} [f(x′ + λ(x − x′)) − f(x′)] / λ.   (3.2.14)

Since, by (3.2.13), the right hand side difference quotient is non-negative for small positive values of λ (λ < 1), the derivative must be non-negative as required.   (3.2.15)

(b) Now the difficult part: to prove that condition 3 is a sufficient condition for quasiconcavity. Suppose the derivative satisfies the hypothesis of the theorem, but f is not quasiconcave. In other words, ∃x, x′, λ* such that

f(λ*x + (1 − λ*)x′) < min{f(x), f(x′)},   (3.2.16)

where without loss of generality f(x′) ≤ f(x). Applying condition 3 at the point λ*x + (1 − λ*)x′, where the value of f is lower than at both x and x′, yields

f′(λ*x + (1 − λ*)x′)(x − (λ*x + (1 − λ*)x′)) ≥ 0   (3.2.17)

and

f′(λ*x + (1 − λ*)x′)(x′ − (λ*x + (1 − λ*)x′)) ≥ 0.   (3.2.18)
Since x − (λ*x + (1 − λ*)x′) = (1 − λ*)(x − x′) and x′ − (λ*x + (1 − λ*)x′) = −λ*(x − x′), (3.2.17) and (3.2.18) together imply that f|_L′(λ*) = 0. (It might help to think about this by considering n = 1 and separating out the cases x > x′ and x < x′.) In particular, we already know that f|_L(λ*) < f|_L(0) ≤ f|_L(1). We can apply the same argument to any point where the value of f|_L is less than f|_L(0) to show that the corresponding part of the graph of f|_L has zero slope, or is flat. But this is incompatible either with continuity of f|_L or with the existence of a point where f|_L(λ*) is strictly less than f|_L(0). So we have a contradiction as required. Q.E.D.

In words, part 3 of Theorem 3.2.6 says that whenever a differentiable quasiconcave function has a higher value at x than at x′, then the directional derivative of f at x′ in the direction of x is non-negative.

Theorem 3.2.7 Let f : X → ℝ be quasiconcave and g : ℝ → ℝ be increasing. Then g ◦ f is a quasiconcave function.

Proof This follows easily from the previous result. The details are left as an exercise. Q.E.D.

Note the implications of Theorem 3.2.7 for utility theory: if preferences can be represented by a quasiconcave utility function, then they can be represented by a quasiconcave utility function only. This point will be considered again in a later section of the course.

Definition 3.2.6 f : X → ℝ is strictly quasiconcave ⇐⇒ ∀x ≠ x′ ∈ X such that f(x) ≥ f(x′) and ∀λ ∈ (0, 1),

f(λx + (1 − λ)x′) > f(x′).   (3.2.19)

Definition 3.2.7 f is (strictly) quasiconvex ⇐⇒ −f is (strictly) quasiconcave.⁴

⁴EC3080 ended here for Hilary Term 1998.
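Condition 1 of Theorem 3.2.6 can be illustrated numerically for a familiar quasiconcave example. The sketch below (not from the notes; the Cobb-Douglas exponents, sample points and weights are arbitrary choices) checks the min-inequality on a grid of points in the positive quadrant.

```python
# Condition 1 of Theorem 3.2.6 checked numerically for the Cobb-Douglas
# function f(x, y) = x^0.5 * y^0.5 on the positive quadrant.

def f(x, y):
    return x ** 0.5 * y ** 0.5

def condition_1_holds(points, lambdas, tol=1e-12):
    for (x1, y1) in points:
        for (x2, y2) in points:
            for l in lambdas:
                mid = f(l * x1 + (1 - l) * x2, l * y1 + (1 - l) * y2)
                if mid < min(f(x1, y1), f(x2, y2)) - tol:
                    return False
    return True

pts = [(0.5, 2.0), (1.0, 1.0), (2.0, 0.5), (3.0, 0.25), (4.0, 4.0)]
```

Since this Cobb-Douglas function is in fact concave, the inequality holds with room to spare; a small tolerance absorbs floating-point rounding.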
Definition 3.2.8 f is pseudoconcave ⇐⇒ f is differentiable and quasiconcave and

f(x) − f(x′) > 0 ⇒ f′(x′)(x − x′) > 0.

Note that the last definition modifies slightly the condition in Theorem 3.2.6 which is equivalent to quasiconcavity for a differentiable function. Pseudoconcavity will crop up in the second order condition for equality constrained optimisation.

We conclude this section by looking at a couple of the functions of several variables which will crop up repeatedly in applications in economics later on.

First, consider the interesting case of the affine function

f : ℝⁿ → ℝ : x ↦ M − pᵀx,

where M ∈ ℝ and p ∈ ℝⁿ. This function is both concave and convex, but not strictly so in either case, so it is neither strictly concave nor strictly convex. Furthermore,

f(λx + (1 − λ)x′) = λf(x) + (1 − λ)f(x′) ≥ min{f(x), f(x′)}   (3.2.20)

and

(−f)(λx + (1 − λ)x′) = λ(−f)(x) + (1 − λ)(−f)(x′) ≥ min{(−f)(x), (−f)(x′)},   (3.2.21)

so f is both quasiconcave and quasiconvex.

Finally, here are two graphs of Cobb-Douglas functions:⁵

⁵To see them you will have to have copied two .WMF files retaining their uppercase filenames and put them in the appropriate directory, C:/TCD/teaching/WWW/MA381/NOTES/!
[Figure: graph of z = x^0.5 y^0.5]

[Figure: graph of z = x^−0.5 y^1.5]

3.3 Unconstrained Optimisation

Definition 3.3.1 Let X ⊆ ℝⁿ, f : X → ℝ. Then f has a (strict) global maximum at x* ⇐⇒ ∀x ∈ X, x ≠ x*, f(x) ≤ (<) f(x*). Also f has a (strict) local maximum at x* ⇐⇒ ∃ε > 0 such that ∀x ∈ B_ε(x*), x ≠ x*, f(x) ≤ (<) f(x*). Similarly for minima.

Theorem 3.3.1 A continuous real-valued function on a compact subset of ℝⁿ attains a global maximum and a global minimum.

Proof Not given here. See ?. Q.E.D.

While this is a neat result for functions on compact domains, results in calculus are generally for functions on open domains, since the limit of the first difference
2).4. 2. assume that the function has a local maximum at x∗ . we deal with the unconstrained optimisation problem max f (x) (3. The results are generally presented for maximisation problems.3. and 3. whenever h < . for 0 < h < . The remainder of this section and the next two sections are each centred around three related theorems: 1.1 and 3.2 and 3.3. in this section we switch to the letter α for the former usage. It follows that.40 3.3 and 3.2 Necessary (first order) condition for unconstrained maxima and minima.3. 3. UNCONSTRAINED OPTIMISATION of the function at x.2.5.5. However. f (x∗ + hei ) − f (x∗ ) ≤ 0.1) x∈X where X ⊆ and f : X → is a real-valued function of several variables.) (It is conventional to use the letter λ both to parameterise convex combinations and as a Lagrange multiplier. called the objective function of Problem (3.) Theorem 3.3. Then f (x∗ ) = 0. say. makes no sense if the function and the first difference are not defined in some open neighbourhood of x (some B (x)).5. Then ∃ > 0 such that. Proof Without loss of generality.4. a theorem giving sufficient or second order conditions under which a solution to the first order conditions satisfies the original optimisation problem (Theorems 3. a theorem giving conditions under which a known solution to an optimisation problem is the unique solution (Theorems 3.4. 3. Let X be open and f differentiable with a local maximum or minimum at x∗ ∈ X.1).3. or f has a stationary point at x∗ . To avoid confusion. any minimisation problem is easily turned into a maximisation problem by reversing the sign of the function to be minimised and maximising the function thus obtained.3.3.4. a theorem giving necessary or first order conditions which must be satisfied by the solution to an optimisation problem (Theorems 3. 3.3). f (x∗ + h) − f (x∗ ) ≤ 0. 1998 n .1. Throughout the present section. h Revised: December 2.3.
(where e_i denotes the ith standard basis vector) and hence that

    ∂f/∂x_i (x*) = lim_{h→0} (f(x* + h e_i) − f(x*))/h ≤ 0.    (3.3.2)

Similarly, for 0 > h > −ε, (f(x* + h e_i) − f(x*))/h ≥ 0, and hence

    ∂f/∂x_i (x*) = lim_{h→0} (f(x* + h e_i) − f(x*))/h ≥ 0.    (3.3.3)

Combining (3.3.2) and (3.3.3) yields the desired result. Q.E.D.

Note that Theorem 3.3.2 applies only to functions whose domain X is open. The first order conditions are only useful for identifying optima in the interior of the domain of the objective function. Other methods must be used to check for possible corner solutions or boundary solutions to optimisation problems where the objective function is defined on a domain that is not open.

Theorem 3.3.3 (Sufficient (second order) condition for unconstrained maxima and minima.) Let X ⊆ ℝ^n be open and let f : X → ℝ be a twice continuously differentiable function with f'(x*) = 0 and f''(x*) negative definite. Then f has a strict local maximum at x*. Similarly for positive definite Hessians and local minima.

Proof Consider the second order Taylor expansion used previously in the proof of Theorem 3.2.4: for any x ∈ X, ∃s ∈ (0, 1) such that

    f(x) = f(x*) + f'(x*)(x − x*) + ½ (x − x*)ᵀ f''(x* + s(x − x*))(x − x*),

or, since the first derivative vanishes at x*,

    f(x) = f(x*) + ½ (x − x*)ᵀ f''(x* + s(x − x*))(x − x*).

Since f'' is continuous, f''(x* + s(x − x*)) will also be negative definite for x in some open neighbourhood of x*. Hence, for x in this neighbourhood, f(x) < f(x*) and f has a strict local maximum at x*.
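The sufficient condition above is easy to check numerically. The sketch below is a hypothetical example, not from the text: it verifies that f(x, y) = −(x − 1)² − 2(y + 0.5)² has a stationary point at (1, −0.5) with a negative definite Hessian, so Theorem 3.3.3 guarantees a strict local maximum there.

```python
# Numerical check of the second order sufficient condition (Theorem 3.3.3)
# for an illustrative objective f(x, y) = -(x - 1)^2 - 2(y + 0.5)^2,
# which has a stationary point at x* = (1, -0.5).

def f(x, y):
    return -(x - 1) ** 2 - 2 * (y + 0.5) ** 2

def gradient(f, x, y, h=1e-6):
    # Central finite differences for the first derivative f'(x*).
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

def hessian(f, x, y, h=1e-4):
    # Second order central differences for the Hessian f''(x*).
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return [[fxx, fxy], [fxy, fyy]]

gx, gy = gradient(f, 1.0, -0.5)
H = hessian(f, 1.0, -0.5)
# Negative definiteness of a 2x2 symmetric matrix: H[0][0] < 0 and det H > 0.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
print(abs(gx) < 1e-6 and abs(gy) < 1e-6)  # stationary point
print(H[0][0] < 0 and det > 0)            # Hessian negative definite
```

Both checks print True: the gradient vanishes and the Hessian is negative definite, as the theorem requires.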
Q.E.D.

Theorem 3.3.4 (Uniqueness conditions for unconstrained maximisation.) If

1. x* solves Problem (3.3.1), and
2. f is strictly quasiconcave (presupposing that X is a convex set),

then x* is the unique (global) maximum.

Proof Suppose not, in other words that ∃x ≠ x* such that f(x) = f(x*). Then, for any α ∈ (0, 1),

    f(αx + (1 − α)x*) > min {f(x), f(x*)} = f(x*),

so f does not have a maximum at either x or x*. This is a contradiction, so the maximum must be unique. Q.E.D.

Theorem 3.3.5 Tempting, but not quite true. If the Hessian matrix is positive/negative definite everywhere, then the argument in the proof of Theorem 3.3.3 can be applied for x ∈ X and not just for x ∈ B_ε(x*). In other words, corollaries of Theorem 3.3.3 are:

• Every stationary point of a twice continuously differentiable strictly concave function is a strict global maximum (and so there can be at most one stationary point).

• Every stationary point of a twice continuously differentiable strictly convex function is a strict global minimum.

The weak form of this result does not hold: semi-definiteness of the Hessian matrix at x* is not sufficient to guarantee that f has any sort of maximum at x*. If there are points at which the Hessian is merely semi-definite, then the proof breaks down. For example, if f(x) = x³, then the Hessian is negative semidefinite at x = 0 but the function does not have a local maximum there (rather, it has a point of inflexion). Note also that many strictly concave and strictly convex functions will have no stationary points, for example f : ℝ → ℝ : x ↦ eˣ.
3.4 Equality Constrained Optimisation: The Lagrange Multiplier Theorems

Throughout this section, we deal with the equality constrained optimisation problem

    max_{x∈X} f(x) s.t. g(x) = 0_m    (3.4.1)

where X ⊆ ℝ^n, f : X → ℝ is a real-valued function of several variables, called the objective function of Problem (3.4.1), and g : X → ℝ^m is a vector-valued function of several variables, called the constraint function of Problem (3.4.1). In other words, there are m scalar constraints represented by a single vector constraint:⁶

    (g¹(x), ..., g^m(x)) = (0, ..., 0).

We will assume where appropriate that the objective function f and the m constraint functions g¹, ..., g^m (g^j : X → ℝ, j = 1, ..., m) are all once or twice continuously differentiable.

We will introduce and motivate the Lagrange multiplier method which applies to such constrained optimisation problems with equality constraints. If x* is a solution to Problem (3.4.1), then there exist Lagrange multipliers, λ ≡ (λ₁, ..., λ_m), such that

    f'(x*) + λᵀ g'(x*) = 0_n.

Thus, to find the constrained optimum, we proceed as if optimising the Lagrangean:

    L(x, λ) ≡ f(x) + λᵀ g(x).

The entire discussion here is again presented in terms of maximisation, but can equally be presented in terms of minimisation by reversing the sign of the objective function. Note that the signs of the constraint function(s) can be reversed without altering the underlying problem. We will see, however, that this also reverses the signs of the corresponding Lagrange multipliers. The significance of this effect will be seen from the formal results, which are presented here in terms of the usual three theorems. Before moving on to those formal results, we briefly review the methodology for solving constrained optimisation problems which should be familiar from introductory and intermediate economic analysis courses.

⁶As usual, a row of numbers separated by commas is used as shorthand for a column vector.
The Lagrange multiplier method involves the following four steps:

1. Introduce the m Lagrange multipliers, λ ≡ (λ₁, ..., λ_m).

2. Define the Lagrangean L : X × ℝ^m → ℝ by L(x, λ) ≡ f(x) + λᵀ g(x).

3. Find the stationary points of the Lagrangean, i.e. set L'(x, λ) = 0. Since the Lagrangean is a function of n + m variables, this gives n + m first order conditions. The first n are f'(x) + λᵀ g'(x) = 0, or

    ∂f/∂x_i (x) + Σ_{j=1}^m λ_j ∂g^j/∂x_i (x) = 0,  i = 1, ..., n.

The last m are just the original constraints: g(x) = 0, or g^j(x) = 0, j = 1, ..., m.

4. Finally, the second order conditions must be checked.

Roughly speaking: 1. L = f whenever g = 0, and 2. g = 0 where L is optimised; this is why the constrained optimum of f corresponds to the optimum of L.

As an example, consider maximisation of a utility function representing Cobb-Douglas preferences subject to a budget constraint. Consider a picture with n = 2 and m = 1. The first n first order or Lagrangean conditions say that the total derivative (or gradient) of f at x is a linear combination of the total derivatives (or gradients) of the constraint functions at x. Since the directional derivative along a tangent to a level set or indifference curve is zero at the point of tangency
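The four steps above can be illustrated on the Cobb-Douglas example just mentioned. The sketch below uses purely illustrative parameter values and the well-known closed-form Cobb-Douglas demands x* = am/p₁, y* = (1 − a)m/p₂; it checks numerically that all n + m stationarity conditions of the Lagrangean hold at that point.

```python
# Illustration of the Lagrange method for the Cobb-Douglas example
# u(x, y) = x^a * y^(1-a) subject to the budget constraint p1*x + p2*y = m.
# Parameter values are purely illustrative.
a, p1, p2, m = 0.3, 2.0, 5.0, 100.0

# Closed-form stationary point of the Lagrangean (demands and multiplier):
x_star = a * m / p1            # = 15
y_star = (1 - a) * m / p2      # = 14
lam = a * x_star ** (a - 1) * y_star ** (1 - a) / p1  # marginal utility of income

def L(x, y, l):
    # Lagrangean L(x, y, lambda) = u(x, y) + lambda * g(x, y),
    # writing the constraint as g(x, y) = m - p1*x - p2*y = 0.
    return x ** a * y ** (1 - a) + l * (m - p1 * x - p2 * y)

# Step 3: check that all three first order conditions hold (numerically).
h = 1e-6
dLdx = (L(x_star + h, y_star, lam) - L(x_star - h, y_star, lam)) / (2 * h)
dLdy = (L(x_star, y_star + h, lam) - L(x_star, y_star - h, lam)) / (2 * h)
budget = m - p1 * x_star - p2 * y_star
print(abs(dLdx) < 1e-6, abs(dLdy) < 1e-6, abs(budget) < 1e-9)
```

All three conditions hold, confirming that the Lagrangean is stationary at the closed-form demands.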
(the function is at a maximum or minimum along the tangent), or f'(x)(x̄ − x) = 0, the gradient vector, f'(x), must be perpendicular to the direction of the tangent. At the optimum, the level sets of f and g have a common tangent, so f'(x) and g'(x) are collinear, or f'(x) = −λ g'(x). It can also be seen with a little thought that for the solution to be a local constrained maximum, λ must be positive if g is quasiconcave and negative if g is quasiconvex (in either case, the constraint curve is the boundary of a convex set).

We now consider the equality constrained optimisation problem in more depth.

Theorem 3.4.1 (First order (necessary) conditions for optimisation with equality constraints.) Consider problem (3.4.1) or the corresponding minimisation problem. If

1. f and g are continuously differentiable,

2. x* solves this problem (which implies that g(x*) = 0), and

3. the m × n matrix

    g'(x*) = [∂g¹/∂x₁(x*) ... ∂g¹/∂x_n(x*); ...; ∂g^m/∂x₁(x*) ... ∂g^m/∂x_n(x*)]

is of rank m (i.e. there are no redundant constraints, both in the sense that there are fewer constraints than variables and in the sense that the constraints which are present are 'independent'),

then ∃λ* ∈ ℝ^m such that f'(x*) + λ*ᵀ g'(x*) = 0 (i.e. f'(x*) is in the m-dimensional subspace generated by the m vectors g¹'(x*), ..., g^m'(x*)).

Proof The idea is to solve g(x*) = 0 for m variables as a function of the other n − m and to substitute the solution into the objective function to give an unconstrained problem with n − m variables. In other words, we must find the m weights λ₁, ..., λ_m to prove that f'(x*) is a linear combination of g¹'(x*), ..., g^m'(x*). For this proof, we need Theorem 2.6.1 above, the Implicit Function Theorem. Without loss of generality, we assume that the first m columns of g'(x*) are linearly independent (if not, then we merely relabel the variables accordingly). Now we can partition the vector x* as (y*, z*) and, using the notation of the Implicit Function Theorem, find a neighbourhood Z of z* and a function h defined on Z such that

    g(h(z), z) = 0 ∀z ∈ Z
and also

    h'(z*) = −(D_y g)⁻¹ D_z g.

Now define a new objective function F : Z → ℝ by

    F(z) ≡ f(h(z), z).

Since x* solves the constrained problem max_{x∈X} f(x) subject to g(x) = 0, it follows that z* solves the unconstrained problem max_{z∈Z} F(z). (This is easily shown using a proof by contradiction argument.) Hence, z* satisfies the first order conditions for unconstrained maximisation of F, namely F'(z*) = 0. Applying the Chain Rule in exactly the same way as in the proof of the Implicit Function Theorem yields an equation which can be written in shorthand as:

    D_y f h'(z) + D_z f = 0.

Substituting for h'(z) gives:

    D_y f (D_y g)⁻¹ D_z g = D_z f.

Q.E.D.

Theorem 3.4.2 (Second order (sufficient or concavity) conditions for maximisation with equality constraints.) If

1. f and g are differentiable,
2. f'(x*) + λ*ᵀ g'(x*) = 0 (i.e. the first order conditions are satisfied at x*),

3. λ*_j ≥ 0 for j = 1, ..., m (all the Lagrange multipliers are non-negative),

4. f is pseudoconcave, and

5. g^j is quasiconcave for j = 1, ..., m,

then x* solves the constrained maximisation problem.

Proof Suppose that the second order conditions are satisfied, but that x* is not a constrained maximum. We will derive a contradiction. Since x* is not a maximum, ∃x ≠ x* such that g(x) = 0 but f(x) > f(x*). Since the constraints are satisfied at both x and x*, we have g(x*) = g(x) = 0. By quasiconcavity of the constraint functions (see Theorem 3.2.6), g^j(x) − g^j(x*) = 0 implies that g^j'(x*)(x − x*) ≥ 0. By pseudoconcavity, f(x) − f(x*) > 0 implies that f'(x*)(x − x*) > 0, so

    (f'(x*) + λ*ᵀ g'(x*))(x − x*) > 0.

But the first order condition guarantees that the LHS of this inequality is zero (not positive), which is the required contradiction. Q.E.D.

It should be clear that non-positive Lagrange multipliers and quasiconvex constraint functions can take the place of non-negative Lagrange multipliers and quasiconcave constraint functions to give an alternative set of second order conditions.

Theorem 3.4.3 (Uniqueness condition for equality constrained maximisation.) If

1. x* is a solution,

2. f is strictly quasiconcave, and

3. g^j is an affine function (i.e. both convex and concave) for j = 1, ..., m,

then x* is the unique (global) maximum.
Proof The uniqueness result is also proved by contradiction. Suppose x ≠ x* are two distinct solutions.

• We first show that the feasible set is convex. Consider the convex combination of these two solutions, x_α ≡ αx + (1 − α)x*. Since each g^j is affine and g^j(x*) = g^j(x) = 0, x_α also satisfies the constraints.

• To complete the proof, we find the required contradiction: since f is strictly quasiconcave and f(x*) = f(x), it must be the case that f(x_α) > f(x*). Q.E.D.

The construction of the obvious corollaries for minimisation problems is left as an exercise.

We conclude this section with the Envelope Theorem.

Theorem 3.4.4 (Envelope Theorem.) Consider the modified constrained optimisation problem:

    max_x f(x, α) subject to g(x, α) = 0,    (3.4.2)

where x ∈ ℝ^n, α ∈ ℝ^q, f : ℝ^{n+q} → ℝ and g : ℝ^{n+q} → ℝ^m (i.e. as usual f is the real-valued objective function and g is a vector of m real-valued constraint functions, but either or both can depend on exogenous or control variables α as well as on the endogenous or choice variables x). Suppose that the standard conditions for application of the Lagrange multiplier theorems are satisfied, and let M(α) denote the maximum value attainable for given α. Then the partial derivative of M with respect to α_i is just the partial derivative of the relevant Lagrangean, f + λᵀg, with respect to α_i, evaluated at the optimal value of x. The dependence of the vector of Lagrange multipliers, λ, on the vector α should be ignored in calculating the last-mentioned partial derivative.

Proof The Envelope Theorem can be proved in the following steps:
1. Write down the identity relating the functions M, f and x*:

    M(α) ≡ f(x*(α), α).

2. Apply (2.5.4) to derive an expression for the partial derivatives ∂M/∂α_i of M in terms of the partial derivatives of f and x*:

    M'(α) = D_x f(x*(α), α) x*'(α) + D_α f(x*(α), α).

3. The first order (necessary) conditions for constrained optimisation say that

    D_x f(x*(α), α) = −λᵀ D_x g(x*(α), α),

and allow us to eliminate the ∂f/∂x_i terms from this expression.

4. Apply (2.5.4) again to the identity g(x*, α) = 0_m to obtain

    D_x g(x*(α), α) x*'(α) + D_α g(x*(α), α) = 0_{m×q}.

Finally, use this result to eliminate the ∂g/∂x_i terms from your new expression for ∂M/∂α_i. Combining all these results gives:

    M'(α) = D_α f(x*(α), α) + λᵀ D_α g(x*(α), α),

which is the required result. Q.E.D.

The most frequently encountered applications in economics will make sufficient assumptions to guarantee that

1. the Hessian f''(x*) is a negative definite (n × n) matrix,

2. g is a linear function, and

3. x* satisfies the first order conditions with each λ_i ≥ 0,

so that x* is the unique optimal solution to the equality constrained optimisation problem.
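The four proof steps can be checked numerically on a hypothetical example: maximise f(x₁, x₂) = x₁x₂ subject to x₁ + x₂ = α. The maximiser is x₁* = x₂* = α/2, so M(α) = α²/4 and the multiplier at the optimum is λ = α/2; the Envelope Theorem predicts M′(α) = λ.

```python
# A numerical check of the Envelope Theorem on an illustrative problem:
# maximise f(x1, x2) = x1*x2 subject to g(x, alpha) = alpha - x1 - x2 = 0.
# The maximiser is x1* = x2* = alpha/2, so M(alpha) = alpha**2 / 4 and the
# multiplier at the optimum is lambda = alpha/2.

def M(alpha):
    # Envelope (maximum value) function, via the closed-form solution.
    x = alpha / 2
    return x * x

alpha = 3.0
lam = alpha / 2                                  # Lagrange multiplier
h = 1e-6
dM = (M(alpha + h) - M(alpha - h)) / (2 * h)     # numerical M'(alpha)
# Envelope Theorem: M'(alpha) = dL/d(alpha) = lambda, holding x and lambda fixed.
print(abs(dM - lam) < 1e-6)
```

The check prints True: differentiating the envelope function reproduces the multiplier, exactly as the theorem asserts.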
3.5 Inequality Constrained Optimisation: The Kuhn-Tucker Theorems

Throughout this section, we deal with the inequality constrained optimisation problem

    max_{x∈X} f(x) s.t. g^i(x) ≥ 0, i = 1, 2, ..., m    (3.5.1)

where once again X ⊆ ℝ^n, f : X → ℝ is a real-valued function of several variables, called the objective function of Problem (3.5.1), and g : X → ℝ^m is a vector-valued function of several variables, called the constraint function of Problem (3.5.1).

Before presenting the usual theorems formally, we need some graphical motivation concerning the interpretation of Lagrange multipliers. Suppose that the constraint functions are given by

    g(x, α) = α − h(x).    (3.5.2)

The Lagrangean is

    L(x, λ) = f(x) + λᵀ(α − h(x)).    (3.5.3)

Thus, using the Envelope Theorem, it is easily seen that the rate of change of the envelope function M(α) with respect to the 'level' of the ith underlying constraint function h^i is:

    ∂M/∂α_i = ∂L/∂α_i = λ_i.    (3.5.4)

Thus

• when λ_i = 0, the envelope function is at its maximum, or the objective function at its unconstrained maximum, and the constraint is not binding;

• when λ_i < 0, the envelope function is decreasing; and

• when λ_i > 0, the envelope function is increasing.

Now consider how the nature of the inequality constraint changes as α_i increases (as illustrated, assuming f quasiconcave as usual and h^i quasiconvex or g^i quasiconcave, so that the relationship between α_i and λ_i is negative):

    h^i(x) ≤ α_i    (3.5.5)
(or g^i(x, α) ≥ 0), and

    ∂²M/∂α_i² = ∂λ_i/∂α_i < 0,    (3.5.6)

so that the envelope function is strictly concave (up to the unconstrained optimum, and constant beyond it). For values of α_i such that λ_i > 0, this constraint is strictly binding. For values of α_i such that λ_i = 0, this constraint is just binding. For values of α_i such that λ_i < 0, this constraint is non-binding. Thus we will find that part of the necessity conditions below is that the Kuhn-Tucker multipliers be non-negative. (For equality constrained optimisation, the signs were important only when dealing with second order conditions.)

Consider also at this stage the first order conditions for maximisation of a function of one variable subject to a non-negativity constraint:

    max f(x) s.t. x ≥ 0.

They can be expressed as:

    f'(x*) ≤ 0    (3.5.7)
    f'(x*) = 0 if x* > 0.    (3.5.8)

The various sign conditions which we have looked at are summarised in Table 3.1:

    Type of constraint     | Derivative of objective fn. | Lagrange multiplier | Constraint fn.
    Binding/active         | f' ≤ 0                      | λ ≥ 0               | g = 0
    Non-binding/inactive   | f' = 0                      | λ = 0               | g > 0

Table 3.1: Sign conditions for inequality constrained optimisation

Theorem 3.5.1 (Necessary (first order) conditions for optimisation with inequality constraints.) Note that in this setup, the first b constraints are binding (active) at x*, with g^i(x*) = 0, i = 1, ..., b, and the last m − b are non-binding (inactive) at x*, with g^i(x*) > 0, i = b + 1, ..., m (renumbering the constraints if necessary to achieve this). If

1. x* solves Problem (3.5.1),
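The sign conditions in Table 3.1 can be illustrated on the one-variable problem above. The sketch below uses the illustrative family f(x) = −(x − c)²: for c > 0 the non-negativity constraint is non-binding and f′(x*) = 0, while for c < 0 it binds and f′(x*) ≤ 0, matching (3.5.7)-(3.5.8).

```python
# Sketch of the sign conditions for max f(x) s.t. x >= 0, using the
# illustrative family f(x) = -(x - c)**2.
# If c > 0 the constraint is non-binding: x* = c > 0 and f'(x*) = 0.
# If c < 0 the constraint binds: x* = 0 and f'(x*) = 2c <= 0.

def solve(c):
    x_star = max(c, 0.0)           # maximiser of -(x - c)^2 over x >= 0
    fprime = -2 * (x_star - c)     # f'(x*)
    return x_star, fprime

x1, d1 = solve(2.0)    # interior (non-binding) case
x2, d2 = solve(-1.0)   # corner (binding) case
print(x1 > 0 and abs(d1) < 1e-12)   # non-binding: f'(x*) = 0
print(x2 == 0 and d2 <= 0)          # binding: f'(x*) <= 0
```

Both lines print True, reproducing the two rows of Table 3.1 for this simple problem.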
2. f and g are continuously differentiable, and

3. the b × n submatrix of g'(x*),

    [∂g¹/∂x₁(x*) ... ∂g¹/∂x_n(x*); ...; ∂g^b/∂x₁(x*) ... ∂g^b/∂x_n(x*)],

is of full rank b (i.e. there are no redundant binding constraints, both in the sense that there are fewer binding constraints than variables and in the sense that the constraints which are binding are 'independent'),

then ∃λ ∈ ℝ^m such that f'(x*) + λᵀ g'(x*) = 0, with λ_i ≥ 0 for i = 1, 2, ..., m and g^i(x*) = 0 if λ_i > 0.

Proof The proof is similar to that of Theorem 3.4.1 for the equality constrained case. It can be broken into seven steps.

1. We begin by restricting attention to a neighbourhood B_ε(x*) throughout which the non-binding constraints remain non-binding, i.e.

    g^i(x) > 0 ∀x ∈ B_ε(x*), i = b + 1, ..., m.    (3.5.9)

Such a neighbourhood exists since the constraint functions are continuous. Since the non-binding constraints are non-binding ∀x ∈ B_ε(x*) by construction, we can ignore them if we confine our search for a maximum to this neighbourhood. We will return to the non-binding constraints in the very last step of this proof, but until then g will be taken to refer to the vector of b binding constraint functions only and λ to the vector of b Kuhn-Tucker multipliers corresponding to these binding constraints.

2. Since x* solves Problem (3.5.1) by assumption, it also solves the following problem:

    max_{x∈B_ε(x*)} f(x) s.t. g^i(x) ≥ 0, i = 1, 2, ..., b.    (3.5.10)

We now introduce slack variables s ≡ (s₁, ..., s_b), and consider the following equality constrained maximisation problem:

    max_{x∈B_ε(x*), s∈ℝ^b₊} f(x)
s.t.

    G^i(x, s) ≡ g^i(x) − s_i = 0, i = 1, 2, ..., b,    (3.5.11)

or, in vector notation,

    G(x, s) = 0_b.    (3.5.12)

Since x* solves Problem (3.5.10) and all b constraints in that problem are binding at x*, it can be seen that (x*, 0_b) solves this new problem. For consistency of notation, we define s* ≡ 0_b.

3. As in the Lagrange case, we use the Implicit Function Theorem to solve the system of b equations in n + b unknowns, (3.5.12), for the first b variables in terms of the last n. To do this, we partition the vector of choice and slack variables three ways:

    (x, s) ≡ (y, z, s)    (3.5.13)

where y ∈ ℝ^b and z ∈ ℝ^{n−b}, and correspondingly partition the matrix of partial derivatives evaluated at the optimum:

    G'(y*, z*, s*) = [D_y G  D_{z,s} G] = [D_y G  D_z G  D_s G] = [D_y g  D_z g  −I_b].

The rank condition allows us to apply the Implicit Function Theorem and to find a function h : ℝ^n → ℝ^b such that y = h(z, s) is a solution to G(y, z, s) = 0_b, with h(z*, s*) = y* and

    h'(z*, s*) = −(D_y g)⁻¹ D_{z,s} G,    (3.5.18)

which can in turn be partitioned to yield

    D_z h = −(D_y g)⁻¹ D_z G = −(D_y g)⁻¹ D_z g    (3.5.19)

and

    D_s h = −(D_y g)⁻¹ D_s G = (D_y g)⁻¹ I_b = (D_y g)⁻¹.    (3.5.20)

4. This solution can be substituted into the original objective function f to create a new objective function F defined by

    F(z, s) ≡ f(h(z, s), z)    (3.5.21)
and another new maximisation problem where there are only (implicit) non-negativity constraints:

    max_{z∈B_ε(z*), s∈ℝ^b₊} F(z, s).    (3.5.22)

It should be clear that (z*, 0_b) solves Problem (3.5.22).

5. The first order conditions for Problem (3.5.22) are just that the partial derivatives of F with respect to the remaining n − b choice variables equal zero (according to the first order conditions for unconstrained optimisation), while the partial derivatives of F with respect to the b slack variables must be less than or equal to zero. We know that

    D_z F = D_y f D_z h + D_z f I_{n−b} = 0_{n−b}.    (3.5.23)

Substituting for D_z h from (3.5.19) gives:

    D_y f (D_y g)⁻¹ D_z g = D_z f.    (3.5.24)

The Kuhn-Tucker multipliers can now be found exactly as in the Lagrange case, by

    λᵀ ≡ −D_y f (D_y g)⁻¹.    (3.5.25)

6. Next, we calculate the partial derivatives of F with respect to the slack variables and show that they can be less than or equal to zero if and only if the Kuhn-Tucker multipliers corresponding to the binding constraints are greater than or equal to zero. This can be seen by differentiating both sides of (3.5.21) with respect to s to obtain:

    D_s F = D_y f D_s h + D_z f 0_{(n−b)×b} = D_y f (D_y g)⁻¹ = −λᵀ.    (3.5.26)
7. Finally, just set the Kuhn-Tucker multipliers corresponding to the non-binding constraints equal to zero. Q.E.D.

Theorem 3.5.2 (Second order (sufficient or concavity) conditions for optimisation with inequality constraints.) If

1. f and g are differentiable,

2. ∃λ ∈ ℝ^m such that f'(x*) + λᵀ g'(x*) = 0, with λ_i ≥ 0 for i = 1, 2, ..., m and g^i(x*) = 0 if λ_i > 0 (i.e. the first order conditions are satisfied at x*),

3. f is pseudoconcave, and

4. g^j is a quasiconcave function for j = 1, 2, ..., b (i.e. the binding constraint functions are quasiconcave),

then x* solves the constrained maximisation problem.

Proof The proof just requires the first order conditions to be reduced to

    f'(x*) + Σ_{i=1}^b λ_i g^i'(x*) = 0,    (3.5.27)

from where it is virtually identical to that for the Lagrange case, and so it is left as an exercise. Q.E.D.

Theorem 3.5.3 (Uniqueness condition for inequality constrained optimisation.) If

1. x* is a solution,

2. f is strictly quasiconcave, and

3. g^j is quasiconcave for j = 1, 2, ..., m,

then x* is the unique (global) optimal solution.
Proof The proof is again similar to that for the Lagrange case and is left as an exercise. Q.E.D.

The point to note this time is that the feasible set with equality constraints was convex if the constraint functions were affine, whereas the feasible set with inequality constraints is convex if the constraint functions are quasiconcave. This is because the feasible set (where all the inequality constraints are satisfied simultaneously) is the intersection of m upper contour sets of quasiconcave functions, or the intersection of m convex sets.

The last important result on optimisation, the Theorem of the Maximum, is closely related to the Envelope Theorem. Theorem 3.5.4 will be used in consumer theory to prove such critical results as the continuity of demand functions derived from the maximisation of continuous utility functions. Before proceeding to the statement of the theorem, the reader may want to review Definition 2.3.12, along with some of the more technical material on continuity of (multi-valued) correspondences.

Theorem 3.5.4 (Theorem of the maximum) Consider the modified inequality constrained optimisation problem:

    max_x f(x, α) subject to g^i(x, α) ≥ 0, i = 1, 2, ..., m,    (3.5.28)

where x ∈ ℝ^n, α ∈ ℝ^q, f : ℝ^{n+q} → ℝ and g : ℝ^{n+q} → ℝ^m. Let x*(α) denote the optimal choice of x for given α (x* : ℝ^q → ℝ^n) and let M(α) denote the maximum value attainable by f for given α (M : ℝ^q → ℝ). If

1. f is continuous,

2. the range of f is closed and bounded, and

3. the constraint set is a non-empty, compact-valued, continuous correspondence of α,

then

1. x* is an upper hemi-continuous correspondence, and hence is continuous if it is a continuous (single-valued) function, and

2. M is a continuous (single-valued) function.

Proof The proof of this theorem is omitted. Q.E.D.

The following are two frequently encountered examples illustrating the use of the Kuhn-Tucker theorems in economics.⁷

⁷The calculations should be left as exercises.
1. The canonical quadratic programming problem. Find the vector x ∈ ℝ^n which maximises the value of the quadratic form xᵀAx subject to the m linear inequality constraints g_iᵀx ≥ α_i, where A ∈ ℝ^{n×n} is negative definite and g_i ∈ ℝ^n for i = 1, ..., m. Let G be the m × n matrix whose ith row is g_iᵀ. G must have full rank if we are to apply the Kuhn-Tucker conditions.

The objective function can always be rewritten as a quadratic form in a symmetric (negative definite) matrix, since, as xᵀAx is a scalar,

    xᵀAx = (xᵀAx)ᵀ = xᵀAᵀx = ½ xᵀAx + ½ xᵀAᵀx = xᵀ (½(A + Aᵀ)) x,

and ½(A + Aᵀ) is always symmetric.

The Lagrangean is:

    xᵀAx + λᵀ(Gx − α).    (3.5.31)

The first order conditions are:

    2xᵀA + λᵀG = 0ᵀ_n    (3.5.32)

or, transposing and multiplying across by ½A⁻¹:

    x = −½ A⁻¹Gᵀλ.    (3.5.33)

If the constraints are binding, then we will have:

    α = Gx = −½ GA⁻¹Gᵀλ.    (3.5.34)

Now we need the fact that G (and hence GA⁻¹Gᵀ) has full rank to solve for the Lagrange multipliers λ:

    λ = −2 (GA⁻¹Gᵀ)⁻¹ α.    (3.5.35)

Now the sign conditions tell us that each component of λ must be non-negative. An easy fix is to let the Kuhn-Tucker multipliers be defined by:

    λ* ≡ max {0_m, −2 (GA⁻¹Gᵀ)⁻¹ α},    (3.5.38)

where the max operator denotes component-by-component maximisation. The effect of this is to knock out the non-binding constraints (those with negative Lagrange multipliers) from the original problem and the subsequent analysis.

We can now find the optimal x by substituting the value of λ* from (3.5.38) into (3.5.33). In the case in which all the constraints are binding, the solution is:

    x = A⁻¹Gᵀ (GA⁻¹Gᵀ)⁻¹ α    (3.5.39)

and the envelope function is given by:

    xᵀAx = αᵀ (GA⁻¹Gᵀ)⁻¹ GA⁻¹ A A⁻¹Gᵀ (GA⁻¹Gᵀ)⁻¹ α = αᵀ (GA⁻¹Gᵀ)⁻¹ α = −½ αᵀλ.    (3.5.40)

The applications of this problem will include ordinary least squares and generalised least squares regression and the mean-variance portfolio choice problem in finance.

2. Maximising a Cobb-Douglas utility function subject to a budget constraint and non-negativity constraints. The applications of this problem will include choice under certainty; choice under uncertainty with log utility, where the parameters are reinterpreted as probabilities; intertemporal choice with log utility, where the parameters are reinterpreted as time discount factors; and the extension to Stone-Geary preferences.

Further exercises consider the duals of each of the foregoing problems, and it is to the question of duality that we will turn in the next section.

3.6 Duality

Let X ⊆ ℝ^n and let f, g : X → ℝ be, respectively, pseudoconcave and pseudoconvex functions. Consider the envelope functions defined by the dual families of problems:

    M(α) ≡ max_x f(x) s.t. g(x) ≤ α    (3.6.1)
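The closed-form quadratic programming solution can be verified numerically. The sketch below uses illustrative matrices (and assumes numpy is available): it checks that x = A⁻¹Gᵀ(GA⁻¹Gᵀ)⁻¹α from (3.5.39) satisfies the constraint with equality, that the multiplier from (3.5.35) is non-negative, and that the envelope identity xᵀAx = −½αᵀλ from (3.5.40) holds.

```python
# Numerical check of the closed-form quadratic programming solution (3.5.39)
# when all constraints bind. Matrices below are illustrative: A is negative
# definite and there is a single linear constraint g'x >= alpha.
import numpy as np

A = np.array([[-2.0, 0.0],
              [0.0, -1.0]])        # negative definite
G = np.array([[1.0, 1.0]])          # one constraint, full rank
alpha = np.array([1.0])

Ainv = np.linalg.inv(A)
middle = np.linalg.inv(G @ Ainv @ G.T)
x = Ainv @ G.T @ middle @ alpha          # (3.5.39)
lam = -2.0 * middle @ alpha              # (3.5.35); must be >= 0

print(np.allclose(G @ x, alpha))                     # constraint binds
print(bool((lam >= 0).all()))                        # sign condition holds
print(bool(np.isclose(x @ A @ x, -0.5 * alpha @ lam)))  # envelope (3.5.40)
```

All three checks print True; with these numbers x = (1/3, 2/3), λ = 4/3 and the optimal value is −2/3.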
and

    N(β) ≡ min_x g(x) s.t. f(x) ≥ β.    (3.6.2)

Suppose that these problems have solutions, say x*(α) and x†(β) respectively, and that the constraints bind at these points. The first order conditions for the two problems are respectively:

    f'(x) − λ g'(x) = 0    (3.6.3)

and

    g'(x) + µ f'(x) = 0,    (3.6.4)

where λ and µ are the relevant Lagrange multipliers. Thus if x and λ* ≠ 0 solve (3.6.3), then x and µ* ≡ −1/λ solve (3.6.4). However, for the x which solves the original problem (3.6.1) to also solve (3.6.2), it must also satisfy the constraint, i.e. f(x) ≥ β, or f(x) = β. But we know that f(x) = M(α). This allows us to conclude that:

    x*(α) = x†(M(α))    (3.6.5)

and similarly

    x†(β) = x*(N(β)).    (3.6.6)

Combining these equations leads to the conclusion that

    α = N(M(α))    (3.6.7)

and

    β = M(N(β)).    (3.6.8)

In other words, the envelope functions for the two dual problems are inverse functions (over any range where the Lagrange multipliers are non-zero, i.e. where the constraints are binding). In particular, either α or β, or indeed λ or µ, can be used to parameterise either family of problems. We will see many examples of these principles in the applications in the next part of the book. Duality will be covered further in the context of its applications to consumer theory in Section 4.?.
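The inverse relationship (3.6.7)-(3.6.8) can be illustrated numerically on a hypothetical dual pair with known closed forms: maximising x₁x₂ subject to x₁ + x₂ ≤ α gives M(α) = α²/4, while minimising x₁ + x₂ subject to x₁x₂ ≥ β gives N(β) = 2√β.

```python
# Duality check on an illustrative pair of problems:
#   M(alpha) = max x1*x2  s.t. x1 + x2 <= alpha  =>  M(alpha) = alpha**2 / 4
#   N(beta)  = min x1+x2  s.t. x1*x2 >= beta     =>  N(beta)  = 2*sqrt(beta)
# The envelope functions should be inverses: N(M(alpha)) = alpha.
import math

def M(alpha):
    return alpha ** 2 / 4          # attained at x1 = x2 = alpha/2

def N(beta):
    return 2 * math.sqrt(beta)     # attained at x1 = x2 = sqrt(beta)

alpha = 5.0
print(abs(N(M(alpha)) - alpha) < 1e-12)
print(abs(M(N(alpha)) - alpha) < 1e-12)
```

Both checks print True, confirming that the two envelope functions are inverses of one another on this example, exactly as (3.6.7) and (3.6.8) assert.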
Part II

APPLICATIONS
Chapter 4

CHOICE UNDER CERTAINTY

4.1 Introduction

[To be written.]

4.2 Definitions

There are two possible types of economy which we could analyse:

• a pure exchange economy, in which households are endowed directly with goods, and economic activity consists solely of pure exchanges of an initial aggregate endowment; there is no production, and there are no firms; and

• a production economy, in which households are further indirectly endowed with shares in the profit or loss of firms, which can use part of the initial aggregate endowment as inputs to production processes whose outputs are also available for trade and consumption.

Economies of these types comprise:

• H households or consumers or agents or (later) investors or, merely, individuals, indexed by the subscript¹ h;

• N goods or commodities, indexed by the superscript n; and

• (in the case of a production economy only) F firms, indexed by f.

¹Notation in what follows is probably far from consistent in regard to superscripts and subscripts and in regard to ith or nth good and needs fixing.
This chapter concentrates on the theory of optimal consumer choice and of equilibrium in a pure exchange economy. The theory of optimal production decisions and of equilibrium in an economy with production is mathematically similar.

Goods can be distinguished from each other in many ways:

• obviously, by intrinsic physical characteristics, e.g. apples or oranges;

• by the time at which they are consumed, e.g. an Easter egg delivered before Easter Sunday or an Easter egg delivered after Easter Sunday (while all trading takes place simultaneously in the model, consumption can be spread out over many periods); and

• by the state of the world in which they are consumed, e.g. the service provided by an umbrella on a wet day or the service which it provides on a dry day. (Typically, the state of the world can be any point ω in a sample space Ω.)

The important characteristics of household h are that it is faced with the choice of a consumption vector or consumption plan or consumption bundle, x_h = (x_h¹, ..., x_h^N), from a (closed, convex) consumption set, X_h. Typically, X_h = ℝ^N₊, although this is not essential: consumer h's consumption set might require a certain subsistence consumption of some commodities, such as water, and rule out points of ℝ^N₊ not meeting this requirement. Since each household will have different preferences, we will assume for the time being that each household chooses from the same consumption set, X.

The household's endowments are denoted e_h ∈ ℝ^N₊ and can be traded. A consumer's net demand is denoted z_h ≡ x_h − e_h ∈ ℝ^N. In a production economy, the shareholdings of households in firms are denoted c_h ∈ ℝ^F.

Each consumer is assumed to have a preference relation or (weak) preference ordering ≽ which is a binary relation on the consumption set X_h (?, Chapter 7). (We should really denote household h's preference relation ≽_h, but the subscript will be omitted for the time being while we consider a single household.) Recall (see ?) that a binary relation R on X is just a subset R of X × X, or a collection of pairs (x, y) where x ∈ X and y ∈ X. If (x, y) ∈ R, we usually just write xRy. Thus x ≽ y means that either x is preferred to y or the consumer is indifferent between the two (i.e. that x is at least as good as y).

The following properties of a general relation R on a general set X are often of interest:
1. A relation R is reflexive ⇐⇒ xRx ∀x ∈ X.

2. A relation R is symmetric ⇐⇒ xRy ⇒ yRx.

3. A relation R is transitive ⇐⇒ xRy, yRz ⇒ xRz.

4. A relation R is complete ⇐⇒ ∀x, y ∈ X either xRy or yRx (or both) (in other words, a complete relation orders the whole set).

An indifference relation, ∼, and a strict preference relation, ≻, can be derived from every preference relation ⪰:

1. x ∼ y means x ⪰ y and y ⪰ x.

2. x ≻ y means x ⪰ y but not y ⪰ x.

The utility function u : X → ℝ represents the preference relation ⪰ if

    u(x) ≥ u(y) ⇐⇒ x ⪰ y.

If f : ℝ → ℝ is a monotonic increasing function and u represents the preference relation ⪰, then f ∘ u also represents ⪰, since f(u(x)) ≥ f(u(y)) ⇐⇒ u(x) ≥ u(y) ⇐⇒ x ⪰ y.

If X is a countable set, then there exists a utility function representing any preference relation on X. To prove this, just write out the consumption plans in X in order of preference, and assign numbers to them, assigning the same number to any two or more consumption plans between which the consumer is indifferent. If X is an uncountable set, then there may not exist a utility function representing every preference relation on X.
4.3 Axioms

We now consider six axioms which are frequently assumed to be satisfied by preference relations when considering consumer choice under certainty. After the definition of each axiom, we will give a brief rationale for its use. Further axioms that are often added to simplify the analysis of consumer choice under uncertainty will be considered in Chapter 5.

Axiom 1 (Completeness) A (weak) preference relation is complete. Completeness means that the consumer is never agnostic.

Axiom 2 (Reflexivity) A (weak) preference relation is reflexive. Reflexivity means that each bundle is at least as good as itself. (Note that symmetry would not be a very sensible axiom!)

Axiom 3 (Transitivity) A (weak) preference relation is transitive. Transitivity means that preferences are rational and consistent.

Axiom 4 (Continuity) The preference relation ⪰ is continuous, i.e. for all consumption plans y ∈ X the sets B_y ≡ {x ∈ X : x ⪰ y}, the set of consumption plans which are better than or as good as y, and W_y ≡ {x ∈ X : y ⪰ x}, the set of consumption plans which are worse than or as good as y, are closed sets.

We will see shortly that B_y and W_y, if such exist, are just the upper contour sets and lower contour sets respectively of utility functions. Theorems on the existence of continuous utility functions have been proven by Gerard Debreu, Nobel laureate, whose proof used Axioms 1–4 only (see ? or ?), and by Hal Varian, whose proof was simpler by virtue of adding an additional axiom (see ?).

E.g., consider lexicographic preferences. A consumer with such preferences prefers more of commodity 1 regardless of the quantities of other commodities, and more of commodity 2 if faced with a choice between two consumption plans having the same amount of commodity 1. Consider the picture when N = 2: in the picture, the consumption plan y lies in the lower contour set W_{x*}, but B_ε(y) never lies completely in W_{x*} for any ε > 0. Thus, lexicographic preferences violate the continuity axiom: upper contour sets are not closed, and lower contour sets are not open.
Theorem 4.3.1 (Debreu) If X is a closed and convex set and ⪰ is a complete, reflexive, transitive and continuous preference relation on X, then ∃ a continuous utility function u : X → ℝ representing ⪰.

Proof For the proof of this theorem, see ?. Q.E.D.

We will prove the existence, but not the continuity, part of the following weaker theorem. The assumption that preferences are reflexive is not used in establishing existence of the utility function, so it can be inferred that it is required to establish continuity.

Axiom 5 (Greed) Greed is incorporated into consumer behaviour by assuming either

1. Local non-satiation: ∀x ∈ X, ∀ε > 0, ∃x′ ∈ B_ε(x) s.t. x′ ≻ x; or

2. Strong monotonicity: If X = ℝ^N_+, then ⪰ is said to be strongly monotonic iff whenever x_n ≥ y_n ∀n but x ≠ y, x ≻ y.

The strong monotonicity axiom is a much stronger restriction on preferences than local non-satiation; however, it greatly simplifies the proof of existence of utility functions.

Theorem 4.3.2 (Varian) If X = ℝ^N_+ and ⪰ is a complete, reflexive, transitive, continuous and strongly monotonic preference relation on X, then ∃ a continuous utility function u : X → ℝ representing ⪰.

Proof (of existence only (?, p. 97)) Pick a benchmark consumption plan, e.g. 1 ≡ (1, 1, ..., 1). The idea is that the utility of x is the multiple of the benchmark consumption plan to which x is indifferent. By strong monotonicity, the sets {t ∈ ℝ : t1 ⪰ x} and {t ∈ ℝ : x ⪰ t1} are both non-empty. By continuity of preferences, both are closed (each is the intersection of a ray through the origin and a closed set). By completeness, they cover ℝ. By connectedness of ℝ, they intersect in at least one point, u(x) say, and x ∼ u(x)1.

Now

    x ⪰ y ⇐⇒ u(x)1 ⪰ u(y)1 ⇐⇒ u(x) ≥ u(y),

where the first equivalence follows from transitivity of preferences and the second from strong monotonicity. Q.E.D.
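The construction in Varian's existence proof can be illustrated numerically. This is a hedged sketch, not part of the original notes: the preference ordering below is an assumed Cobb-Douglas one, represented for convenience by the index g(x) = √(x1·x2), and the bisection search locates the multiple t of the benchmark bundle 1 to which x is indifferent, using only weak-preference comparisons.

```python
import math

# Hedged sketch: computing the Varian utility index u(x), i.e. the t with
# x ~ t*(1,...,1), by bisection along the ray through the benchmark bundle.
def varian_utility(x, weakly_prefers, t_hi=1e6, tol=1e-10):
    """weakly_prefers(a, b) returns True iff a is at least as good as b.
    By strong monotonicity, {t : t*1 >= x} is an upper interval, so we can
    bisect for its lower endpoint."""
    t_lo = 0.0
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if weakly_prefers([t_mid] * len(x), x):  # is t*1 at least as good as x?
            t_hi = t_mid
        else:
            t_lo = t_mid
    return 0.5 * (t_lo + t_hi)

# Assumed Cobb-Douglas ordering (not from the notes): g(x) = sqrt(x1*x2).
g = lambda x: math.sqrt(x[0] * x[1])
pref = lambda a, b: g(a) >= g(b)

# On the benchmark ray, g(t*1) = t, so x = (4, 1) should be indifferent
# to 2*(1, 1), giving u(x) = 2.
u = varian_utility((4.0, 1.0), pref)
```

Note that the bisection never evaluates g directly on its own account: it only asks ordinal questions of the form "is t·1 at least as good as x?", which is exactly the information the preference relation provides.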
The rule which the consumer will follow is to choose the most preferred bundle from the set of affordable alternatives (budget set), in other words the bundle at which the utility function is maximised subject to the budget constraint. We know that an optimal choice will exist if the utility function is continuous and the budget set is closed and bounded. If the utility function is differentiable, we can go further and use calculus to find the maximum, so we usually assume differentiability. Notwithstanding this, convexity of preferences is important, as indicated by the use of one or other of the following axioms.

Axiom 6 (Convexity) There are two versions of this axiom:

1. Convexity: The preference relation ⪰ is convex ⇐⇒ x ⪰ y ⟹ λx + (1 − λ)y ⪰ y.

2. Strict convexity: The preference relation ⪰ is strictly convex ⇐⇒ x ⪰ y, x ≠ y ⟹ λx + (1 − λ)y ≻ y.

The difference between the two versions of the convexity axiom basically amounts to ruling out linear segments in indifference curves in the strict case.

Theorem 4.3.3 The preference relation ⪰ is (strictly) convex if and only if every utility function representing ⪰ is a (strictly) quasiconcave function.

Proof In either case, both statements are equivalent to saying that

    u(x) ≥ u(y) ⟹ u(λx + (1 − λ)y) ≥ (>) u(y).    (4.3.1)

Q.E.D.

Note that quasiconcavity, unlike concavity, is a property which relates to the preference relation itself and not to the particular utility function chosen to represent it. If u is a concave utility function and f is a monotonic increasing function, then f ∘ u, which also represents the same preferences, is not necessarily a concave function (unless f itself is a concave function). In other words, concavity of a utility function is a property of the particular representation and not of the underlying preferences.
It can be seen that convexity of preferences is a generalisation of the two-good assumption of a diminishing marginal rate of substitution. Intuitively, indifference curves are convex to the origin for convex preferences.

A further assumption that is sometimes made just says that indifference curves may be asymptotic to the axes but never reach them. It is not easy to see how to express it in terms of the underlying preference relation, so perhaps it cannot be elevated to the status of an axiom.

4.4 Optimal Response Functions: Marshallian and Hicksian Demand

4.4.1 The consumer's problem

A consumer with consumption set X_h, endowment vector e_h ∈ X_h, shareholdings c_h ∈ ℝ^F and preference ordering ⪰_h represented by utility function u_h, who desires to trade his endowment at prices p ∈ ℝ^N_+, faces an inequality constrained optimisation problem:

    max_{x ∈ X_h} u_h(x) s.t. p·x ≤ p·e_h + c_h·Π(p) ≡ M_h    (4.4.1)

where Π(p) is the vector of the F firms' maximised profits when prices are p. From a mathematical point of view, the source of income is irrelevant, and in particular the distinction between pure exchange and production economy is irrelevant: income can be represented by M in either case. Constraining x to lie in the consumption set normally just means imposing non-negativity constraints on the problem.
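The consumer's problem can be made concrete with a worked example. This is a hedged illustration, not from the notes themselves: the utility function is an assumed Cobb-Douglas one, u(x) = a1·ln(x1) + a2·ln(x2), for which the utility-maximising choice has the well-known closed form x_i = (a_i / (a1 + a2))·M / p_i, and the script checks the two hallmarks of the solution: the budget constraint binds, and the marginal rate of substitution equals the price ratio.

```python
# Hedged sketch of the consumer's problem for an assumed Cobb-Douglas
# utility u(x) = a1*ln(x1) + a2*ln(x2) (two goods, exogenous income M).
def marshallian_demand(a, p, M):
    """Closed-form Marshallian demand: x_i = (a_i / sum(a)) * M / p_i."""
    s = sum(a)
    return [ai / s * M / pi for ai, pi in zip(a, p)]

a, p, M = (0.3, 0.7), (2.0, 5.0), 100.0
x = marshallian_demand(a, p, M)

# The budget constraint binds (local non-satiation): p . x = M.
budget = sum(pi * xi for pi, xi in zip(p, x))

# First order condition: MRS = (a1/x1) / (a2/x2) equals the price ratio p1/p2.
mrs = (a[0] / x[0]) / (a[1] / x[1])
```

With these hypothetical numbers the demands are x = (15, 14), expenditure is exactly 100 and the MRS equals 2/5, the price ratio.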
The Lagrangian, using multipliers λ for the budget constraint and μ ∈ ℝ^N for the non-negativity constraints, is

    u_h(x) + λ(M − p·x) + μ·x.    (4.4.2)

The first order conditions are given by the (N-dimensional) vector equation:

    ∇u_h(x) + λ(−p) + μ = 0_N    (4.4.3)

and the sign condition λ ≥ 0, with λ > 0 if the budget constraint is binding. We also have μ = 0_N unless one of the non-negativity constraints is binding; Axiom 7 would rule out this possibility.

Since the constraint functions are linear in the choice variables x, the Kuhn-Tucker theorem on second order conditions can be applied: provided that the utility function u_h is pseudo-concave, the first order conditions identify a maximum. If the utility function u_h is also strictly quasiconcave (i.e. preferences are strictly convex), then the conditions of the Kuhn-Tucker theorem on uniqueness (Theorem 3.5.2) are satisfied. In this case, the consumer's problem has a unique solution for given prices and income, so that the optimal response correspondence is a single-valued demand function, denoted x_h(p, M_h). On the other hand, the weak form of the convexity axiom would permit a multi-valued demand correspondence.

Now for each p ∈ ℝ^N_++ (ruling out bads, or goods with negative prices, and even (see below) free goods), or for each (p, M_h) combination, there is a corresponding solution to the consumer's utility maximisation problem. The function (correspondence) x_h is often called a Marshallian demand function (correspondence).

4.4.2 The No Arbitrage Principle

Definition 4.4.1 An arbitrage opportunity means the opportunity to acquire a consumption vector or its constituents, directly or indirectly, at one price, and to sell the same consumption vector or its constituents, directly or indirectly, at a higher price.

Theorem 4.4.1 (The No Arbitrage Principle) Arbitrage opportunities do not exist in equilibrium in an economy in which at least one agent has preferences which exhibit local non-satiation.2

2 The No Arbitrage Principle is also known as the No Free Lunch Principle, or the Law of One Price.
Proof If the no arbitrage principle doesn't hold, then any individual can increase wealth without bound by exploiting the available arbitrage opportunity on an infinite scale and, since local non-satiation rules out bliss points, utility too can be increased without bound. Q.E.D.

When we come to consider equilibrium, we will see that if even one individual has preferences exhibiting local non-satiation, then equilibrium prices can not permit arbitrage opportunities.

The simple rule for figuring out how to exploit arbitrage opportunities is 'buy low, sell high'. With interest rates and currencies, this may be a non-trivial calculation, for example in a multi-period context: term structure of interest rates, covered interest parity, &c. Examples are usually in the financial markets. The most powerful application is in the derivation of option-pricing formulae, since options can be shown to be identical to various synthetic portfolios made up of the underlying security and the riskfree security.

Exercise: If the interest rate for one-year deposits or loans is r1 per annum compounded annually, the interest rate for two-year deposits or loans is r2 per annum compounded annually, and the forward interest rate for one-year deposits or loans beginning in one year's time is f12 per annum compounded annually, calculate the relationship that must hold between these three rates if there are to be no arbitrage opportunities.

Solution: (1 + r1)(1 + f12) = (1 + r2)².

4.4.3 Other Properties of Marshallian demand

Other noteworthy properties of Marshallian demand include the following:

1. If preferences exhibit local non-satiation, then the budget constraint is binding. This is because no consumption vector in the interior of the budget set can maximise utility, as some nearby consumption vector will always be both preferred and affordable. At the optimum, on the budget hyperplane, the nearby consumption vector which is preferred will not be affordable.

2. Marshallian demand is not well-defined if the price vector permits arbitrage opportunities. Similarly, if p includes a zero price (p_n = 0 for some n), then x_h(p, M_h) may not be well defined, since there is then effectively no budget constraint on the free good.
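The solution to the exercise can be checked in a couple of lines. This is a hedged numeric illustration of the relation (1 + r1)(1 + f12) = (1 + r2)²; the particular rates used are hypothetical.

```python
# The no-arbitrage relation from the exercise: (1 + r1)(1 + f12) = (1 + r2)^2.
def implied_forward_rate(r1, r2):
    """One-year forward rate for year 2 implied by the one- and two-year
    spot rates (annual compounding)."""
    return (1 + r2) ** 2 / (1 + r1) - 1

# Hypothetical rates: 4% one-year spot, 5% two-year spot.
f12 = implied_forward_rate(0.04, 0.05)

# Rolling a one-year deposit into the forward must cost the same as the
# two-year deposit, otherwise 'buy low, sell high' yields a free lunch.
rolled = (1 + 0.04) * (1 + f12)
two_year = 1.05 ** 2
```

With these numbers the implied forward rate is a little over 6%, reflecting the upward-sloping hypothetical term structure.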
This is because the consumer will seek to acquire and consume an infinite amount of the free good, thereby increasing utility without bound. For this reason, it is neater to define Marshallian demand only on the open positive orthant in ℝ^N, namely ℝ^N_++.

3. The demand x_h(p, M_h) is homogenous of degree 0 in p, M_h. In other words, if all prices and income are multiplied by α > 0, then demand does not change:

    x_h(αp, αM_h) = x_h(p, M_h).    (4.4.4)

4. The demand x_h(p, M_h) is independent of the representation u_h of the underlying preference relation ⪰_h which is used in the statement of the consumer's problem.

5. Demand functions are continuous. This follows from the theorem of the maximum (Theorem 3.4). It follows that small changes in prices or income will lead to small changes in quantities demanded.

4.4.4 The dual problem

Consider also the (dual) expenditure minimisation problem:

    min_x p·x s.t. u_h(x) ≥ ū.    (4.4.5)

In other words, what happens if expenditure is minimised subject to a certain level of utility, ū, being attained? The solution (optimal response function) is called the Hicksian or compensated demand function (or correspondence) and is usually denoted h_h(p, ū).

If the local non-satiation axiom holds, then the constraints are binding in both the utility-maximisation and expenditure-minimisation problems. In that case, at least in the case of strongly monotonic preferences, there will be a one-to-one correspondence between income M and utility u for a given price vector p. The expenditure function and the indirect utility function will then act as a pair of inverse envelope functions mapping utility levels to income levels and vice versa respectively, and we have a number of duality relations.

There should really be a more general discussion of duality, based on the mean-variance problem as well as the utility maximisation/expenditure minimisation problems.
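The dual problem also admits a closed form in the Cobb-Douglas case. The following is a hedged sketch under assumed preferences u(x) = x1^a · x2^(1−a) (not an example from the notes): it evaluates the textbook Hicksian demands and checks that the utility constraint binds, so that p·h(p, ū) is the minimised expenditure.

```python
# Hedged sketch of the expenditure minimisation problem for assumed
# Cobb-Douglas utility u(x) = x1^a * x2^(1-a). The standard closed forms:
#   h1 = ubar * (a*p2 / ((1-a)*p1))^(1-a)
#   h2 = ubar * ((1-a)*p1 / (a*p2))^a
def hicksian_demand(a, p, ubar):
    h1 = ubar * (a * p[1] / ((1 - a) * p[0])) ** (1 - a)
    h2 = ubar * ((1 - a) * p[0] / (a * p[1])) ** a
    return [h1, h2]

a, p, ubar = 0.4, (2.0, 5.0), 10.0
h = hicksian_demand(a, p, ubar)

# The utility constraint binds: u(h(p, ubar)) = ubar.
u_at_h = h[0] ** a * h[1] ** (1 - a)

# The minimised outlay, i.e. the expenditure function e(p, ubar) = p . h.
expenditure = p[0] * h[0] + p[1] * h[1]
```

Because the constraint binds exactly, the computed bundle lies on the ū indifference curve, and its cost is the value of the expenditure function at (p, ū).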
A full page table setting out exactly the parallels between the two problems is called for here.

4.4.5 Properties of Hicksian demands

As in the Marshallian approach, if preferences are strictly convex, then any solution to the expenditure minimisation problem is unique and the Hicksian demands are well-defined single-valued functions. If two different consumption vectors minimise expenditure, then they both cost the same amount, and any convex combination of the two also costs the same amount. But by strict convexity, a convex combination yields higher utility, and nearby there must, by continuity, be a cheaper consumption vector still yielding utility ū. It's worth going back to the uniqueness proof with this added interpretation. If preferences are not strictly convex, then Hicksian demands may be correspondences rather than functions.

Hicksian demands are homogenous of degree 0 in prices:

    h_h(αp, ū) = h_h(p, ū).    (4.4.6)

4.5 Envelope Functions: Indirect Utility and Expenditure

Now consider the envelope functions corresponding to the two approaches:

1. The indirect utility function:

    v_h(p, M) ≡ u_h(x_h(p, M)).    (4.5.1)

2. The expenditure function:

    e_h(p, ū) ≡ p·h_h(p, ū).    (4.5.2)

The following duality relations (or fundamental identities as ? calls them) will prove extremely useful later on:

    e(p, v(p, M)) = M    (4.4.7)
    v(p, e(p, ū)) = ū    (4.4.8)
    x(p, M) = h(p, v(p, M))    (4.4.9)
    h(p, ū) = x(p, e(p, ū))    (4.4.10)

These are just equations (3.5.5)–(3.5.8) adapted to the notation of the consumer's problem.

Sometimes we meet two other related functions:
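The first duality relation, e(p, v(p, M)) = M, can be verified numerically. This is a hedged check under assumed Cobb-Douglas preferences u(x) = x1^a · x2^(1−a), using the standard closed forms for the two envelope functions (the numbers are hypothetical).

```python
# Hedged numeric check of the duality relation e(p, v(p, M)) = M for
# assumed Cobb-Douglas utility u(x) = x1^a * x2^(1-a).
def indirect_utility(a, p, M):
    """Standard closed form: v(p, M) = (a*M/p1)^a * ((1-a)*M/p2)^(1-a)."""
    return (a * M / p[0]) ** a * ((1 - a) * M / p[1]) ** (1 - a)

def expenditure(a, p, ubar):
    """Standard closed form: e(p, ubar) = ubar * (p1/a)^a * (p2/(1-a))^(1-a)."""
    return ubar * (p[0] / a) ** a * (p[1] / (1 - a)) ** (1 - a)

a, p, M = 0.4, (2.0, 5.0), 100.0

# Minimising the cost of reaching the maximised utility level v(p, M)
# should take us back exactly to the original income M.
roundtrip = expenditure(a, p, indirect_utility(a, p, M))
```

The round trip through the two envelope functions returns the original income, illustrating that they are a pair of inverse mappings between income and utility levels for fixed p.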
3. The money metric utility function

    m_h(p, x) ≡ e_h(p, u_h(x))    (4.5.3)

is the (least) cost at prices p of being as well off as with the consumption vector x.

4. The money metric indirect utility function

    μ_h(p; q, M) ≡ e_h(p, v_h(q, M))    (4.5.4)

is the (least) cost at prices p of being as well off as if prices were q and income was M.

The following are interesting properties of the indirect utility function:

1. By the Theorem of the Maximum, the indirect utility function is continuous for positive prices and income.

2. The indirect utility function is non-increasing in p and non-decreasing in M.

3. The indirect utility function v_h(p, M) is homogenous of degree zero in p, M: v_h(λp, λM) = v_h(p, M).

4. The indirect utility function is quasi-convex in prices, i.e.

    v_h(pλ, M) ≤ max{v_h(p, M), v_h(p′, M)},

where pλ ≡ λp + (1 − λ)p′. To see this, let B(p) denote the budget set when prices are p. Then B(pλ) ⊆ B(p) ∪ B(p′). Suppose this was not the case, i.e. for some x, pλ·x ≤ M but p·x > M and p′·x > M. Then taking a convex combination of the last two inequalities yields λp·x + (1 − λ)p′·x > M, which contradicts the first inequality. It follows that the maximum value of u_h(x) on the subset B(pλ) is less than or equal to its maximum value on the superset B(p) ∪ B(p′), or that v_h is quasiconvex.
The following are interesting properties of the expenditure function:

1. The expenditure function is homogenous of degree 1 in prices:

    e_h(αp, ū) = α e_h(p, ū).    (4.5.5)

2. The expenditure function itself is non-decreasing in prices, since raising the price of one good while holding the prices of all other goods constant can not reduce the minimum cost of attaining a fixed utility level.

3. The expenditure function is continuous.

4. The expenditure function is concave in prices. To see this, we just fix two price vectors p and p′ and consider the value of the expenditure function at the convex combination pλ ≡ λp + (1 − λ)p′:

    e(pλ, ū) = (pλ)·h(pλ, ū)    (4.5.6)
             = λp·h(pλ, ū) + (1 − λ)p′·h(pλ, ū)    (4.5.7)
             ≥ λp·h(p, ū) + (1 − λ)p′·h(p′, ū)    (4.5.8)
             = λe(p, ū) + (1 − λ)e(p′, ū),    (4.5.9)

where the inequality follows because the cost of a suboptimal bundle for the given prices must be greater than the cost of the optimal (expenditure-minimising) consumption vector for those prices.

4.6 Further Results in Demand Theory

In this section, we present four important theorems on demand functions and the corresponding envelope functions. Shephard's Lemma will allow us to recover Hicksian demands from the expenditure function. Similarly, Roy's Identity will allow us to recover Marshallian demands from the indirect utility function. The Slutsky symmetry condition and the Slutsky equation provide further insights into the properties of consumer demand.

Theorem 4.6.1 (Shephard's Lemma.) The partial derivatives of the expenditure function with respect to prices are the corresponding Hicksian demand functions:

    ∂e_h/∂p^n (p, ū) = h^n_h(p, ū).    (4.6.1)
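Shephard's Lemma can be checked by finite differences before turning to the formal proof. This is a hedged numeric illustration under assumed Cobb-Douglas preferences (the closed forms and parameter values are assumptions, not material from the notes): the central-difference derivative of the expenditure function with respect to p1 should reproduce the Hicksian demand for good 1.

```python
# Hedged finite-difference check of Shephard's Lemma for the assumed
# Cobb-Douglas expenditure function e(p, ubar) = ubar*(p1/a)^a*(p2/(1-a))^(1-a).
def expenditure(a, p1, p2, ubar):
    return ubar * (p1 / a) ** a * (p2 / (1 - a)) ** (1 - a)

a, p1, p2, ubar, eps = 0.4, 2.0, 5.0, 10.0, 1e-6

# Central difference approximation to the price derivative de/dp1.
de_dp1 = (expenditure(a, p1 + eps, p2, ubar)
          - expenditure(a, p1 - eps, p2, ubar)) / (2 * eps)

# Closed-form Hicksian demand for good 1 under the same assumed preferences.
h1 = ubar * (a * p2 / ((1 - a) * p1)) ** (1 - a)
```

The two numbers agree to the accuracy of the finite-difference scheme, as the lemma predicts.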
Proof By differentiating the expenditure function with respect to the price of good n and applying the envelope theorem, we obtain Shephard's Lemma:

    ∂e_h/∂p^n (p, ū) = ∂/∂p^n [p·x + λ(u_h(x) − ū)] = x^n,

which, when evaluated at the optimum, is just h^n(p, ū). (To apply the envelope theorem, we should be dealing with an equality constrained optimisation problem; however, if we assume local non-satiation, we know that the budget constraint or utility constraint will always be binding, and so the inequality constrained expenditure minimisation problem is essentially an equality constrained problem.) Q.E.D.

Theorem 4.6.2 (Roy's Identity.) Marshallian demands may be recovered from the indirect utility function using:

    x^n(p, M) = − (∂v/∂p^n)(p, M) / (∂v/∂M)(p, M).    (4.6.3)

Proof For Roy's Identity, see ?. It is obtained by differentiating the duality relation v(p, e(p, ū)) = ū with respect to p^n, using the Chain Rule:

    (∂v/∂p^n)(p, e(p, ū)) + (∂v/∂M)(p, e(p, ū)) (∂e/∂p^n)(p, ū) = 0,

and using Shephard's Lemma gives:

    (∂v/∂p^n)(p, e(p, ū)) + (∂v/∂M)(p, e(p, ū)) h^n(p, ū) = 0.

Hence

    h^n(p, e(p, ū)) = − (∂v/∂p^n)(p, e(p, ū)) / (∂v/∂M)(p, e(p, ū)),

and expressing this last equation in terms of the relevant level of income M rather than the corresponding value of utility ū yields

    x^n(p, M) = − (∂v/∂p^n)(p, M) / (∂v/∂M)(p, M).

Q.E.D.
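Roy's Identity lends itself to the same kind of numeric check. This is a hedged sketch under the assumed Cobb-Douglas indirect utility v(p, M) = (aM/p1)^a · ((1−a)M/p2)^(1−a): the ratio of finite-difference derivatives should reproduce the closed-form Marshallian demand x1 = aM/p1.

```python
# Hedged finite-difference check of Roy's Identity for an assumed
# Cobb-Douglas indirect utility function.
def v(a, p1, p2, M):
    return (a * M / p1) ** a * ((1 - a) * M / p2) ** (1 - a)

a, p1, p2, M, eps = 0.4, 2.0, 5.0, 100.0, 1e-6

# Central differences for the two partial derivatives of v.
dv_dp1 = (v(a, p1 + eps, p2, M) - v(a, p1 - eps, p2, M)) / (2 * eps)
dv_dM = (v(a, p1, p2, M + eps) - v(a, p1, p2, M - eps)) / (2 * eps)

x1_roy = -dv_dp1 / dv_dM      # Roy's Identity
x1_direct = a * M / p1        # closed-form Marshallian demand (= 20 here)
```

Both routes give the same demand for good 1, up to finite-difference error.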
Theorem 4.6.3 (Slutsky symmetry condition.) All cross-price substitution effects are symmetric:

    ∂h^n_h/∂p^m = ∂h^m_h/∂p^n,    m, n = 1, ..., N.    (4.6.8)

Proof From Shephard's Lemma, h^m_h = ∂e_h/∂p^m and h^n_h = ∂e_h/∂p^n. Assuming that the expenditure function is twice continuously differentiable, we can easily derive the Slutsky symmetry conditions, since

    ∂²e_h/∂p^m∂p^n = ∂²e_h/∂p^n∂p^m,

and the result follows. Q.E.D.

The next result doesn't really have a special name of its own.

Theorem 4.6.4 Since the expenditure function is concave in prices (see p. 75), the corresponding Hessian matrix is negative semi-definite. In particular, its diagonal entries are non-positive, or

    ∂²e_h/∂(p^n)² ≤ 0,    n = 1, ..., N.

Using Shephard's Lemma, it follows that

    ∂h^n_h/∂p^n ≤ 0,    n = 1, ..., N.    (4.6.11)

In other words, Hicksian demand functions, unlike Marshallian demand functions, are uniformly non-increasing in own price. Another way of saying this is that own price substitution effects are always non-positive.

Theorem 4.6.5 (Slutsky equation.) The total effect of a price change on (Marshallian) demand can be decomposed as follows into a substitution effect and an income effect:

    ∂x^m/∂p^n (p, M) = ∂h^m/∂p^n (p, ū) − ∂x^m/∂M (p, M) h^n(p, ū),    (4.6.12)

where ū ≡ V(p, M).

Before proving this, let's consider the signs of the various terms in the Slutsky equation and look at what it means in a two-good example. From Theorem 4.6.4, we know that own price substitution effects are always non-positive. [This is still on a handwritten sheet.]
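Before the formal proof, the decomposition can be verified for a concrete case. This is a hedged check under assumed Cobb-Douglas preferences, where every term of the Slutsky equation has a closed form (the parameter values are hypothetical): the total own-price effect on Marshallian demand should equal the Hicksian substitution effect minus the income effect.

```python
# Hedged numeric check of the Slutsky equation for assumed Cobb-Douglas
# preferences u(x) = a*ln(x1) + (1-a)*ln(x2):
#   dx1/dp1 = dh1/dp1 - (dx1/dM) * h1   at ubar = v(p, M).
a, p1, p2, M = 0.4, 2.0, 5.0, 100.0

x1 = a * M / p1                  # Marshallian demand for good 1 (= h1 at ubar)
dx1_dp1 = -a * M / p1 ** 2       # total own-price effect
dx1_dM = a / p1                  # income responsiveness
dh1_dp1 = -(1 - a) * x1 / p1     # substitution (Hicksian) effect at ubar = v(p, M)

slutsky_rhs = dh1_dp1 - dx1_dM * x1
```

Here the total effect is −10, split into a substitution effect of −6 and an income effect of −4; note that the substitution effect is non-positive, as Theorem 4.6.4 requires.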
Proof Differentiating both sides of the mth component of the duality relation h^m(p, ū) = x^m(p, e(p, ū)) with respect to p^n, using the Chain Rule, will yield the so-called Slutsky equation, which decomposes the total effect on demand of a price change into an income effect and a substitution effect. Differentiating yields:

    ∂h^m/∂p^n (p, ū) = ∂x^m/∂p^n (p, e(p, ū)) + ∂x^m/∂M (p, e(p, ū)) ∂e/∂p^n (p, ū).    (4.6.13)

To complete the proof:

1. define M ≡ e(p, ū) (which implies that ū ≡ V(p, M));

2. substitute from Shephard's Lemma, ∂e/∂p^n (p, ū) = h^n(p, ū);

3. rearrange to isolate ∂x^m/∂p^n (p, M).

Q.E.D.

4.7 General Equilibrium Theory3

4.7.1 Walras' law

4.7.2 Brouwer's fixed point theorem

4.7.3 Existence of equilibrium

3 This material still exists only in handwritten form in Alan White's EC3080 notes from 1991-2. One thing missing from the handwritten notes is Kakutani's Fixed Point Theorem, which should be quoted from ?.

4.8 The Welfare Theorems

4.8.1 The Edgeworth box

4.8.2 Pareto efficiency

Definition 4.8.1 A feasible allocation X = (x_1, ..., x_H) is Pareto efficient if there does not exist any feasible way of reallocating the same initial aggregate endowment, Σ_{h=1}^H x_h, which makes one individual better off without making any other worse off.

Definition 4.8.2 X is Pareto dominated by X′ = (x′_1, ..., x′_H) if Σ_{h=1}^H x′_h = Σ_{h=1}^H x_h, x′_h ⪰_h x_h ∀h and x′_h ≻_h x_h for at least one h.
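The Edgeworth box setting can be illustrated by computing an equilibrium explicitly. This is a hedged sketch, not from the notes: a two-agent, two-good pure exchange economy with assumed Cobb-Douglas preferences u_h(x) = a_h·ln(x1) + (1 − a_h)·ln(x2), where normalising p2 = 1 lets the market-clearing price of good 1 be solved in closed form as p1 = Σ_h a_h·e2_h / Σ_h (1 − a_h)·e1_h.

```python
# Hedged sketch: competitive equilibrium of a two-good pure exchange
# (Edgeworth box) economy with assumed Cobb-Douglas preferences.
def equilibrium(agents):
    """agents: list of (a_h, (e1_h, e2_h)) pairs; good 2 is the numeraire."""
    p1 = (sum(a * e2 for a, (e1, e2) in agents)
          / sum((1 - a) * e1 for a, (e1, e2) in agents))
    allocation = []
    for a, (e1, e2) in agents:
        wealth = p1 * e1 + e2
        # Cobb-Douglas Marshallian demands at prices (p1, 1):
        allocation.append((a * wealth / p1, (1 - a) * wealth))
    return p1, allocation

# Hypothetical symmetric economy: each agent owns all of one good.
p1, alloc = equilibrium([(0.5, (10.0, 0.0)), (0.5, (0.0, 10.0))])

# Market clearing: demands for good 1 sum to its aggregate endowment.
good1_total = sum(x1 for x1, x2 in alloc)
```

In this symmetric example the equilibrium relative price is 1 and each agent consumes (5, 5), the midpoint of the contract curve in the box.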
4.8.3 The First Welfare Theorem

(See ?.)

Theorem 4.8.1 (First Welfare Theorem) If the pair (p, X) is an equilibrium (for given preferences ⪰_h, which exhibit local non-satiation, and given endowments e_h, h = 1, ..., H), then X is a Pareto efficient allocation.

Proof The proof is by contradiction. Suppose that X is an equilibrium allocation which is Pareto dominated by a feasible allocation X′.

If individual h is strictly better off under X′, or x′_h ≻_h x_h, then it follows that individual h cannot afford x′_h at the equilibrium prices p, or

    p·x′_h > p·x_h = p·e_h.    (4.8.1)

The latter equality is just the budget constraint, which is binding since we have assumed local non-satiation. Similarly, if individual h is indifferent between X and X′, or x′_h ∼_h x_h, then it follows that

    p·x′_h ≥ p·x_h = p·e_h,    (4.8.2)

since if x′_h cost strictly less than x_h, then by local non-satiation some consumption vector near enough to x′_h to also cost less than x_h would be strictly preferred to x_h, and x_h would not maximise utility given the budget constraint.

Summing (4.8.1) and (4.8.2) over households yields

    p·Σ_{h=1}^H x′_h > p·Σ_{h=1}^H x_h = p·Σ_{h=1}^H e_h.    (4.8.3)

But since X′ is feasible, we must have for each good n

    Σ_{h=1}^H x′^n_h ≤ Σ_{h=1}^H e^n_h

and, hence, multiplying by prices and summing over all goods,

    p·Σ_{h=1}^H x′_h ≤ p·Σ_{h=1}^H e_h.    (4.8.4)

But (4.8.4) contradicts the inequality in (4.8.3) (where the equality is essentially Walras' Law), so no such Pareto dominant allocation X′ can exist. Q.E.D.

Before proceeding to the second welfare theorem, we need to say a little bit about separating hyperplanes.
4.8.4 The Separating Hyperplane Theorem

Definition 4.8.3 The set {z ∈ ℝ^N : p·z = p·z*} is the hyperplane through z* with normal p.

Note that any hyperplane divides ℝ^N into two closed half-spaces,

    {z ∈ ℝ^N : p·z ≤ p·z*} and {z ∈ ℝ^N : p·z ≥ p·z*}.

The intersection of these two closed half-spaces is the hyperplane itself. In two dimensions, a hyperplane is just a line; in three dimensions, it is just a plane.

The idea behind the separating hyperplane theorem is quite intuitive: if we take any point on the boundary of a convex set, then we can find a hyperplane through that point so that the entire convex set lies on one side of that hyperplane.

Theorem 4.8.2 (Separating Hyperplane Theorem) If Z is a convex subset of ℝ^N and z* ∈ Z, z* ∉ int Z, then ∃ p* ≠ 0 in ℝ^N such that p*·z* ≤ p*·z ∀z ∈ Z, or Z is contained in one of the closed half-spaces associated with the hyperplane through z* with normal p*.

Proof Not given. See ?. Q.E.D.

We will interpret the separating hyperplane as a budget hyperplane, and the normal vector as a price vector. We will essentially be applying this notion to the upper contour sets of quasiconcave utility functions, which are of course convex sets, so that at those prices nothing giving higher utility than the cutoff value is affordable.

4.8.5 The Second Welfare Theorem

(See ?.) We make slightly stronger assumptions than are essential for the proof of this theorem. This allows us to give an easier proof.

Theorem 4.8.3 (Second Welfare Theorem) If all individual preferences are strictly convex, continuous and strictly monotonic, and if X* is a Pareto efficient allocation such that all households are allocated positive amounts of all goods (x*^g_h > 0 ∀g = 1, ..., N, h = 1, ..., H), then a reallocation of the initial aggregate endowment can yield an equilibrium where the allocation is X*.
Proof There are four main steps in the proof. First we construct a set of utility-enhancing endowment perturbations, and use the separating hyperplane theorem to find prices at which no such endowment perturbation is affordable.

1. Given an aggregate initial endowment x* = Σ_{h=1}^H x*_h, we interpret any vector of the form z = Σ_{h=1}^H x_h − x* as an endowment perturbation. Now consider the set of all ways of changing the aggregate endowment without making anyone worse off:

    Z ≡ {z ∈ ℝ^N : ∃x^g_h ≥ 0 ∀g, h s.t. u_h(x_h) ≥ u_h(x*_h) ∀h & z = Σ_{h=1}^H x_h − x*}.    (4.8.5)

Z is a sum of convex sets provided that preferences are assumed to be convex:

    Z = Σ_{h=1}^H X_h − {x*},

where X_h ≡ {x_h : u_h(x_h) ≥ u_h(x*_h)}. We need to use the fact (Theorem 3.2.1) that a sum of convex sets, such as X + Y ≡ {x + y : x ∈ X, y ∈ Y}, is also a convex set. Note also (although I'm no longer sure why this is important) that budget constraints are binding and that there are no free goods (by the monotonicity assumption).

2. Next, we need to show that the zero vector is in the set Z, but not in the interior of Z. To show that 0 ∈ Z, we just set x_h = x*_h and observe that 0 = Σ_{h=1}^H x*_h − x*. The zero vector is not, however, in the interior of Z, since then Z would contain some vector, say z*, in which all components were strictly negative. In other words, we could take away some of the aggregate endowment of every good without making anyone worse off than under the allocation X*. But by then giving −z* back to one individual, he or she could be made better off without making anyone else worse off, again using the assumption that preferences are strictly monotonic, contradicting Pareto optimality.
3. So, applying the Separating Hyperplane Theorem with z* = 0, we have a price vector p* ≠ 0 such that 0 = p*·0 ≤ p*·z ∀z ∈ Z. Since preferences are monotonic, the set Z must contain all the standard unit basis vectors ((1, 0, ..., 0), &c.). This fact can be used to show that all components of p* are non-negative, which is essential if it is to be interpreted as an equilibrium price vector.

4. Next, we specify one way of redistributing the initial endowment in order that the desired prices and allocation emerge as a competitive equilibrium. All we need to do is value endowments at the equilibrium prices, and redistribute the aggregate endowment of each good to consumers in proportion to their share in aggregate wealth computed in this way. Finally, we confirm that utility is maximised at these prices by the given Pareto efficient allocation X*. As usual, the proof is by contradiction: the details are left as an exercise.

Q.E.D.

4.8.6 Complete markets

The First Welfare Theorem tells us that competitive equilibrium allocations are Pareto optimal if markets are complete. If there are missing markets, then competitive trading may not lead to a Pareto optimal allocation.

4.8.7 Other characterizations of Pareto efficient allocations

There are a total of five equivalent characterisations of Pareto efficient allocations. We can use the Edgeworth Box diagram to illustrate the simplest possible version of this principle.

Theorem 4.8.4 Each of the following is an equivalent description of the set of allocations which are Pareto efficient:

1. by definition, feasible allocations such that no other allocation strictly increases at least one individual's utility without decreasing the utility of any other individual;

2. in two dimensions, allocations lying on the contract curve in the Edgeworth box;

3. by the Welfare Theorems, equilibrium allocations for all possible distributions of the fixed initial aggregate endowment;
4. allocations which solve

max_{x_h : h=1,...,H} Σ_{h=1}^H λ_h u_h(x_h)  (4.8.6)

subject to the feasibility constraints

Σ_{h=1}^H x_h = Σ_{h=1}^H e_h  (4.8.7)

for some non-negative weights {λ_h}_{h=1}^H;

5. allocations which solve

max u_1(x_1)  (4.8.9)
s.t. u_h(x_h) = u_h(x∗_h), h = 2, . . . , H,  (4.8.10)

and the feasibility constraints (4.8.7); this problem has a Lagrangean with terms of the form

Σ_{h=1}^H λ_h u_h(x_h),  (4.8.8)

where {λ_h}_{h=1}^H are again any non-negative weights, since these two problems will have the same necessary and sufficient first order conditions.

Proof If an allocation is not Pareto efficient, then the Pareto-dominating allocation gives a higher value of the objective function in the above problem for all possible weights. If an allocation is Pareto efficient, then it solves the above problem for appropriate weights. The solution here would be unique if the underlying utility function were concave, since linear combinations of concave functions with non-negative weights are concave, and the constraints specify a convex set on which the objective function has a unique optimum. This argument can not be used with merely quasiconcave utility functions.

The absolute weights corresponding to a particular allocation are not unique, as they can be multiplied by any positive constant without affecting the maximum. Different absolute weights (or Lagrange multipliers) arise from fixing different individuals' utilities in the last problem, but the relative weights will be the same.
Q.E.D.

Note that corresponding to each Pareto efficient allocation there is at least one:

1. initial allocation leading to the competitive equilibrium in 3; and

2. set of non-negative weights defining (a) the objective function in 4, and (b) the representative agent in 5.

4.9 Multi-period General Equilibrium

In Section 4.2, it was pointed out that the objects of choice can be differentiated not only by their physical characteristics, but also both by the time at which they are consumed and by the state of nature in which they are consumed. These distinctions were suppressed in the intervening sections but are considered again in this section and in Section 5.4 respectively. The multi-period model should probably be introduced at the end of Chapter 4 but could also be left until Chapter 7; for the moment this brief introduction is duplicated in both chapters. Discrete time multi-period investment problems serve as a stepping stone from the single period case to the continuous time case. The main point to be gotten across is the derivation of interest rates from equilibrium prices: spot rates, forward rates, term structure, etc. This is covered in one of the problems, which illustrates the link between prices and interest rates in a multi-period model.
Chapter 5

CHOICE UNDER UNCERTAINTY

5.1 Introduction

[To be written.]

5.2 Review of Basic Probability

Economic theory has, over the years, used many different, sometimes overlapping, sometimes mutually exclusive, approaches to the analysis of choice under uncertainty. This chapter deals with choice under uncertainty exclusively in a single period context: trade takes place at the beginning of the period and uncertainty is resolved at the end of the period. This framework is sufficient to illustrate the similarities and differences between the most popular approaches. When we consider consumer choice under uncertainty, consumption plans will have to specify a fixed consumption vector for each possible state of nature or state of the world. This just means that each consumption plan is a random vector.

Let us review the associated concepts from basic probability theory: probability space, random variables and vectors, and stochastic processes. Let Ω denote the set of all possible states of the world, called the sample space. A collection of states of the world, A ⊆ Ω, is called an event. Let A be a collection of events in Ω such that:

(a) Ω ∈ A
(b) A ∈ A ⇒ Ω − A ∈ A
(c) A_i ∈ A for i = 1, 2, . . . , ∞ ⇒ ∪_{i=1}^∞ A_i ∈ A

(i.e. A is a sigma-algebra of events). The function P : A → [0, 1] is a probability function if:
Definition 5.4.1 A state contingent claim or Arrow-Debreu security is a random variable or lottery which takes the value 1 in one particular state of nature and the value 0 in all other states.

Definition 5.4.2 A complex security is a random variable or lottery which can take on arbitrary values. The set of all complex securities on a given finite sample space is an M-dimensional vector space, and the M possible Arrow-Debreu securities constitute the standard basis for this vector space.

Consider a world with M possible states of nature (distinguished by a first subscript), markets for N securities (distinguished by a second subscript) and H consumers (distinguished by a superscript). (Check for consistency in subscripting etc. in what follows.) The underlying sample space here comprises a finite number of states of nature; a more thorough analysis of choice under uncertainty, allowing for infinite and continuous sample spaces and based on additional axioms of choice, follows later in the chapter.

State contingent claims prices are determined by the market clearing equations in a general equilibrium model:

Aggregate consumption in state i = Aggregate endowment in state i.

Each individual will have an optimal consumption choice depending on endowments and preferences and conditional on the state of the world. Optimal future consumption is denoted

x∗ = (x∗_1, x∗_2, . . . , x∗_M)′.

The payoffs of a typical complex security will be represented by a column vector, y_j ∈ R^M (or maybe I mean its transpose), where y_ij is the payoff in state i of security j. Let Y be the M × N matrix whose jth column contains the payoffs of the jth complex security in each of the M states of nature:

Y ≡ (y_1, y_2, . . . , y_N).  (5.4.1)

If there are N complex securities, then the investor must find a portfolio w = (w_1, . . . , w_N) whose payoffs satisfy

x∗_i = Σ_{j=1}^N y_ij w_j.  (5.4.2)
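As a small numerical illustration of (5.4.2) (my own sketch, with made-up payoffs, not from the text): the replicating portfolio solves the linear system Y w = x∗, and with two states and two securities the 2×2 inverse can be written out by hand.

```python
# Two states, two securities (a hypothetical example): a riskless bond paying
# 1 in both states and a stock paying 2 in state 1 and 0.5 in state 2.
Y = [[1.0, 2.0],    # state-1 payoffs of (bond, stock)
     [1.0, 0.5]]    # state-2 payoffs of (bond, stock)

x_star = [1.0, 0.0]  # target payoff: the Arrow-Debreu claim on state 1

# Markets are complete iff Y is non-singular; invert the 2x2 system Y w = x*.
det = Y[0][0] * Y[1][1] - Y[0][1] * Y[1][0]
assert det != 0, "singular payoff matrix: markets incomplete"

w = [(Y[1][1] * x_star[0] - Y[0][1] * x_star[1]) / det,
     (Y[0][0] * x_star[1] - Y[1][0] * x_star[0]) / det]

# The portfolio w (short 1/3 of a bond, long 2/3 of a stock) replicates x*.
payoff = [Y[i][0] * w[0] + Y[i][1] * w[1] for i in range(2)]
print(w, payoff)
```

The same inversion extends to any number of states once Y is square and non-singular, which is exactly the completeness condition of Theorem 5.4.1 below.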
Theorem 5.4.1 If there are M complex securities (M = N) and the payoff matrix Y is non-singular, then markets are complete.

Proof Suppose the optimal trade for consumer i in state j is x_ij − e_ij. Then we can invert Y to work out the optimal trades in terms of complex securities. Q.E.D.

Either a singular square matrix or < N complex securities would lead to incomplete markets; an (N + 1)st security would be redundant. In real-world markets, the number of linearly independent corporate securities is probably less than M. However, options on corporate securities may be sufficient to form complete markets, and thereby ensure allocational (Pareto) efficiency for arbitrary preferences. So far, we have made no assumptions about the form of the utility function, written purely as u(x_0, x_1, x_2, . . . , x_N), where x_0 represents the quantity consumed at date 0 and x_i (i > 0) represents the quantity consumed at date 1 if state i materialises. We now present some results, following ?, showing conditions under which trading in a state index portfolio and in options on the state index portfolio can lead to the Pareto optimal complete markets equilibrium allocation.

5.4.1 Completion of markets using options

Assume that there exists a state index portfolio, Y, yielding different non-zero payoffs in each state (i.e. a portfolio with a different payout in each state of nature, possibly one mimicking aggregate consumption). A European call option with exercise price K is an option to buy a security for K on a fixed date. (An American call option is an option to buy on or before the fixed date; a put option is an option to sell.) WLOG we can rank the states so that Y_i < Y_j if i < j. Further assume that ∃ M − 1 European call options on Y with exercise prices Y_1, Y_2, . . . , Y_{M−1}.
Here, the original state index portfolio and the M − 1 European call options yield the payoff matrix:

( y_1  0        0        . . .  0
  y_2  y_2−y_1  0        . . .  0
  y_3  y_3−y_1  y_3−y_2  . . .  0
  .    .        .               .
  y_M  y_M−y_1  y_M−y_2  . . .  y_M−y_{M−1} )  (5.4.3)

with columns corresponding to the security Y, call option 1, call option 2, . . . , call option M − 1, and as this matrix is non-singular, we have constructed a complete market. Instead of assuming that a state index portfolio exists, we can assume identical probability beliefs and state-independent utility and complete markets in a similar manner (see below).

Let the agreed probability of the event Ω_k (i.e. of aggregate consumption taking the value k) be:

π(k) = Σ_{ω∈Ω_k} π_ω.  (5.4.5)

By time-additivity and state-independence of the utility function:

φ_ω = π_ω u_i′(c_iω) / u_i0′(c_i0)  ∀ω ∈ Ω  (5.4.7)
    = π_ω u_i′(f_i(k)) / u_i0′(c_i0)  ∀ω ∈ Ω_k,  (5.4.8)

where f_i(k) denotes the i-th individual's equilibrium consumption in those states where aggregate consumption equals k.
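A quick numeric sketch (mine, with hypothetical index payoffs) of why the matrix in (5.4.3) is non-singular: the option columns max(y_i − K, 0) with strikes Y_1, . . . , Y_{M−1} make it lower triangular with strictly positive diagonal, so its determinant, the product of the diagonal entries, is non-zero.

```python
# Hypothetical state index payoffs, strictly increasing and non-zero.
y = [1.0, 2.0, 4.0, 7.0]           # y_1 < y_2 < y_3 < y_4
strikes = y[:-1]                   # exercise prices Y_1 ... Y_{M-1}

# Column 0 is the index itself; column k is the call struck at y_k.
M = len(y)
payoff = [[y[i]] + [max(y[i] - K, 0.0) for K in strikes] for i in range(M)]

# Lower triangular: in state i the calls struck at y_i or above pay nothing.
lower_triangular = all(payoff[i][j] == 0.0 for i in range(M)
                       for j in range(i + 1, M))

# Determinant of a triangular matrix = product of its diagonal entries.
det = 1.0
for i in range(M):
    det *= payoff[i][i]
print(lower_triangular, det)
```

The diagonal entries are y_1 and the successive gaps y_{k+1} − y_k, all positive because the state payoffs were ranked to be strictly increasing.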
Therefore, an arbitrary security x has value:

S_x = Σ_{ω∈Ω} φ_ω x_ω  (5.4.10)
    = Σ_k Σ_{ω∈Ω_k} φ_ω x_ω.  (5.4.11)

This all assumes:

1. identical probability beliefs;
2. time-additivity of u;
3. state-independent u.

5.4.3 Completing markets with options on aggregate consumption

Let {1, 2, . . . , L} be the set of possible values of aggregate consumption C(ω). Let x(k) be the vector of payoffs in the various possible states on a European call option on aggregate consumption with one period to maturity and exercise price k. Then payoffs are as given in Table 5.4.1.

[Table 5.4.1: Payoffs for Call Options on the Aggregate Consumption]

State-independence of the utility function is required for f_i(k) to be well-defined.
5.4.4 Replicating elementary claims with a butterfly spread

Elementary claims against aggregate consumption can be constructed as follows, using a butterfly spread:

[x(0) − x(1)] − [x(1) − x(2)]  (5.4.15)

yields the payoff:

(1, 2, 3, . . . , L)′ − 2 (0, 1, 2, . . . , L−1)′ + (0, 0, 1, . . . , L−2)′ = (1, 0, 0, . . . , 0)′,  (5.4.16)

i.e. for state 1, for example, this replicating portfolio pays 1 iff aggregate consumption is 1, and 0 otherwise. The prices of this, and the other elementary claims, must, by no arbitrage, equal the prices of the corresponding replicating portfolios.

5.5 The Expected Utility Paradigm

5.5.1 Further axioms

The objects of choice with which we are concerned in a world with uncertainty could still be called consumption plans, but we will acknowledge the additional structure now described by terming them lotteries. Again to distinguish the certainty and uncertainty cases, we let L denote the collection of lotteries under consideration; X will now denote the set of possible values of the lotteries in L. The possible states of the world are denoted by the set Ω. We assume a finite number of times, denoted by the set T. If there are k physical commodities, a consumption plan must specify a k-dimensional vector, x ∈ R^k, for each time and state of the world, i.e. a stochastic process. So a consumption plan or lottery is just a collection of |T| k-dimensional random vectors.
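The butterfly-spread identity (5.4.16) above is easy to verify numerically; a sketch of my own, with L = 5:

```python
# Payoff of a call on aggregate consumption with strike k, when aggregate
# consumption takes the values 1, 2, ..., L across states.
L = 5
def call(k):
    return [max(c - k, 0) for c in range(1, L + 1)]

# Butterfly spread [x(0) - x(1)] - [x(1) - x(2)] = x(0) - 2 x(1) + x(2):
butterfly = [a - 2 * b + c for a, b, c in zip(call(0), call(1), call(2))]
print(butterfly)      # pays 1 only in the states where consumption equals 1

# Shifting all three strikes up picks out any other consumption level:
claim_on_3 = [a - 2 * b + c for a, b, c in zip(call(2), call(3), call(4))]
print(claim_on_3)     # pays 1 only where consumption equals 3
```

Repeating this for every strike recovers the full set of elementary claims against aggregate consumption from traded options alone.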
We will continue to assume that preference relations are complete, reflexive, transitive and continuous. Then it can be shown that the earlier theory of choice under certainty carries through to choice under uncertainty; in particular, a preference relation can always be represented by a continuous utility function on L. Although we have moved from a finite-dimensional to an infinite-dimensional problem by explicitly allowing a continuum of states of nature, X can be identified with a subset of L, in that each sure thing in X can be identified with the trivial lottery that pays off that sure thing with probability (w.p.) 1. However, we would like utility functions to have a stronger property than continuity, namely the expected utility property.

Axiom 8 (Substitution or Independence Axiom) If a ∈ (0, 1] and p̃ ≻ q̃, then

a p̃ ⊕ (1 − a) r̃ ≻ a q̃ ⊕ (1 − a) r̃.

Axiom 9 (Archimedian Axiom) If p̃ ≻ q̃ ≻ r̃, then ∃ a, b ∈ (0, 1) s.t.

a p̃ ⊕ (1 − a) r̃ ≻ q̃ ≻ b p̃ ⊕ (1 − b) r̃.

(The Archimedian axiom is just a generalisation of the continuity axiom.)

Axiom 10 (Sure Thing Principle) If probability is concentrated on a set of sure things which are preferred to q̃, then the associated consumption plan is also preferred to q̃. (The Sure Thing Principle is just a generalisation of the Substitution Axiom.)

Now let us consider the Allais paradox. Suppose

1 £1m ≻ 0.1 £5m ⊕ 0.89 £1m ⊕ 0.01 £0.

Since the right-hand lottery can be rewritten as 0.89 £1m ⊕ 0.11 (10/11 £5m ⊕ 1/11 £0), the substitution axiom implies

1 £1m ≻ 10/11 £5m ⊕ 1/11 £0,

and then, by the substitution axiom again,

0.11 £1m ⊕ 0.89 £0 ≻ 0.1 £5m ⊕ 0.9 £0,

unless the substitution axiom is contradicted. If these rankings appear counterintuitive, then so does the independence axiom above. One justification for persisting with the independence axiom is provided by ?.
5.5.2 Existence of expected utility functions

A function u : R^{k×|T|} → R can be thought of as a utility function on sure things.

Definition 5.5.1 Let V : L → R be a utility function representing the preference relation ≽. Then ≽ is said to have an expected utility representation if there exists a utility function on sure things, u, such that

V({x̃_t}) = E[u({x̃_t})] = ∫ u({x_t}) dF_{{x̃_t}}({x_t}).

Such a representation will often be called a Von Neumann-Morgenstern (or VNM) utility function, after its originators (?), or just an expected utility function. Any strictly increasing transformation of a VNM utility function represents the same preferences. However, only increasing affine transformations, f(x) = a + bx (b > 0), retain the expected utility property; proof of this is left as an exercise.

We will now consider necessary and sufficient conditions on preference relations for an expected utility representation to exist.

Theorem 5.5.1 If X contains only a finite number of possible values, then the substitution and Archimidean axioms are necessary and sufficient for a preference relation to have an expected utility representation.

(If X is not finite, then an inductive argument can no longer be used and the Sure Thing Principle is required.)

Proof We will just sketch the proof that the axioms imply the existence of an expected utility representation; the proof of the converse is left as an exercise. For full details, see ?. Since X is finite, there must exist maximal and minimal sure things, say p+ and p− respectively. By the substitution axiom, these are maximal and minimal in L as well as in X. From the Archimedean axiom, and a simple inductive argument, it can be deduced that, unless the consumer is indifferent among all possible choices, for every other lottery p̃ there exists a unique V(p̃) such that

p̃ ∼ V(p̃) p+ ⊕ (1 − V(p̃)) p−.
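The affine-invariance point above can be illustrated numerically (an example of my own, with made-up lotteries): an increasing affine transform of u preserves expected-utility rankings, while a non-affine increasing transform of u can reverse them.

```python
import math

# Lottery A pays 1.9 for sure; lottery B pays 1 or 4 with equal probability.
A = [(1.9, 1.0)]
B = [(1.0, 0.5), (4.0, 0.5)]

def EU(u, lottery):
    """Expected utility of a finite lottery [(outcome, probability), ...]."""
    return sum(p * u(x) for x, p in lottery)

u1 = math.sqrt                          # a VNM utility on sure things
u2 = lambda x: 3 + 2 * math.sqrt(x)     # increasing affine transform of u1
u3 = lambda x: -1 / x                   # h(u1(x)) with h(t) = -1/t**2, increasing

print(EU(u1, B) > EU(u1, A))   # True: B preferred under u1
print(EU(u2, B) > EU(u2, A))   # True: affine transform preserves the ranking
print(EU(u3, B) > EU(u3, A))   # False: non-affine transform reverses it
```

All three functions represent the same preferences over sure things (each is increasing), but only u1 and u2 represent the same preferences over lotteries.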
It is easily seen that V represents ≽: every lottery can be reduced recursively to a two-outcome lottery (in terms of p+ and p−) when there are only a finite number of possible outcomes altogether. E.g., define z̃ ≡ π x̃ ⊕ (1 − π) ỹ; then, using the definitions of V(x̃) and V(ỹ), V(z̃) = πV(x̃) + (1 − π)V(ỹ). Q.E.D.

Theorem 5.5.2 For more general L, to these conditions must be added some technical conditions and the Sure Thing Principle.

Proof We will not consider the proof of this more general theorem; it can be found in ?. Q.E.D.

Note that expected utility depends only on the distribution function of the consumption plan. The basic objects of choice under expected utility are not consumption plans but classes of consumption plans with the same cumulative distribution function. Two consumption plans having very different consumption patterns across states of nature but the same probability distribution give the same utility. E.g., if wet days and dry days are equally likely, then an expected utility maximiser is indifferent between any consumption plan and the plan formed by switching consumption between wet and dry days.

Chapter 6 will consider the problem of portfolio choice in considerable depth. This chapter, however, must continue with some basic analysis of the choice between one riskfree and one risky asset, following ?. Such an example is sufficient to show several things:

1. There is no guarantee that the portfolio choice problem has any finite or unique solution unless the expected utility function is concave.

2. (Probably local risk neutrality and stuff like that too.)
5.6 Jensen's Inequality and Siegel's Paradox

Theorem 5.6.1 (Jensen's Inequality) The expected value of a (strictly) concave function of a random variable is (strictly) less than the same concave function of the expected value of the random variable:

E[u(W̃)] ≤ u(E[W̃]) when u is concave.  (5.6.1)

Similarly, the expected value of a (strictly) convex function of a random variable is (strictly) greater than the same convex function of the expected value of the random variable.

Proof There are three ways of motivating this result, but only one provides a fully general and rigorous proof. Without loss of generality, consider the concave case.

1. One can reinterpret the defining inequality of concavity in terms of a discrete random vector x̃ taking on the value x with probability π and x′ with probability 1 − π:

∀x ≠ x′ ∈ X, π ∈ (0, 1): f(πx + (1 − π)x′) ≥ πf(x) + (1 − π)f(x′),  (5.6.2)

which just says that f(E[x̃]) ≥ E[f(x̃)]. An inductive argument can be used to extend the result to all discrete r.v.s with a finite number of possible values, but runs into problems if the number of possible values is either countably or uncountably infinite.

2. Similarly, one can take expectations on both sides of the first order condition for concavity, where the two vectors considered are the mean E[x̃] and a generic value x:

f(x̃) ≤ f(E[x̃]) + f′(E[x̃])(x̃ − E[x̃]).  (5.6.3)

Taking expectations on both sides, the first order term will again disappear, once more yielding:

f(E[x̃]) ≥ E[f(x̃)].  (5.6.4)

3. One can also appeal to the second order condition for concavity, and the second order Taylor series expansion of f around E[x̃]:

E[f(x̃)] = f(E[x̃]) + (1/2) f″(x∗) Var[x̃]  (5.6.5)
for some x∗ in the support of x̃. If f is concave, then the second derivative is non-positive and the variance is non-negative, so

E[f(x̃)] ≤ f(E[x̃]).  (5.6.6)

However, this supposes that x∗ is fixed, whereas in fact it varies with the value taken on by x̃, and is itself a random variable, correlated with x̃.

The arguments for convex functions, strictly concave functions and strictly convex functions are almost identical.

To get a feel for the extent to which E[f(x̃)] differs from f(E[x̃]), we can again use the following (wrong) second order Taylor approximation based on (5.6.5):

E[f(x̃)] ≈ f(E[x̃]) + (1/2) f″(E[x̃]) Var[x̃].  (5.6.7)

This shows that the difference is larger the larger is the curvature of f (as measured by the second derivative at the mean of x̃) and the larger is the variance of x̃. This result is often useful with functions such as x → ln x and x → 1/x. One area in which this idea can be applied is the computation of present values based on replacing uncertain future discount factors with point estimates derived from expected future interest rates.

Another nice application of Jensen's Inequality in finance is:

Theorem 5.6.2 (Siegel's Paradox) Current forward (relative) prices can not all equal expected future spot prices.

Proof Let F̃_t be the current forward price and S̃_{t+1} the unknown future spot price. If E_t[S̃_{t+1}] = F̃_t, then Jensen's Inequality tells us that

1/F̃_t = 1/E_t[S̃_{t+1}] < E_t[1/S̃_{t+1}],

except in the degenerate case where S̃_{t+1} is known with certainty at time t. But since the reciprocals of relative prices are also relative prices, we have shown that our initial hypothesis is untenable in terms of a different numeraire.
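Both results are easy to check numerically; a small sketch of my own, with a made-up two-point exchange-rate distribution:

```python
# Next period's spot exchange rate: 1.25 or 0.80, each with probability 1/2.
outcomes = [1.25, 0.80]
probs = [0.5, 0.5]

E_S = sum(p * s for p, s in zip(probs, outcomes))       # E[S] = 1.025
F = E_S   # hypothesis under test: forward rate = expected future spot rate

# Jensen's Inequality for the convex function s -> 1/s: E[1/S] > 1/E[S].
E_inv_S = sum(p / s for p, s in zip(probs, outcomes))   # also 1.025 here
print(1 / F, E_inv_S)

# Siegel's paradox: viewed from the other currency the forward rate is 1/F,
# so it cannot simultaneously equal the expected future spot rate E[1/S].
assert E_inv_S > 1 / F
```

The example is deliberately symmetric (1.25 and 0.80 are reciprocals), so the same hypothesis fails in exactly the same way from either currency's perspective.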
Q.E.D.

The original and most obvious application of Siegel's paradox is in the case of currency exchange rates. In that case, F̃_t and S̃_t are stochastic processes representing forward and spot exchange rates respectively. It seems reasonable to assume that forward exchange rates are good predictors of spot exchange rates in the future, say:

F̃_t = E_t[S̃_{t+1}].

But this is an internally inconsistent hypothesis. Siegel's paradox applies equally well to any theory which uses current prices as a predictor of future values. Another such theory which is enormously popular is the Efficient Markets Hypothesis of ?. In its general form, this says that current prices should fully reflect all available information about (expected) future values. Attempts to make the words fully reflect in any way mathematically rigorous quickly run into problems.

5.7 Risk Aversion

An individual is risk averse if he or she is unwilling to accept, or indifferent to, any actuarially fair gamble. (An individual is strictly risk averse if he or she is unwilling to accept any actuarially fair gamble.) In other words, ∀W_0 and ∀p, h_1, h_2 such that

W_0 = p(W_0 + h_1) + (1 − p)(W_0 + h_2),

i.e. ph_1 + (1 − p)h_2 = 0:

u(W_0) ≥ (>) pu(W_0 + h_1) + (1 − p)u(W_0 + h_2).

The following interpretation of the above definition of risk aversion is based on Jensen's Inequality (see Section 5.6): a (strictly) risk averse individual is one whose VNM utility function is (strictly) concave. Similarly, a (strictly) risk loving individual is one whose VNM utility function is (strictly) convex, and a risk neutral individual is one whose VNM utility function is affine.

Figure 1 goes here.
Some people are more risk averse than others; some functions are more concave than others. However, how do we measure this? The importance and usefulness of the Arrow-Pratt measures of risk aversion which we now define will become clearer as we proceed, in particular from the analysis of the portfolio choice problem. (Cut and paste relevant quotes from Purfield-Waldron papers in here.)

We can distinguish between local and global risk aversion: an individual is locally risk averse at w if u″(w) < 0 and globally risk averse if u″(w) < 0 ∀w. Most functions do not fall into any of these categories, and represent behaviour which is locally risk averse at some wealth levels and locally risk loving at other wealth levels. However, in most of what follows we will find it convenient to assume that individuals are globally risk averse. Individuals who are globally risk averse will never gamble, in the sense that they will never have a bet unless they believe that the expected return on the bet is positive.

Definition 5.7.1 (The Arrow-Pratt coefficient of) absolute risk aversion is:

R_A(w) = −u″(w)/u′(w),

which is the same for u and au + b. Note that this varies with the level of wealth. u″(w) alone is meaningless as a measure of risk aversion, as u′ and u″ can be multiplied by any positive constant and still represent the same preferences; however, the above ratio is independent of the expected utility function chosen to represent the preferences.

The utility function u exhibits increasing (constant, decreasing) absolute risk aversion (IARA, CARA, DARA) ⇐⇒ R_A′(w) > (=, <) 0 ∀w.

Definition 5.7.2 (The Arrow-Pratt coefficient of) relative risk aversion is:

R_R(w) = wR_A(w).

The utility function u exhibits increasing (constant, decreasing) relative risk aversion (IRRA, CRRA, DRRA) ⇐⇒
R_R′(w) > (=, <) 0 ∀w.

Note:

• CARA or IARA ⇒ IRRA
• CRRA or DRRA ⇒ DARA

Here are some examples of utility functions and their risk measures:

• Quadratic utility (IARA, IRRA):

u(w) = w − (b/2)w², b > 0,
u′(w) = 1 − bw,
u″(w) = −b < 0,
R_A(w) = b/(1 − bw),
dR_A(w)/dw = b²/(1 − bw)² > 0.

In this case, marginal utility is positive and utility increasing if and only if w < 1/b; 1/b is called the bliss point of the quadratic utility function. For realism, 1/b should be rather large and thus b rather small.

• Negative exponential utility (CARA, IRRA):

u(w) = −e^{−bw}, b > 0,
u′(w) = be^{−bw} > 0,
u″(w) = −b²e^{−bw} < 0,
R_A(w) = b,
dR_A(w)/dw = 0.

• Narrow power utility (CRRA, DARA):

u(w) = (B/(B − 1)) w^{1−1/B}, w > 0, B > 0, B ≠ 1.
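A quick numerical cross-check of these formulas (my own sketch), estimating u′ and u″ with central finite differences:

```python
import math

def arrow_pratt_absolute(u, w, h=1e-4):
    """R_A(w) = -u''(w)/u'(w), estimated with central finite differences."""
    u1 = (u(w + h) - u(w - h)) / (2 * h)
    u2 = (u(w + h) - 2 * u(w) + u(w - h)) / h ** 2
    return -u2 / u1

b = 2.0
cara = lambda w: -math.exp(-b * w)       # negative exponential utility
quad = lambda w: w - (b / 2) * w ** 2    # quadratic utility (valid for w < 1/b)

# CARA: R_A(w) = b at every wealth level (constant absolute risk aversion).
print(arrow_pratt_absolute(cara, 0.1), arrow_pratt_absolute(cara, 0.4))
# Quadratic: R_A(w) = b/(1 - bw), increasing in wealth (IARA).
print(arrow_pratt_absolute(quad, 0.1), arrow_pratt_absolute(quad, 0.3))
```

With b = 2, the quadratic values should be close to 2/(1 − 0.2) = 2.5 at w = 0.1 and 2/(1 − 0.6) = 5 at w = 0.3, while the CARA values are constant at b.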
The proofs in this case are left as an exercise. The solution is roughly as follows:

u(z) = (B/(B − 1)) z^{1−1/B}, z > 0,
u′(z) = z^{−1/B},
u″(z) = −(1/B) z^{−1/B−1},
R_A(z) = 1/(Bz),
R_R(z) = 1/B.

Add comments: u′(w) > 0, u″(w) < 0 and u′(w) → 1/w as B → 1, so that u(w) → ln w. Check ? for details of this one.

5.8 The Mean-Variance Paradigm

Three arguments are commonly used to motivate the mean-variance framework for analysis of the portfolio choice problem:

1. If all investors are risk neutral, then prices will adjust in equilibrium so that all securities have the same expected return. If there are risk averse or risk loving investors, then there is no reason for this result to hold, and in fact it almost certainly will not.

2. Taylor-approximated utility functions (see Chapter 2): expanding u around expected terminal wealth gives

E[u(W̃)] ≈ u(E[W̃]) + (1/2)u″(E[W̃])Var[W̃] + (1/3!)u‴(E[W̃])E[(W̃ − E[W̃])³] + . . .  (5.8.4)

It follows that the sign of the nth derivative of the utility function determines the direction of preference for the nth central moment of the probability distribution of terminal wealth. For example, a positive third derivative implies a preference for greater skewness; it can be shown fairly easily that an increasing utility function which exhibits non-increasing absolute risk aversion has a non-negative third derivative. Note that the expected utility axioms are neither necessary nor sufficient to guarantee that the Taylor approximation to n moments is a valid representation of the utility function. Some counterexamples of both types are probably called for here, or maybe can be left as exercises, and one might speculate about further extensions to higher moments.

3. Normally distributed asset returns.

5.9 The Kelly Strategy

In a multi-period, discrete time, investment framework, investors will be concerned with both growth (return) and security (risk). There will be a trade-off between the two, and investors will be concerned with finding the optimal tradeoff, which, of course, depends on preferences. There are three ways of measuring growth:

1. the expected wealth at time t
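The skewness-preference claim above can be illustrated numerically (a sketch of my own, with made-up gambles): log utility has u‴ = 2/w³ > 0, and between two gambles with identical mean and variance it assigns higher expected utility to the positively skewed one.

```python
import math

# Two gambles with the same mean (1.0) and variance but opposite skewness:
# mirror images of each other about the mean.
pos_skew = [(1.4, 0.2), (0.9, 0.8)]   # long right tail: third moment > 0
neg_skew = [(0.6, 0.2), (1.1, 0.8)]   # long left tail: third moment < 0

def moment(lottery, n, centre=1.0):
    """nth central moment of a finite lottery [(outcome, probability), ...]."""
    return sum(p * (x - centre) ** n for x, p in lottery)

def EU(u, lottery):
    return sum(p * u(x) for x, p in lottery)

print(moment(pos_skew, 2), moment(neg_skew, 2))   # equal variances (0.04)
print(moment(pos_skew, 3), moment(neg_skew, 3))   # roughly +0.012 vs -0.012

# Log utility (u''' > 0) prefers the positively skewed gamble:
print(EU(math.log, pos_skew) > EU(math.log, neg_skew))   # True
```

Mirroring a gamble about its mean leaves all even central moments unchanged and flips the sign of the odd ones, which is what makes this a clean controlled comparison.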
CHAPTER 6. PORTFOLIO THEORY

We can solve this equation for any of four quantities given the other three ((a)–(d) above).

2. Continuous compounding: P_t = e^{rt}P_0. Note that y = 1 + r is the tangent to y = e^r at r = 0 and hence that e^r > 1 + r ∀r ≠ 0. In other words, continuous compounding yields a higher terminal value than discrete compounding for all interest rates, positive and negative, with equality for a zero interest rate only. Similarly, given a final value, continuous discounting yields a lower present value than does discrete discounting. Note also that the exponential function is its own derivative.

3. The (net) present value (NPV) of a stream of cash flows, P_0, P_1, . . . , P_T, is

NPV(r) ≡ P_0 + P_1/(1 + r) + P_2/(1 + r)² + · · · + P_T/(1 + r)^T.

4. The internal rate of return (IRR) of the stream of cash flows, P_0, P_1, . . . , P_T, is the solution of the polynomial of degree T obtained by setting the NPV equal to zero:

P_0 + P_1/(1 + r) + P_2/(1 + r)² + · · · + P_T/(1 + r)^T = 0.
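A minimal sketch (mine, with made-up cash flows) of computing the NPV and backing out the IRR by bisection on NPV(r) = 0:

```python
def npv(r, cashflows):
    """Net present value of cashflows P_0, P_1, ..., P_T at discrete rate r."""
    return sum(p / (1 + r) ** t for t, p in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-10):
    """Bisection on npv(r) = 0; assumes a single sign change on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid        # root lies in [lo, mid]
        else:
            lo = mid        # root lies in [mid, hi]
    return (lo + hi) / 2

flows = [-100.0, 60.0, 60.0]     # outlay of 100, then 60 in each of 2 years
print(npv(0.10, flows))          # positive NPV at a 10% discount rate
print(irr(flows))                # roughly 0.1307 (13.07%)
```

For this quadratic example the IRR can also be found in closed form from 100x² − 60x − 60 = 0 with x = 1 + r, which confirms the bisection answer; with longer streams and sign changes in the cash flows, the polynomial can have several meaningful roots, as the text notes.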
In general, the polynomial defining the IRR has T (complex) roots. Conditions have been derived under which there is only one meaningful real root to this polynomial equation,¹ in other words one corresponding to a positive IRR. Consider a quadratic example. (Make this into an exercise.) Consider as an example the problem of calculating mortgage repayments.

Simple rates of return are additive across portfolios, so we use them in one period cross sectional studies. Continuously compounded rates of return are additive across time, so we use them in multi-period single variable studies.

The presentation is in terms of a single period problem. The analysis of the multi-period, discrete time, infinite horizon problem, such as in Chapter 7, is quite similar, concentrating on the conditional distribution of the next period's returns given this period's, and the unconditional distribution of returns. From time to time, in particular in this chapter, we will add a riskfree asset.

6.2 Notation

The investment opportunity set for the portfolio choice problem will generally consist of N risky assets. The notation used throughout this chapter is set out in Table 6.2.

Table 6.2: Notation for portfolio choice problem

W_0 — the investor's initial wealth
— the investor's desired expected final wealth
N — number of risky assets
r_f — return on the riskfree asset
r̃_j — return on jth risky asset
r̃ = (r̃_1, . . . , r̃_N)′ — vector of returns
— vector of expected returns
— variance-covariance matrix of returns
1 = (1, . . . , 1)′ — N-dimensional vector of 1s
w_j — amount invested in jth risky asset
w = (w_1, . . . , w_N)′ — portfolio of risky asset holdings
w′r̃ — return on the portfolio w
— the investor's desired expected return

¹ These conditions are discussed in ?.
Definition 6.2.1 w is said to be a unit cost or normal portfolio if its weights sum to 1 (w′1 = 1).

Definition 6.2.3 Short-selling a security means owning a negative quantity of it. In practice, short-selling means promising (credibly) to pay someone the same cash flows as would be paid by a security that one does not own, always being prepared, if required, to pay the current market price of the security to end the arrangement. Thus when short-selling is allowed w can have negative components; if short-selling is not allowed, then the portfolio choice problem will have non-negativity constraints, w_i ≥ 0 for i = 1, . . . , N.

A number of further comments are in order at this stage.

1. The portfolio held by an investor with initial wealth W_0 can be thought of either as a w with w′1 = W_0 or as the corresponding normal portfolio (1/W_0)w. The vector of net trades carried out by an investor moving from the portfolio w⁰ to the portfolio w¹ can be thought of as the hedge portfolio w¹ − w⁰. It will hopefully be clear from the context which meaning of 'portfolio' is intended. In the literature, initial wealth is often normalised to unity (W_0 = 1), but the development of the theory will be more elegant if we avoid this.

2. Note that where we initially worked with net rates of return (P_1/P_0 − 1), we will deal henceforth with gross rates of return (P_1/P_0). Since we will be dealing on occasion with hedge portfolios, we will in future avoid the concepts of rate of return and net return, which are usually thought of as the ratio of profit to initial investment: these terms are meaningless for a hedge portfolio, as the denominator is zero. Instead, we will speak of the gross return on a portfolio, or the portfolio payoff. This can be defined unambiguously as follows. There is no ambiguity about the payoff on one of the original securities. The payoff on a unit cost or normal portfolio is equivalent to the gross return, which is just the gross return per pound invested; it is just w′r̃.
Definition 6.2.2 w is said to be a zero cost or hedge portfolio if its weights sum to 0 (w′1 = 0). The payoff on a zero cost portfolio can also be defined as w′r̃.
6.3 The Single-period Portfolio Choice Problem

6.3.1 The canonical portfolio problem

Unless otherwise stated, we assume that individuals:

1. have von Neumann-Morgenstern (VNM) utilities, i.e. preferences have the expected utility representation:

v(z̃) = E[u(z̃)] = ∫ u(z) dF_{z̃}(z),

where v is the utility function for random variables (gambles, lotteries) and u is the utility function for sure things;

2. prefer more to less, i.e. u is increasing: u′(z) > 0 ∀z;

3. are (strictly) risk-averse, i.e. u is strictly concave: u″(z) < 0 ∀z.

Date 0 investment:

• w_j (pounds) in jth risky asset, j = 1, . . . , N;
• (W_0 − Σ_j w_j) in risk free asset.

Date 1 payoff:

• w_j r̃_j from jth risky asset;
• (W_0 − Σ_j w_j)r_f from risk free asset.

It is assumed here that there are no constraints on short-selling or borrowing (which is the same as short-selling the riskfree security). Thus

W̃ = (W_0 − Σ_j w_j)r_f + Σ_j w_j r̃_j  (6.3.1)
   = W_0 r_f + Σ_j w_j (r̃_j − r_f).  (6.3.2)

The solution is found as follows: choose the w_j s to maximize the expected utility of date 1 wealth.
The first order conditions are:

E[u′(W̃)(r̃j − rf)] = 0   ∀j.   (6.3.3)

The trivial case in which the random returns are not really random at all can be ignored. Since we have assumed that investor behaviour is risk averse, i.e. u″ < 0, we have h′Ah < 0 ∀h ≠ 0N, so that under the present assumptions A is a negative definite matrix and the objective function f(w1, ..., wN) is a strictly concave (and hence strictly quasiconcave) function. Theorems 3.3.3 and 3.4.4 then guarantee that the first order conditions have a unique solution.

Another way of writing (6.3.3) is:

E[u′(W̃)r̃j] = E[u′(W̃)]rf   ∀j   (6.3.4)

or

Cov[u′(W̃), r̃j] + E[u′(W̃)]E[r̃j] = E[u′(W̃)]rf   ∀j   (6.3.5)

or

E[r̃j − rf] = −Cov[u′(W̃), r̃j] / E[u′(W̃)]   ∀j.   (6.3.6)

Suppose pj is the price of the random payoff x̃j. Then

r̃j = x̃j / pj   ∀j   (6.3.7)

and

pj = E[ (u′(W̃) / (E[u′(W̃)]rf)) x̃j ]   ∀j.   (6.3.8)

Thus, payoffs are valued by taking their expected present value, using the stochastic discount factor u′(W̃)/(E[u′(W̃)]rf), which ends up being the same for all investors. Practical corporate finance and theoretical asset pricing models to a large extent are (or should be) concerned with analysing this discount factor. (We could consider here the explicit example with quadratic utility from the problem sets.)

The rest of this section should be omitted until I figure out what is going on.
6.3.2 Risk aversion and portfolio composition

For the moment, assume only one risky asset (N = 1).

Definition 6.3.1 Let f : X → R++ be a positive-valued function defined on X ⊆ Rk. Then the elasticity of f with respect to xi at x* is

(xi*/f(x*)) (∂f/∂xi)(x*).   (6.3.9)

Roughly speaking, the elasticity is just ∂ln f/∂ln xi, or the slope of the graph of the function on log-log graph paper. A function is said to be inelastic when the absolute value of the elasticity is less than unity, and elastic when the absolute value of the elasticity is greater than unity. The borderline case is called a unit elastic function.

One useful application of elasticity is in analysing the behaviour of the total revenue function associated with a particular inverse demand function, P(Q). We have:

d(P(Q)Q)/dQ = Q (dP/dQ) + P = P(1 + 1/η),   (6.3.10)-(6.3.13)

where η is the elasticity of demand.

We first consider the concept of local risk neutrality.
Hence total revenue is constant (or maximised or minimised) where elasticity equals −1, increasing when elasticity is less than −1 (demand is elastic), and decreasing when elasticity is between 0 and −1 (demand is inelastic).

Theorem 6.3.1 DARA ⇒ RISKY ASSET NORMAL

Proof By implicit differentiation of the now familiar first order condition (6.3.3), which can be written:

E[u′(W0 rf + a(r̃ − rf))(r̃ − rf)] = 0,   (6.3.14)

we have

da/dW0 = E[u″(W̃)(r̃ − rf)]rf / (−E[u″(W̃)(r̃ − rf)²]),   (6.3.16)-(6.3.17)

since by assuming a positive expected risk premium on the risky asset we guarantee (by local risk-neutrality) that a is positive. Note that

sign(d(a/W0)/dW0) = sign(η − 1).   (6.3.15)

By concavity, the denominator in (6.3.16) is positive. Therefore:

sign(da/dW0) = sign{E[u″(W̃)(r̃ − rf)]}.   (6.3.18)

We will show that both are positive, provided that r̃ > rf with positive probability;³ hence the LHS is positive as claimed.

Q.E.D.

The first such result is due to ?. The other results are proved similarly (exercise!).

6.3.3 Mutual fund separation

Commonly, investors delegate portfolio choice to mutual fund operators or managers. We are interested in conditions under which large groups of investors will agree on portfolio composition. For example, all investors with similar utility functions might choose the same portfolio, or all investors with similar probability beliefs might choose the same portfolio. More realistically, we may be able to define a group of investors whose portfolio choices all lie in a subspace of small dimension (say 2) of the N-dimensional portfolio space.

³ Think about whether separating out the case of r̃ = rf is necessary.
Theorem 6.3.2 ∃ two fund monetary separation, i.e. agents with different wealths W0 (but the same increasing, strictly concave, VNM utility) hold the same risky unit cost portfolio, p* say (but may differ in the mix of the riskfree asset and the risky portfolio), ⟺ marginal utility satisfies

u′(z) = (A + Bz)^C   (6.3.22)

or

u′(z) = A exp{Bz},   (6.3.23)

i.e. ∃ Hyperbolic Absolute Risk Aversion (HARA, incl. CARA), i.e. the utility function is of one of these types:

• Extended power: u(z) = (1/((C + 1)B))(A + Bz)^(C+1), where A, B and C are chosen to guarantee u′ > 0, u″ < 0;
• Logarithmic: u(z) = ln(A + Bz);
• Negative exponential: u(z) = −(A/B) exp{Bz}, where A, B and C are again chosen to guarantee u′ > 0, u″ < 0.

Proof The proof that these conditions are necessary for two fund separation is difficult and tedious. The interested reader is referred to ?. We will show that u′(z) = (A + Bz)^C is sufficient for two-fund separation. The first order conditions are equivalent to the system of equations

E[(1 + Σj Bwj(r̃j − rf)/(A + BW0 rf))^C (r̃i − rf)] = 0   ∀i,   (6.3.26)-(6.3.27)
or

E[(1 + Σj xj(r̃j − rf))^C (r̃i − rf)] = 0   ∀i,   (6.3.28)

where

xj ≡ Bwj / (A + BW0 rf).   (6.3.29)

The unique solutions for the xj are clearly independent of W0, which does not appear in (6.3.28). Since A and B do not appear either, the unique solutions for the xj are also independent of those parameters. However, they do depend on C. But the risky portfolio weights are

wi / Σj wj = (Bwi/(A + BW0 rf)) / (Σj Bwj/(A + BW0 rf)) = xi / Σj xj   (6.3.30)

and so are also independent of initial wealth. Since the dollar investment in the jth risky asset satisfies:

wj = xj (A/B + W0 rf),   (6.3.31)

we also have in this case that the dollar investment in the common risky portfolio is a linear function of the initial wealth. The other sufficiency proofs are similar and are left as exercises.

Q.E.D.

Some humorous anecdotes about Cass may now follow.

6.4 Mathematics of the Portfolio Frontier

6.4.1 The portfolio frontier in RN: risky assets only

A frontier portfolio attains a given expected final wealth (or, equivalently, an expected rate of return of µ) with the smallest possible variance of final wealth.
Definition 6.4.2 w is a frontier portfolio ⟺ its return has the minimum variance among all portfolios that have the same cost, w′1, and the same expected payoff, w′e.

The notation here follows ?. The mean-variance frontier can also be called the two-moment portfolio frontier, in recognition of the fact that the same approach can be extended (with difficulty) to higher moments. The mean-variance portfolio frontier is a subset of the portfolio space, RN. However, introductory treatments generally present it (without proof) as the envelope function of the variance minimisation problem, in mean-variance space or mean-standard deviation space (R+ × R). The properties of this two-moment frontier are well known, and can be found, for example, in ? or ?. The derivation of the mean-variance frontier is generally presented in the literature in terms of portfolio weight vectors or, equivalently, assuming that initial wealth, W0, equals 1. This assumption is not essential and will be avoided.

We will begin by supposing that all assets are risky. Formally, a frontier portfolio solves the variance minimisation problem:

min_w w′Vw   (6.4.1)
s.t. w′1 = W0   (6.4.2)
     w′e ≥ µ,   (6.4.3)

or the equivalent maximisation problem:

max_w −w′Vw   (6.4.4)

subject to the same linear constraints (6.4.2) and (6.4.3). The first constraint is just the budget constraint, while the second constraint states that the expected rate of return on the portfolio is at least the desired mean return µ. The frontier is the set of solutions to this variance minimisation problem for all values of W0 and W1 (or µ).
The solution

The inequality constrained maximisation problem (6.4.4) is just a special case of the canonical quadratic programming problem considered at the end of Section 3.5, except that it has explicitly one equality constraint and one inequality constraint. The parallels are a little fuzzy in the case of the budget constraint, since it is really an equality constraint. In the portfolio problem, the place of the matrix A in the canonical quadratic programming problem is taken by the (symmetric) negative definite matrix −V, which is just the negative of the variance-covariance matrix of asset returns; g1 = 1 and α1 = W0, and g2 = e and α2 = W1.

To avoid degeneracies, we require:

1. that the variance-covariance matrix, V, is (strictly) positive definite. We already know that V must be positive semi-definite, but we require this slightly stronger condition. To see why, suppose ∃w ≠ 0N s.t. w′Vw = 0. Then ∃ a portfolio whose return w′r̃ = r̃w has zero variance, i.e. that this portfolio is riskless. This implies that r̃w = r0 (say) w.p.1. Arbitrage will force the returns on all riskless assets to be equal in equilibrium, so this situation is equivalent economically to the introduction of a riskless asset later.

2. that not every portfolio has the same expected return, i.e. e ≠ E[r̃1]1, and in particular that N > 1. This guarantees that the 2 × N matrix G is of full rank 2.

The solution to the canonical quadratic programming problem (Section 3.5) says that the optimal w is a linear combination, with columns weighted by initial wealth W0 and expected final wealth W1, of the two columns of the N × 2 matrix

V⁻¹G′(GV⁻¹G′)⁻¹.   (6.4.5)-(6.4.6)

We will call these columns g and h and write the solution as

w = W0 g + W1 h = W0 (g + µh).   (6.4.7)
Alternatively, the solution can be written in terms of the Lagrange multipliers corresponding to the two constraints:

w = γ(V⁻¹1) + λ(V⁻¹e),   (6.4.8)

where we define:

A ≡ 1′V⁻¹e = e′V⁻¹1   (6.4.9)
B ≡ e′V⁻¹e > 0   (6.4.10)
C ≡ 1′V⁻¹1 > 0   (6.4.11)
D ≡ BC − A²,   (6.4.12)

and the inequalities follow from the fact that V⁻¹ (like V) is positive definite. The components of g and h are functions of the means and variances of security returns. Note that (1/C)(V⁻¹1) and (1/A)(V⁻¹e) are both unit portfolios.   (6.4.13)

We know that for the portfolio which minimises variance for a given initial wealth W0, regardless of expected final wealth, the corresponding Lagrange multiplier λ = 0. Thus γ(V⁻¹1) is the global minimum variance portfolio with cost W0 (which in fact equals γC in this case). Thus the vector of optimal portfolio proportions, (1/W0)w, is independent of the initial wealth W0.

It is easy to see the economic interpretation of g and h:

• g is the frontier portfolio corresponding to W0 = 1 and W1 = 0. In other words, it is the normal portfolio which would be held by an investor whose objective was to (just) go bankrupt with minimum variance.

• Similarly, h is the frontier portfolio corresponding to W0 = 0 and W1 = 1. In other words, it is the hedge portfolio which would be purchased by a variance-minimising investor in order to increase his expected final wealth by one unit.
Likewise, (1/C)(V⁻¹1) is the global minimum variance unit cost portfolio, which we will denote wMVP. We can combine (6.4.7) and (6.4.13) and write the solution as:

w = W0 (wMVP + (µ − A/C) h).   (6.4.14)

The details are left as an exercise.⁴ An important exercise at this stage is to work out the means, variances and covariances of the returns on wMVP, g and h.

The set of solutions to this quadratic programming problem for all possible (W0, W1) combinations (including negative W0) is the vector subspace of the portfolio space which is generated either by the vectors g and h or by the vectors wMVP and h (or by any pair of linearly independent frontier portfolios). In RN, the set of unit cost frontier portfolios is the line passing through g, parallel to h. It follows immediately that the frontier (like any straight line in RN) is a convex set, and can be generated by linear combinations of any pair of frontier portfolios with weights of the form α and (1 − α).

Orthogonal decomposition of portfolios

At this stage, we must introduce a scalar product on the portfolio space, namely that based on the variance-covariance matrix V. Since V is a non-singular, positive definite matrix, it defines a well behaved scalar product and all the standard results on orthogonal projection (&c.) from linear algebra are valid.

⁴ At least for now.
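The decomposition w = W0 g + W1 h and the constants A, B, C and D can be checked numerically. The covariance matrix and mean returns below are made-up illustrative numbers, and the closed forms used for g and h are the standard ones from the mean-variance literature, stated here as an assumption rather than read off the derivation above.

```python
# Numerical sketch of the frontier solution for two risky assets.
# V and e are made-up illustrative inputs.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

V = [[0.04, 0.01],
     [0.01, 0.09]]          # covariance matrix of gross returns (assumed)
e = [1.08, 1.12]            # expected gross returns (assumed)
ones = [1.0, 1.0]

# Inverse of the 2x2 matrix V.
det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[ V[1][1] / det, -V[0][1] / det],
        [-V[1][0] / det,  V[0][0] / det]]

Vinv1 = mat_vec(Vinv, ones)
Vinve = mat_vec(Vinv, e)

# The constants of (6.4.9)-(6.4.12).
A = dot(ones, Vinve)
B = dot(e, Vinve)
C = dot(ones, Vinv1)
D = B * C - A * A

# Standard closed forms: g is unit cost with zero expected payoff,
# h is zero cost with unit expected payoff.
g = [(B * Vinv1[i] - A * Vinve[i]) / D for i in range(2)]
h = [(C * Vinve[i] - A * Vinv1[i]) / D for i in range(2)]

w_mvp = [x / C for x in Vinv1]   # global minimum variance unit cost portfolio

# Check the defining properties.
assert abs(dot(g, ones) - 1.0) < 1e-8 and abs(dot(g, e)) < 1e-8
assert abs(dot(h, ones)) < 1e-8 and abs(dot(h, e) - 1.0) < 1e-8
# wMVP is the frontier portfolio g + (A/C)h, and its variance is 1/C.
assert all(abs(w_mvp[i] - (g[i] + (A / C) * h[i])) < 1e-8 for i in range(2))
var_mvp = dot(w_mvp, mat_vec(V, w_mvp))
assert abs(var_mvp - 1.0 / C) < 1e-8
```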
Two portfolios w1 and w2 are orthogonal with respect to this scalar product ⟺ w1′Vw2 = 0 ⟺ Cov[w1′r̃, w2′r̃] = 0 ⟺ the random variables representing the returns on the portfolios are uncorrelated. Thus, the terms 'orthogonal' and 'uncorrelated' may legitimately, and shall, be applied interchangeably to pairs of portfolios. At this stage, recall the definition of β in (5.2.2).

Note that wMVP and h are orthogonal vectors in this sense. Since wMVP is collinear with V⁻¹1, it is orthogonal to all portfolios w for which w′VV⁻¹1 = 0, or in other words to all portfolios for which w′1 = 0. But these are precisely all hedge portfolios, including h. Similarly, any portfolio collinear with V⁻¹e is orthogonal to all portfolios with zero expected return, since w′VV⁻¹e = 0 or in other words w′e = 0. In fact, we have the following theorem:

Theorem 6.4.1 If w is a frontier portfolio and u is a zero mean hedge portfolio, then w and u are uncorrelated.

Proof There is probably a full version of this proof lost somewhere but the following can be sorted out.

Q.E.D.

Some pictures are in order at this stage. ? has some nice pictures of the frontier in portfolio space, as opposed to mean-variance space. For N = 3, in the set of portfolios costing W0 (the W0 simplex), the iso-variance curves are concentric ellipses and the iso-mean curves are parallel lines. The centre of the concentric ellipses is at the global minimum variance portfolio corresponding to W0, and the solutions for different µs (or W1s) are the tangency points between these ellipses and the iso-mean lines, which themselves lie on a line orthogonal (in the sense defined above) to the iso-mean lines. A similar geometric interpretation can be applied in higher dimensions. Furthermore, the squared length of a weight vector corresponds to the variance of its returns. We can always choose an orthogonal basis for the portfolio frontier.
For any frontier portfolio p ≠ wMVP, there is a unique unit cost frontier portfolio zp which is orthogonal to p. (Another important exercise is to figure out the relationship between E[r̃p] and E[r̃zp].) Any two frontier portfolios span the frontier, in particular any unit cost p ≠ wMVP and zp (or the original basis, wMVP and h).

If p is a unit cost frontier portfolio (i.e. the vector of portfolio proportions) and q is an arbitrary unit cost portfolio, then the following decomposition holds:

q = fq + uq = βqp p + (1 − βqp) zp + uq,   (6.4.19)-(6.4.21)

where the three components (i.e. the vectors p, zp and uq) are mutually orthogonal, fq is the frontier portfolio with expected return E[r̃q] and cost W0, and uq is a hedge portfolio with zero expected return. Geometrically, this decomposition is equivalent to the orthogonal projection of q onto the frontier. Theorem 6.4.1 has shown that any portfolio sharing these properties of uq is uncorrelated with all frontier portfolios.

We can extend this decomposition to cover:

1. portfolio proportions (orthogonal vectors);
2. portfolio proportions (scalars/components);
3. returns (uncorrelated random variables);
4. expected returns (numbers).

Note again the parallel between orthogonal portfolio vectors and uncorrelated portfolio returns/payoffs.

We will now derive the relation:

E[r̃q] − E[r̃zp] = βqp (E[r̃p] − E[r̃zp]),   (6.4.22)

Aside: For the frontier portfolio fq to second degree stochastically dominate the arbitrary portfolio q, we will need zero conditional expected return on uq, and will have to show that Cov[r̃uq, r̃fq] = 0 ⟹ E[r̃uq | r̃fq] = 0. The normal distribution is the only case where this is true.
which may be familiar from earlier courses in financial economics and which is quite general: it neither requires asset returns to be normally distributed nor any assumptions about preferences. Since Cov[r̃uq, r̃p] = Cov[r̃zp, r̃p] = 0, taking covariances with r̃p in (6.4.21) gives:

Cov[r̃q, r̃p] = Cov[r̃fq, r̃p] = βqp Var[r̃p]   (6.4.23)

or

βqp = Cov[r̃q, r̃p] / Var[r̃p].   (6.4.24)

Thus β in (6.4.21) has its usual definition from probability theory, given by (5.2.2). Reversing the roles of p and zp, it can be seen that βqzp = 1 − βqp. Taking expected returns in (6.4.21) yields again:

E[r̃q] = βqp E[r̃p] + (1 − βqp) E[r̃zp],

which can be rearranged to obtain (6.4.22).

The Global Minimum Variance Portfolio

Var[r̃g+µh] = g′Vg + 2µ(g′Vh) + µ²(h′Vh),   (6.4.25)

which has its minimum at

µ = −(g′Vh)/(h′Vh).   (6.4.26)-(6.4.27)

The latter expression reduces to A/C, and the minimum value of the variance is 1/C. The global minimum variance portfolio is denoted MVP. Further,

Cov[r̃h, r̃MVP] = h′V(g + (A/C)h) = h′Vg + (A/C)h′Vh = 0,   (6.4.28)-(6.4.30)

i.e. the returns on the portfolio with weights h and the minimum variance portfolio are uncorrelated.

Assign some problems involving the construction of portfolio proportions for various desired βs. Also problems working from prices for state contingent claims to returns on assets and portfolios in both single period and multi-period worlds.
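The relation (6.4.22) can be verified directly in a small two-asset example. All numbers below are illustrative assumptions; zp is constructed by adding a hedge direction to p until the covariance with p vanishes.

```python
# Numerical check of E[r̃q] − E[r̃zp] = βqp (E[r̃p] − E[r̃zp]) with two
# risky assets. V, e, p and q are made-up illustrative inputs.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cov(w1, w2, V):
    return sum(w1[i] * V[i][j] * w2[j] for i in range(2) for j in range(2))

V = [[0.04, 0.01], [0.01, 0.09]]   # covariance matrix (assumed)
e = [1.08, 1.12]                   # expected gross returns (assumed)

p = [0.8, 0.2]                     # a unit cost portfolio (assumed)
hdg = [1.0, -1.0]                  # a hedge direction (weights sum to 0)

# zp = p + s*hdg chosen so that Cov[r̃zp, r̃p] = 0; zp is still unit cost.
s = -cov(p, p, V) / cov(hdg, p, V)
zp = [p[i] + s * hdg[i] for i in range(2)]
assert abs(cov(zp, p, V)) < 1e-9
assert abs(sum(zp) - 1.0) < 1e-9

q = [0.3, 0.7]                     # an arbitrary unit cost portfolio (assumed)
beta_qp = cov(q, p, V) / cov(p, p, V)

lhs = dot(q, e) - dot(zp, e)
rhs = beta_qp * (dot(p, e) - dot(zp, e))
assert abs(lhs - rhs) < 1e-9
```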
Further, if p is any portfolio, then the variance of the return on the combination ap + (1 − a)MVP is minimised over a at a = 0, and the first order condition for that minimisation, evaluated at a = 0, reduces to:

Cov[r̃p, r̃MVP] − Var[r̃MVP] = 0.   (6.4.31)-(6.4.33)

Hence the covariance of any portfolio with the MVP is 1/C.

The mean, µ, and variance, σ², of the rate of return associated with each point on the frontier are related by the quadratic equation:

σ² − Var[wMVP′r̃] = φ(µ − E[wMVP′r̃])²,   (6.4.34)

where the shape parameter φ = C/D represents the variance of the (gross) return on the hedge portfolio h. The two-moment frontier is generally presented as the graph in mean-variance space of this parabola, showing the most desirable distributions attainable, but the frontier can also be thought of as a plane in portfolio space or as a line in portfolio weight space. The latter interpretations are far more useful when it comes to extending the analysis to higher moments.

The equations of the frontier in mean-variance and mean-standard deviation space can be derived heuristically using the following stylized diagram illustrating the portfolio decomposition. Figure 3A goes here. Applying Pythagoras' theorem to the triangle with vertices at 0, the MVP itself and p:

σ² = Var[r̃MVP] + (µ − A/C)² Var[r̃h].   (6.4.35)
To see this, recall that:

σ² = Var[r̃MVP] + (µ − A/C)² Var[r̃h]   (6.4.36)-(6.4.37)

is a quadratic equation in µ. Thus in mean-variance space, the frontier is a parabola, with vertex at

Var[r̃p] = Var[r̃MVP] = 1/C,  µ = E[r̃MVP] = A/C.   (6.4.38)-(6.4.39)

Similarly, in mean-standard deviation space, the frontier is a hyperbola:

σ² − (µ − A/C)² Var[r̃h] = 1/C   (6.4.40)

is the equation of the hyperbola with vertex at σ = √(1/C), µ = A/C, centre at σ = 0, µ = A/C, and asymptotes as indicated. The other half of the hyperbola (σ < 0) has no economic meaning.

Var[r̃MVP] = 0 (the presence of a riskless asset) allows the square root to be taken on both sides:

σ = ±(µ − A/C)√Var[r̃h],   (6.4.41)-(6.4.43)

i.e. the conic section becomes the pair of lines which are its asymptotes. Recall two other types of conic sections: Var[r̃h] < 0 (impossible) would give a circle, centred on the µ-axis at µ = A/C.

Figure 3.11.1 goes here: indicate position of g on figure.

Figure 3.11.2 goes here: indicate position of g on figure.
6.4.3 The portfolio frontier in RN: riskfree and risky assets

We now consider the mathematics of the portfolio frontier when there is a riskfree asset. In this case, the frontier portfolio solves:

min_w ½ w′Vw   (6.4.67)
s.t. w′e + (1 − w′1)rf = µ.   (6.4.68)

There is no longer a restriction that the portfolio weights sum to one: whatever is not invested in the N risky assets is assumed to be invested in the riskless asset. In this case, we have:

σ = (µ − rf)/√H   if µ ≥ rf,
σ = −(µ − rf)/√H   if µ < rf.   (6.4.69)-(6.4.71)

6.4.4 The portfolio frontier in mean-variance space: riskfree and risky assets

We can now establish the shape of the mean-standard deviation frontier with a riskless asset. Graphically, for each σ the highest return attainable is along the ray from rf which is tangent to the frontier generated by the risky assets; i.e. these portfolios trace out the ray in σ-µ space emanating from (0, rf) and passing through p.
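A numerical sketch of this frontier follows. The constant H is left implicit in the text above; the code uses the standard expression H = (e − rf·1)′V⁻¹(e − rf·1) = B − 2A·rf + C·rf², which should be checked against the full derivation. All input numbers are illustrative assumptions.

```python
# Sketch of the frontier with a riskless asset: risky weights proportional
# to V⁻¹(e − rf·1), and σ = |µ − rf|/√H. All inputs are made-up numbers.
import math

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

V = [[0.04, 0.01], [0.01, 0.09]]   # covariance matrix (assumed)
e = [1.08, 1.12]                   # expected gross returns (assumed)
rf = 1.02                          # gross riskless return (assumed)

det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[ V[1][1] / det, -V[0][1] / det],
        [-V[1][0] / det,  V[0][0] / det]]

excess = [e[i] - rf for i in range(2)]
Vinv_ex = mat_vec(Vinv, excess)
H = dot(excess, Vinv_ex)           # standard: H = B − 2A·rf + C·rf²

mu = 1.10                          # target expected return (assumed)
w = [(mu - rf) * x / H for x in Vinv_ex]   # risky weights; rest riskless

# Expected return and standard deviation of the chosen frontier portfolio.
exp_ret = rf + dot(w, excess)
sigma = math.sqrt(dot(w, mat_vec(V, w)))
assert abs(exp_ret - mu) < 1e-12
assert abs(sigma - (mu - rf) / math.sqrt(H)) < 1e-12

# Tangency portfolio: the unit cost normalisation of V⁻¹(e − rf·1).
t = [x / sum(Vinv_ex) for x in Vinv_ex]
assert abs(sum(t) - 1.0) < 1e-12
```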
On this ray, lower expected returns are achieved by riskless lending: the riskless asset is held in combination with the tangency portfolio t. Above t, there is a negative weight on the riskless asset (i.e. borrowing), and higher expected returns are achieved by riskless borrowing, extending as far as the borrowing constraint allows. This only makes sense for rf < A/C = E[r̃mvp]. Figure 3D goes here.

Limited borrowing

Unlimited borrowing as allowed in the preceding analysis is unrealistic. There is a range of expected returns over which a pure risky strategy provides minimum variance. The frontier is the envelope of all the finite rays through risky portfolios. Consider what happens:

1. with margin constraints on borrowing: Figure 3E goes here;

2. with differential borrowing and lending rates: Figure 3F goes here.

6.5 Market Equilibrium and the Capital Asset Pricing Model

6.5.1 Pricing assets and predicting security returns

Need more waffle here about prediction and the difficulties thereof and the properties of equilibrium prices and returns. We are looking for assumptions concerning probability distributions that lead to useful and parsimonious asset pricing models. The CAPM restrictions are the best known. At a very basic level, they can be expressed by saying that every investor has mean-variance preferences. This can be achieved either by restricting preferences to be quadratic or by restricting the probability distribution of asset returns to be normal. ? and ? have generalised the distributional conditions. CAPM is basically a single-period model, but can be extended by assuming that return distributions are stable over time.

Recall also the limiting behaviour of the variance of the return on an equally weighted portfolio as the number of securities included goes to infinity. If securities are added in such a way that the average of the variance terms and the average of the covariance terms are stable, then the portfolio variance approaches the average covariance as a lower bound.

6.5.3 The zero-beta CAPM

Theorem 6.5.1 (Zero-beta CAPM theorem) If every investor holds a mean-variance frontier portfolio, then the market portfolio, m, is a mean-variance frontier portfolio, and hence the CAPM equation

E[r̃q] = (1 − βqm) E[r̃zm] + βqm E[r̃m]   (6.5.1)

holds ∀q.

In equilibrium, the relation

Σi wij W0i = mj Wm0   ∀j   (6.5.2)

must hold. Dividing by Wm0 yields:

Σi (W0i/Wm0) wij = mj   ∀j,   (6.5.3)-(6.5.4)

and thus in equilibrium the market portfolio is a convex combination of individual portfolios.
Theorem 6.5.2 All strictly risk-averse investors hold frontier portfolios if and only if

E[r̃uq | r̃fq] = 0   ∀q.   (6.5.5)

Note the subtle distinction between uncorrelated returns (in the definition of the decomposition) and independent returns (in this theorem). They are the same only for the normal distribution and related distributions.

Under these conditions individuals hold frontier portfolios. Since the market portfolio is then on the frontier, it follows that:

E[r̃q] = (1 − βqm) E[r̃zm] + βqm E[r̃m],   (6.5.6)

where

r̃m = Σ(j=1..N) mj r̃j   (6.5.7)

and

βqm = Cov[r̃q, r̃m] / Var[r̃m].   (6.5.8)-(6.5.9)

This implies, for any particular security, from the economic assumptions of equilibrium and two fund separation:

E[r̃j] = (1 − βjm) E[r̃zm] + βjm E[r̃m].   (6.5.10)

This relation is the ? Zero-Beta version of the Capital Asset Pricing Model (CAPM).

6.5.4 The traditional CAPM

Now we add the risk free asset, which will allow us to determine the tangency portfolio, t. Note that by construction rf = E[r̃zt]. Normally in equilibrium there is zero aggregate supply of the riskfree asset. We can view the market portfolio as a frontier portfolio under two fund separation, and talk about the Capital Market Line (return v. standard deviation) and the Security Market Line (return v. β). Recommended reading for this part of the course is ?, ?, ? and ?.
The riskless rate is unique by the No Arbitrage Principle, since otherwise a greedy investor would borrow an infinite amount at the lower rate and invest it at the higher rate, which is impossible in equilibrium. Note that the No Arbitrage Principle also allows us to rule out correlation matrices for risky assets which permit the construction of portfolios with zero return variance, i.e. synthetic riskless assets. If all individuals face this situation in equilibrium, realism demands both that riskless assets are in zero aggregate supply and hence that all investors hold risky assets only.

Assume that a riskless asset exists, with return rf < E[r̃mvp].

Theorem 6.5.3 (Separation Theorem) The risky asset holdings of all investors who hold mean-variance frontier portfolios are in the proportions given by the tangency portfolio, t.

If the distributional conditions for two fund separation are satisfied, then the tangency portfolio, t, must be the market portfolio of risky assets in equilibrium.

Theorem 6.5.4 (Traditional CAPM Theorem) If every investor holds a mean-variance frontier portfolio, then the market portfolio of risky assets, m, is the tangency portfolio, t, and hence, ∀q, the traditional CAPM equation

E[r̃q] = (1 − βqm) rf + βqm E[r̃m]   (6.5.12)

holds.

Theorem 6.5.4 is sometimes known as the Sharpe-Lintner Theorem. We know then that for any portfolio q (with or without a riskless component):

E[r̃q] − rf = βqm (E[r̃m] − rf).   (6.5.13)

This is the traditional Sharpe-Lintner version of the CAPM. We can also think about what happens to the CAPM if there are different riskless borrowing and lending rates (see ?). Figure 4A goes here.

The next theorem relates to the mean-variance efficiency of the market portfolio.

Theorem 6.5.5 If

1. the distributional conditions for two fund separation are satisfied;
2. risky assets are in strictly positive supply; and
3. investors have strictly increasing (concave) utility functions,

then the market/tangency portfolio is efficient.
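The Sharpe-Lintner equation can be illustrated numerically: if expected returns are constructed to satisfy the security market line, then every portfolio q automatically satisfies the same relation. All inputs below are made-up illustrative numbers.

```python
# Illustration of E[r̃q] = rf + βqm(E[r̃m] − rf): expected returns are built
# from betas, then an arbitrary portfolio q is checked against the same
# relation. All inputs are made-up illustrative numbers.

V = [[0.04, 0.01], [0.01, 0.09]]   # covariance matrix of two risky assets
m = [0.6, 0.4]                     # market portfolio weights (assumed)
rf = 1.02                          # gross riskless return (assumed)
prem = 0.06                        # market risk premium E[r̃m] − rf (assumed)

def cov(w1, w2):
    return sum(w1[i] * V[i][j] * w2[j] for i in range(2) for j in range(2))

var_m = cov(m, m)
# Security market line: E[r̃j] = rf + βjm·(E[r̃m] − rf).
beta = [cov([1.0, 0.0], m) / var_m, cov([0.0, 1.0], m) / var_m]
e = [rf + b * prem for b in beta]

# The market itself is priced consistently, since Σj mj·βjm = 1.
e_m = sum(m[j] * e[j] for j in range(2))
assert abs(e_m - (rf + prem)) < 1e-12

# Any portfolio q then satisfies the same CAPM equation.
q = [0.25, 0.75]
beta_qm = cov(q, m) / var_m
e_q = sum(q[j] * e[j] for j in range(2))
assert abs(e_q - (rf + beta_qm * prem)) < 1e-12
```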
Proof By Jensen's inequality and monotonicity, for any portfolio with E[r̃] < rf:

E[u(W0(1 + r̃))] ≤ u(E[W0(1 + r̃)]) < u(W0(1 + rf)),   (6.5.14)-(6.5.16)

i.e. the riskless asset dominates any portfolio with E[r̃] < rf. Hence the expected returns on all individuals' portfolios exceed rf. It follows that the expected return on the market portfolio must exceed rf.

Q.E.D.

Now we can calculate the risk premium of the market portfolio, which must adjust in equilibrium to give market-clearing. In some situations, the risk premium on the market portfolio can be written in terms of investors' utility functions. CAPM gives a relation between the risk premia on individual assets and the risk premium on the market portfolio. Assume there is a riskless asset and returns are multivariate normal (MVN). Recall the first order conditions for the canonical portfolio choice problem:

0 = E[ui′(W̃i)(r̃j − rf)]   ∀i, j   (6.5.17)
  = E[ui′(W̃i)]E[r̃j − rf] + Cov[ui′(W̃i), r̃j]   (6.5.18)
  = E[ui′(W̃i)]E[r̃j − rf] + E[ui″(W̃i)]Cov[W̃i, r̃j],   (6.5.19)

using the definition of covariance and Stein's lemma for MVN distributions. Rearranging:

E[r̃j − rf] / θi = Cov[W̃i, r̃j],   (6.5.20)

where

θi ≡ −E[ui″(W̃i)] / E[ui′(W̃i)]   (6.5.21)

is the i-th investor's global absolute risk aversion. Since

W̃i = W0i (1 + rf + Σk wik(r̃k − rf)),   (6.5.22)

we have, dropping non-stochastic terms:

Cov[W̃i, r̃j] = Cov[W0i Σk wik r̃k, r̃j].   (6.5.23)
Hence, in equilibrium,

θi⁻¹ E[r̃j − rf] = Cov[W0i Σk wik r̃k, r̃j].   (6.5.24)

Summing over i:

(Σi θi⁻¹) E[r̃j − rf] = Wm0 Cov[r̃m, r̃j]   (6.5.25)-(6.5.26)

or

E[r̃j − rf] = (Σi θi⁻¹)⁻¹ Wm0 Cov[r̃m, r̃j],   (6.5.27)

i.e., in equilibrium, the risk premium on the j-th asset is the product of the aggregate relative risk aversion of the economy and the covariance between the return on the j-th asset and the return on the market. Now take the average over j, weighted by the market portfolio weights:

E[r̃m − rf] = (Σi θi⁻¹)⁻¹ Wm0 Var[r̃m],   (6.5.28)

i.e., in equilibrium, the risk premium on the market is the product of the aggregate relative risk aversion of the economy and the variance of the return on the market. Equivalently, the return to variability of the market equals the aggregate relative risk aversion.

We conclude with some examples.

1. Negative exponential utility:

ui(z) = −(1/ai) exp{−ai z},   ai > 0,

implies:

(Σi θi⁻¹)⁻¹ = (Σi ai⁻¹)⁻¹ > 0   (6.5.29)-(6.5.30)

and hence the market portfolio is efficient.

2. Quadratic utility:

ui(z) = ai z − (bi/2) z²,   ai, bi > 0,

implies:

Σi θi⁻¹ = Σi (ai/bi − E[W̃i]).   (6.5.31)

This result can also be derived without assuming MVN and using Stein's lemma.
Chapter 7

INVESTMENT ANALYSIS

7.1 Introduction

[To be written.]

7.2 Arbitrage and the Pricing of Derivative Securities

7.2.1 The binomial option pricing model

This still has to be typed up. It follows very naturally from the stuff in Section 5.4.

7.2.2 The Black-Scholes option pricing model

Fischer Black died in 1995. In 1997, Myron Scholes and Robert Merton were awarded the Nobel Prize in Economics 'for a new method to determine the value of derivatives.' See http://www.nobel.se/announcement-97/economy97.html

Black and Scholes considered a world in which there are three assets: a stock, whose price, S̃t, follows the stochastic differential equation

dS̃t = µS̃t dt + σS̃t dz̃t,

where {z̃t}t=0..T is a Brownian motion process; a bond, whose price, Bt, follows the differential equation

dBt = rBt dt;

and a call option on the stock with strike price X and maturity date T.
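Since the binomial model of Section 7.2.1 is not yet written out above, the following is only a standard sketch of risk-neutral binomial pricing for a European call, with made-up parameters; it is not taken from the text.

```python
# Standard sketch of the n-step binomial pricing recursion for a European
# call. All parameter values are made-up illustrative assumptions.

def binomial_call(S, X, r, u, d, n):
    """Price a European call on an n-step binomial tree.

    S: initial stock price; X: strike; r: gross one-period riskless return;
    u, d: gross up/down moves per period, with d < r < u (no arbitrage).
    """
    assert d < r < u, "d < r < u is required to rule out arbitrage"
    q = (r - d) / (u - d)          # risk-neutral up probability
    # Terminal payoffs, indexed by the number j of up moves, then backward
    # induction by discounted risk-neutral expectation.
    values = [max(S * u**j * d**(n - j) - X, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [(q * values[j + 1] + (1 - q) * values[j]) / r
                  for j in range(len(values) - 1)]
    return values[0]

price = binomial_call(S=100.0, X=100.0, r=1.01, u=1.1, d=0.95, n=10)
assert 0.0 < price < 100.0                        # bounded by 0 and S
assert price > 100.0 - 100.0 / 1.01**10           # above S − PV(X)
```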
They showed how to construct an instantaneously riskless portfolio of stocks and options, and hence, assuming that the principle of no arbitrage holds, derived the Black-Scholes partial differential equation which must be satisfied by the option price.

Let the price of the call at time t be C̃t. The option pays (S̃T − X)+ ≡ max{S̃T − X, 0} at maturity. Guess that C̃t = C(S̃t, t), and let τ = T − t be the time to maturity. Then we claim that the solution to the Black-Scholes equation is:

C(S, t) = S N(d(S, τ)) − X e^(−rτ) N(d(S, τ) − σ√τ),   (7.2.1)

where N(·) is the cumulative distribution function of the standard normal distribution and

d(S, τ) = (ln(S/X) + (r + ½σ²)τ) / (σ√τ).   (7.2.2)

Note first that

N(z) ≡ ∫(−∞ to z) (1/√(2π)) e^(−t²/2) dt

and hence, by the fundamental theorem of calculus,

N′(z) ≡ (1/√(2π)) e^(−z²/2),
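Before verifying the partial differential equation analytically, the claimed formula can be checked numerically. The inputs below are illustrative assumptions; the normal cdf is written via the error function.

```python
# Numerical sketch of the claimed Black-Scholes call price. The inputs
# S, X, r, sigma, tau are made-up illustrative numbers.
import math

def norm_cdf(z):
    """Standard normal cdf, N(z) = ½(1 + erf(z/√2))."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(S, X, r, sigma, tau):
    """C(S, t) = S·N(d) − X·e^(−rτ)·N(d − σ√τ), per (7.2.1)-(7.2.2)."""
    d = (math.log(S / X) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return (S * norm_cdf(d)
            - X * math.exp(-r * tau) * norm_cdf(d - sigma * math.sqrt(tau)))

S, X, r, sigma, tau = 100.0, 100.0, 0.05, 0.2, 0.5

C = bs_call(S, X, r, sigma, tau)
# Static no-arbitrage bounds: (S − X·e^(−rτ))⁺ < C < S.
assert max(S - X * math.exp(-r * tau), 0.0) < C < S
# As τ → 0 the price approaches the payoff (S − X)⁺.
assert abs(bs_call(110.0, 100.0, r, sigma, 1e-9) - 10.0) < 1e-6
```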
τ ) − σ τ 1 (7. we have: ∂C ∂t = SN (d (S.3) (7.2. τ )) + N (d (S. τ )) .2. τ )) . τ ) − σ τ ∂S = N (d (S.2. τ ) = SN (d (S. τ ) = N (d (S. τ ) − Xe−rτ × ∂t √ σ ∂d (S.2. τ )) X S rτ e N (d (S.8) (7.2.2. = ∂S Sσ τ √ d (S.2.5) (7.2.CHAPTER 7.2. τ ) with respect to S and t.9) √ σ = −SN (d (S. τ ) 1 √ . τ )) ∂2C ∂S 2 ∂d (S.τ )σ = e− 2 σ = 1 2τ 2 √ τ N (d (S. 1998 . τ ) ∂d (S. ∂S ∂C 1 ∂ 2 C 2 2 ∂C + σ S + rS − rC ∂t 2 ∂S 2 ∂S (7. τ )) (7. τ )) N ∂d (S.4) Note also that N = e− 2 σ τ ed(S. τ )) √ − Xe−rτ rN d (S.2. τ ) − σ τ 2 τ ∂C ∂S ∂d (S. τ ) d (S. For the last step in this proof.6) (7.11) (7. we will need the partials of d (S. X Using these facts and the chain rule. INVESTMENT ANALYSIS 139 which of course is the corresponding probability density function.10) (7.7) S (r+ 1 σ2 )τ e 2 N (d (S. which are: r + 1 σ2 ∂d (S. τ )) ∂S √ ∂d (S. τ ) − σ τ (7.12) Substituting these expressions in the original partial differential equation yields: Revised: December 2. τ ) − σ τ − √ ∂t 2 τ √ + rN d (S. τ ) −Xe−rτ N d (S. τ ) 2 √ =− = − ∂t ∂τ 2σ τ and ∂d (S.
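To make the claim concrete, here is a sketch that evaluates the candidate solution and checks numerically — via finite differences — that it satisfies the PDE, and that the price collapses to the payoff (S − X)⁺ as τ → 0. The parameter values are illustrative assumptions, not from the text:

```python
from math import erf, exp, log, sqrt

def N(z):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def call(S, X, r, sigma, tau):
    """Candidate Black-Scholes call price C(S, tau)."""
    d = (log(S / X) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return S * N(d) - X * exp(-r * tau) * N(d - sigma * sqrt(tau))

S, X, r, sigma, tau = 100.0, 95.0, 0.05, 0.2, 0.5  # hypothetical parameters
C = call(S, X, r, sigma, tau)

# Finite-difference check of the PDE
#   C_t + (1/2) sigma^2 S^2 C_SS + r S C_S - r C = 0   (note t = T - tau).
h = 1e-3
C_S = (call(S + h, X, r, sigma, tau) - call(S - h, X, r, sigma, tau)) / (2 * h)
C_SS = (call(S + h, X, r, sigma, tau) - 2 * C + call(S - h, X, r, sigma, tau)) / h**2
C_t = -(call(S, X, r, sigma, tau + h) - call(S, X, r, sigma, tau - h)) / (2 * h)
residual = C_t + 0.5 * sigma**2 * S**2 * C_SS + r * S * C_S - r * C

# Terminal behaviour: as tau -> 0 the price approaches max(S - X, 0).
deep_in = call(110.0, X, r, sigma, 1e-9)   # ~ 110 - 95 = 15
deep_out = call(80.0, X, r, sigma, 1e-9)   # ~ 0
print(C, residual, deep_in, deep_out)
```

The PDE residual is tiny (limited only by floating-point differencing error), and the τ → 0 prices reproduce the option payoff (S − X)⁺.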
    N(d(S, τ)) (rS − rS) + N′(d(S, τ)) (−Sσ/(2√τ) + σ²S²/(2Sσ√τ))
        + N(d(S, τ) − σ√τ) (−X e^(−rτ) r + r X e^(−rτ))
    = 0,

so C(S, τ) as defined above does satisfy the Black-Scholes equation.

The boundary condition should also be checked. As τ → 0, d(S, τ) → ±∞ according as S > X or S < X. In the former case, C(S, T) = S − X, and in the latter case, C(S, T) = 0, so the boundary condition is indeed satisfied.

7.3 Multi-period Investment Problems

In Section 4, it was pointed out that the objects of choice can be differentiated not only by their physical characteristics, but also both by the time at which they are consumed and by the state of nature in which they are consumed. These distinctions were suppressed in the intervening sections but are considered again in this section and in Section 5.4. The multi-period model should probably be introduced at the end of Chapter 4 but could also be left until Chapter 7. For the moment this brief introduction is duplicated in both chapters.

Discrete time multi-period investment problems serve as a stepping stone from the single period case to the continuous time case. The main point to be gotten across is the derivation of interest rates from equilibrium prices: spot rates, forward rates, term structure, etc. This is covered in one of the problems, which illustrates the link between prices and interest rates in a multiperiod model.

7.4 Continuous Time Investment Problems

?
Could Not Find or Load Main Class Error in Java
Errors and exceptions are very common while working with Java or any other programming language. The could not find or load main class error is very common and can occur frequently. This error simply means that the JVM is unable to find the main class. Let's look at the reasons why this error occurs and try to solve this error.
Here, we have the following code in a java file called ErrorDemo.java.
public class ErrorDemo { public static void main(String[] args) { System.out.println("Error Fixed"); } }
We can compile this code by using the javac command and provide the java file name.
The above command generates a .class file that has the same name as the class with the main method.
We can run the code and view the output by using the java command. We can see that everything is working fine, and we get the expected output.
Error When Incorrect Class File Name
Let's deliberately mess things up to view the could not find or load main class error. This error can occur if we try to run a java program but do not pass the correct .class file name.
Another thing to remember is that we use the javac command to compile a java file, and we need to add the .java extension to the file name when using it. But when running the .class file, do not pass the .class extension with the class name. This will also return the same error.
Error When Incorrect Directory
Note that we must run the java command for the .class file in the correct folder. For example, if we navigate to the desktop(using cd .. command), then again, we will get this same error.
Error Incorrect Package Names
A package in Java is a set of similar classes or interfaces that are grouped together for easy access. When trying to run a .class file that is present in a package, we need to use the package name along with the class name.
Let's alter our program so that the class is included in a package.
package p; public class ErrorDemo { public static void main(String[] args) { System.out.println("Error Fixed"); } }
Now, when we run the javac command with the -d and a dot(.) then the .class file will be created in the package p.
Let's try to run this class file as we did in the above examples. But we get an error now.
To rectify it, we also need to pass the package name while running the file.
Remember that we are running the above command from the parent directory and not from the package directory p.
Classpaths
As seen above, this error can also occur if we are running the .class file from a directory that does not contain this file. This is because, by default, the JVM will search for the class file in our current directory. A classpath informs the JVM to look for the file in a particular folder. Defining a classpath helps us to run a java class file from some other directory. Use the -classpath or -cp option with the java command to pass the classpath when running the .class file.
For example, if our .class file is present in a folder called errors and we are trying to run the code from the desktop, then we will get the following error.
We can use the classpath to rectify this error.
Frequently Asked Questions
How do we run a file in Java?
As shown at the beginning of this article, a Java file is first compiled by using the javac command, and this generates a .class file. Next, we can run this .class file by using the java command. Read more.
How is path different from classpath?
The path is the location where we can find the executable file with an extension like .exe or .jar, and classpath is the location where we can find the .class files. By default, the classpath is set to the current directory where we are working.
What is ClassNotFoundException?
The ClassNotFoundException occurs when the JVM cannot find the class that we are trying to run in the mentioned classpath.
What to do when we get the following error after running the javac command: javac is not recognized as an internal and external command?
This error occurs because we have not set the PATH environment variable. We can set it temporarily by using the following command. Pass the path of the bin folder of your JDK.
C:\Users\user1>set path=C:\Program Files\Java\jdk1.8.0_144\bin
Read more about the Java environment setup.
Summary
The could not find or load main class error is very common in Java. Most of the time, this occurs because we are writing the class name incorrectly or we are adding unnecessary .class extensions while running. If the class is part of a package, then we must provide the fully qualified class name when running the code to avoid errors. | https://www.studytonight.com/java-examples/could-not-find-or-load-main-class-error-in-java | CC-MAIN-2022-05 | refinedweb | 847 | 75.3 |
While developing a Silverlight WP7 app, it’s often handy to display demo content in design view so you have an idea of how real content will look. This is one of the biggest advantages of the MVVM architecture – separate data for design and runtime, as demonstrated below:
Fortunately Alex Pendleton has created a Lorem Ipsum generator in .NET 2.0 called NLipsum. Unfortunately it doesn’t work directly on WP7 without a little massaging. Fear not, for I have done the work for you! You can download the project files here, or the binary here.
The dll contains some raw XML files for generating the lipsums, and these are loaded at runtime. For this reason it’s best not to have the binary included with release builds of your app, lest you get a slower startup and increased memory usage. I’ve put in some caching that should mean you can call the generator as much as you like without worrying too much about performance. To use the generator, include it into your project and import the namespace:
using NLipsum.Core;
Then, it’s a simple case of calling the generator to do your bidding:
return LipsumGenerator.Generate(1, Features.Paragraphs, null, Lipsums.LoremIpsum); return LipsumGenerator.Generate(1, Features.Sentences, null, Lipsums.TheRaven); return LipsumGenerator.Generate(2, Features.Words, null, Lipsums.LeMasque);
If the content doesn’t show in design view, try re-building your app. This will refresh your design-time databinding as well as bringing in any changes you’ve made to the model (such as adding the lorem ipsum).
Thanks a lot for using and porting NLipsum. I hope you found it useful. Let me know if you have any feedback. | http://dan.clarke.name/2011/05/nlipsum-for-windows-phone-7-auto-generate-lorem-ipsum-for-wp7/ | CC-MAIN-2017-22 | refinedweb | 285 | 58.79 |
prelude-compat
Provide Prelude and Data.List with fixed content across GHC versions
See all snapshots
prelude-compat appears in
prelude-compat-0.0.0.1@sha256:1c70766125e79600542e58597b92322abf5a48609933784bbee5adcd3a4f1cc5,3258
Module documentation for 0.0.0.1
This package allows you to write warning-free code
that compiles with versions of
base before and after AMP and FTP,
that is,
base before and beginning with 4.8, respectively,
and GHC before and beginning with 7.10, respectively.
It serves three purposes:
Prevent you from name clashes of FTP-Prelude with locally defined functions having names like
<*>,
join,
foldMap.
Prevent you from redundant import warnings if you manually import
Data.Monoidor
Control.Applicative.
Fix list functions to the list type, contrarily to the aim of the FTP. This way you are saved from
length (2,1) == 1and
maximum (2,1) == 1, until you import
Data.Foldable.
You should add
import Prelude2010 import Prelude ()
to your modules.
This way, you must change all affected modules.
If you want to avoid this you may try the
prelude2010 package
or if you already import Prelude explicitly, you may try to add
Default-Extensions: CPP, NoImplicitPrelude CPP-Options: -DPrelude=Prelude2010
to your Cabal file.
In my opinion, this is the wrong way round.
The presented Prelude2010 module should have been the one for GHC-7.10
and the Prelude with added and generalized list functions
should have been an additionally PreludeFTP,
preferably exported by a separate package
like all other alternate Prelude projects.
But the purpose of the FTP was to save some import statements
at the expense of blowing up the
Foldable class
and prevent simple ways to write code that works
with GHC version before and starting with GHC-7.10
and that does not provoke warnings.
Related packages:
'base-compat': The opposite approach - Make future function definitions available in older GHC versions.
haskell2010: Defines the Prelude for Haskell 2010. Unfortunately,
haskell2010is not available anymore since GHC-7.10, because of the AMP.
'numeric-prelude': It is intended to provide a refined numeric class hierarchy but it also provides a non-numeric subset of the Prelude that is more stable than the one of
base. | https://www.stackage.org/lts-7.24/package/prelude-compat-0.0.0.1 | CC-MAIN-2021-25 | refinedweb | 361 | 56.45 |
Hi Michael, could you please give some feedback? On Monday, April 17, 2017 11:35 AM, Wei Wang wrote: > On 04/15/2017 05:38 AM, Michael S. Tsirkin wrote: > > On Fri, Apr 14, 2017 at 04:37:52PM +0800, Wei Wang wrote: > >> On 04/14/2017 12:34 AM, Michael S. Tsirkin wrote: > >>> On Thu, Apr 13, 2017 at 05:35:05PM +0800, Wei Wang wrote: > >>> > >>> So we don't need the bitmap to talk to host, it is just a data > >>> structure we chose to maintain lists of pages, right? > >> Right. bitmap is the way to gather pages to chunk. > >> It's only needed in the balloon page case. > >> For the unused page case, we don't need it, since the free page > >> blocks are already chunks. > >> > >>> OK as far as it goes but you need much better isolation for it. > >>> Build a data structure with APIs such as _init, _cleanup, _add, > >>> _clear, _find_first, _find_next. > >>> Completely unrelated to pages, it just maintains bits. > >>> Then use it here. > >>> > >>> > >>>> static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES; > >>>> module_param(oom_pages, int, S_IRUSR | S_IWUSR); > >>>> MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); @@ -50,6 > >>>> +54,10 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); > >>>> static struct vfsmount *balloon_mnt; > >>>> #endif > >>>> +/* Types of pages to chunk */ > >>>> +#define PAGE_CHUNK_TYPE_BALLOON 0 > >>>> + > >>> Doesn't look like you are ever adding more types in this patchset. > >>> Pls keep code simple, generalize it later. > >>> > >> "#define PAGE_CHUNK_TYPE_UNUSED 1" is added in another patch. > > I would say add the extra code there too. Or maybe we can avoid adding > > it altogether. > > I'm trying to have the two features( i.e. "balloon pages" and "unused pages") > decoupled while trying to use common functions to deal with the commonalities. > That's the reason to define the above macro. 
> Without the macro, we will need to have separate functions, for example, > instead of one "add_one_chunk()", we need to have > add_one_balloon_page_chunk() and add_one_unused_page_chunk(), and some > of the implementations will be kind of duplicate in the two functions. > Probably we can add it when the second feature comes to the code. > > > > >> Types of page to chunk are treated differently. Different types of > >> page chunks are sent to the host via different protocols. > >> > >> 1) PAGE_CHUNK_TYPE_BALLOON: Ballooned (i.e. inflated/deflated) pages > >> to chunk. For the ballooned type, it uses the basic chunk msg format: > >> > >> virtio_balloon_page_chunk_hdr + > >> virtio_balloon_page_chunk * MAX_PAGE_CHUNKS > >> > >> 2) PAGE_CHUNK_TYPE_UNUSED: unused pages to chunk. It uses this miscq > >> msg > >> format: > >> miscq_hdr + > >> virtio_balloon_page_chunk_hdr + > >> virtio_balloon_page_chunk * MAX_PAGE_CHUNKS > >> > >> The chunk msg is actually the payload of the miscq msg. > >> > >> > > So just combine the two message formats and then it'll all be easier? > > > > Yes, it'll be simple with only one msg format. But the problem I see here is > that > miscq hdr is something necessary for the "unused page" > usage, but not needed by the "balloon page" usage. To be more precise, struct > virtio_balloon_miscq_hdr { > __le16 cmd; > __le16 flags; > }; > 'cmd' specifies the command from the miscq (I envision that miscq will be > further used to handle other possible miscellaneous requests either from the > host or to the host), so 'cmd' is necessary for the miscq. But the inflateq is > exclusively used for inflating pages, so adding a command to it would be > redundant and look a little bewildered there. > 'flags': We currently use bit 0 of flags to indicate the completion ofa > command, > this is also useful in the "unused page" usage, and not needed by the "balloon > page" usage. > >>>> +#define MAX_PAGE_CHUNKS 4096 > >>> This is an order-4 allocation. 
I'd make it 4095 and then it's an > >>> order-3 one. > >> Sounds good, thanks. > >> I think it would be better to make it 4090. Leave some space for the > >> hdr as well. > > And miscq hdr. In fact just let compiler do the math - something like: > > (8 * PAGE_SIZE - sizeof(hdr)) / sizeof(chunk) > Agree, thanks. > > > > > I skimmed explanation of algorithms below but please make sure code > > speaks for itself and add comments inline to document it. > > Whenever you answered me inline this is where you want to try to make > > code clearer and add comments. > > > > Also, pls find ways to abstract the data structure so we don't need to > > deal with its internals all over the code. > > > > > > .... > > > >>>> { > >>>> struct scatterlist sg; > >>>> + struct virtio_balloon_page_chunk_hdr *hdr; > >>>> + void *buf; > >>>> unsigned int len; > >>>> - sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); > >>>> + switch (type) { > >>>> + case PAGE_CHUNK_TYPE_BALLOON: > >>>> + hdr = vb->balloon_page_chunk_hdr; > >>>> + len = 0; > >>>> + break; > >>>> + default: > >>>> + dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown > pages\n", > >>>> + __func__, type); > >>>> + return; > >>>> + } > >>>> - /* We should always be able to add one buffer to an empty > >>>> queue. */ > >>>> - virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); > >>>> - virtqueue_kick(vq); > >>>> + buf = (void *)hdr - len; > >>> Moving back to before the header? How can this make sense? > >>> It works fine since len is 0, so just buf = hdr. > >>> > >> For the unused page chunk case, it follows its own protocol: > >> miscq_hdr + payload(chunk msg). > >> "buf = (void *)hdr - len" moves the buf pointer to the miscq_hdr, > >> to send the entire miscq msg. > > Well just pass the correct pointer in. > > > OK. 
The miscq msg is > { > miscq_hdr; > chunk_msg; > } > > We can probably change the code like this: > > #define CHUNK_TO_MISCQ_MSG(chunk) (chunk - sizeof(struct > virtio_balloon_miscq_hdr)) > > switch (type) { > case PAGE_CHUNK_TYPE_BALLOON: > msg_buf = vb->balloon_page_chunk_hdr; > msg_len = sizeof(struct virtio_balloon_page_chunk_hdr) + > nr_chunks * sizeof(struct > virtio_balloon_page_chunk_entry); > break; > case PAGE_CHUNK_TYPE_UNUSED: > msg_buf = CHUNK_TO_MISCQ_MSG(vb->unused_page_chunk_hdr); > msg_len = sizeof(struct virtio_balloon_miscq_hdr) + > sizeof(struct > virtio_balloon_page_chunk_hdr) + > nr_chunks * sizeof(struct > virtio_balloon_page_chunk_entry); > break; > default: > dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n", > __func__, type); > return; > } > > > > >> Please check the patch for implementing the unused page chunk, it > >> will be clear. If necessary, I can put "buf = (void *)hdr - len" from > >> that patch. > > Exactly. And all this pointer math is very messy. Please look for ways > > to clean it. It's generally easy to fill structures: > > > > struct foo *foo = kmalloc(..., sizeof(*foo) + n * sizeof(foo->a[0])); > > for (i = 0; i < n; ++i) > > foo->a[i] = b; > > > > this is the kind of code that's easy to understand and it's obvious > > there are no overflows and no info leaks here. 
> > > OK, will take your suggestion: > > struct virtio_balloon_page_chunk { > struct virtio_balloon_page_chunk_hdr hdr; > struct virtio_balloon_page_chunk_entry entries[]; }; > > > >>>> + len += sizeof(struct virtio_balloon_page_chunk_hdr); > >>>> + len += hdr->chunks * sizeof(struct virtio_balloon_page_chunk); > >>>> + sg_init_table(&sg, 1); > >>>> + sg_set_buf(&sg, buf, len); > >>>> + if (!virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL)) { > >>>> + virtqueue_kick(vq); > >>>> + if (busy_wait) > >>>> + while (!virtqueue_get_buf(vq, &len) && > >>>> + !virtqueue_is_broken(vq)) > >>>> + cpu_relax(); > >>>> + else > >>>> + wait_event(vb->acked, virtqueue_get_buf(vq, > >>>> &len)); > >>>> + hdr->chunks = 0; > >>> Why zero it here after device used it? Better to zero before use. > >> hdr->chunks tells the host how many chunks are there in the payload. > >> After the device use it, it is ready to zero it. > > It's rather confusing. Try to pass # of chunks around in some other > > way. > > Not sure if this was explained clearly - we just let the chunk msg hdr > indicates > the # of chunks in the payload. I think this should be a pretty normal usage, > like > the network UDP hdr, which uses a length field to indicate the packet length. > > >>>> + } > >>>> +} > >>>> + > >>>> +static void add_one_chunk(struct virtio_balloon *vb, struct virtqueue > >>>> *vq, > >>>> + int type, u64 base, u64 size) > >>> what are the units here? Looks like it's in 4kbyte units? > >> what is the "unit" you referred to? > >> This is the function to add one chunk, base pfn and size of the chunk > >> are supplied to the function. > >> > > Are both size and base in bytes then? > > But you do not send them to host as is, you shift them for some reason > > before sending them to host. > > > Not in bytes actually. base is a base pfn, which is the starting address of > the > continuous pfns. Size is the chunk size, which is the number of continuous > pfns. 
> > They are shifted based on the chunk format we agreed before: > > -------------------------------------------------------- > | Base (52 bit) | Rsvd (12 bit) | > -------------------------------------------------------- > -------------------------------------------------------- > | Size (52 bit) | Rsvd (12 bit) | > -------------------------------------------------------- > > > Here, the pfn will be the balloon page pfn (4KB).In this way, the host doesn't > need to know PAGE_SIZE of the guest. > > > > >>>> + if (zero >= end) > >>>> + chunk_size = end - one; > >>>> + else > >>>> + chunk_size = zero - one; > >>>> + > >>>> + if (chunk_size) > >>>> + add_one_chunk(vb, vq, > PAGE_CHUNK_TYPE_BALLOON, > >>>> + pfn_start + one, > >>>> chunk_size); > >>> Still not so what does a bit refer to? page or 4kbytes? > >>> I think it should be a page. > >> A bit in the bitmap corresponds to a pfn of a balloon page(4KB). > > That's a waste on systems with large page sizes, and it does not look > > like you handle that case correctly. > > OK, I will change the bitmap to be PAGE_SIZE based here, instead of > BALLOON_PAGE_SIZE based. When convert them into chunks, making it based > on BALLOON_PAGE_SIZE. > > > Best, > Wei > > > --------------------------------------------------------------------- > To unsubscribe, e-mail: address@hidden > For additional commands, e-mail: address@hidden | https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg04975.html | CC-MAIN-2019-30 | refinedweb | 1,414 | 72.66 |
Welcome to part 2 of my C Video Tutorial. If you missed part 1 check it out first.
In this part of my C Tutorial I will cover: Compiling Options, Relational Operators, Logical operators, If, Else, Else If, Conditional Operator, Sizeof(), Bytes, Bits, While, Do While, For, Break, Continue and more…
All of the code follows the video below. It is heavily commented to help you learn. Feel free to leave questions below.
If you like videos like this it helps to tell Google+ with a click here
Code From the Video
#include <stdio.h> void main(){ // Use gcc ProgramName.c -o ProgramName to define the name // for your program instead of using a.out // Execute with ./ProgramName // There are many ways to compare data in c // >, <, ==, >=, <=, != // Only compare values with the same data type // To compare 2 unlike types perform a cast // A relational operator always evaluates to 1 for true, or 0 for false printf("\n"); int num1 = 1, num2 = 2; printf("Is 1 > 2 : %d\n\n",num1 > num2); // If is used to compare values and perform different actions // depending on those comparisons. // You can check multiple conditions with else if and // you can define a default with else // Once one condition is true the code in between the curly // brackets that follows is executed and then no other // condition that follows is checked. if(num1 > num2){ printf("%d is greater then %d\n\n", num1, num2); } else if(num1 < num2){ printf("%d is less then %d\n\n", num1, num2); } else { printf("%d is equal to %d\n\n", num1, num2); } // Logical operators are used to combine the above relational // operators. && - And, || - Or, ! - Not // Computers are Logical they only understand 1s and 0s // Relational operators check how values relate int custAge = 38; if(custAge > 21 && custAge < 35) printf("They are welcome\n\n"); else printf("They are not welcome\n\n"); // ! - Not turns a 1 to 0 and vice versa // Surround relations with parentheses when using not // This won't work !custAge > 21 printf("! turns a true into false : %d\n\n", !1); // Bob deserves a raise if he has missed less then 10 days work // and has over 30000 in sales or has signed up 30 new customers int bobMissedDays = 8, bobTotalSales = 24000, bobNewCust = 32; if(bobMissedDays < 10 && bobTotalSales > 30000 || bobNewCust > 30){ printf("Bob gets a raise\n\n"); } else { printf("Bob doesn't get a raise\n\n"); } // The Conditional Operator is great for replacing simple if statements // (comparison) ? 
happensIfTrue : happensIfFalse; // Don't worry about char* for now char* legalAge = (custAge > 21) ? "true" : "false"; printf("Is the customer of legal age? %s\n\n", legalAge); // You can change printf with a conditional operator directly int numOfProducts = 10; printf("I bought %s products\n\n", (numOfProducts > 1) ? "many" : "one"); // How much space are data types taking up? printf("A char takes up %d bytes\n\n", sizeof(char)); printf("An int takes up %d bytes\n\n", sizeof(int)); printf("A long int takes up %d bytes\n\n", sizeof(long int)); printf("A float takes up %d bytes\n\n", sizeof(float)); printf("A double takes up %d bytes\n\n", sizeof(double)); // What is a byte, bit, etc? // A Bit is short for Binary Digit and can be either a 1 or 0 // A Byte is generally considered to be 8 Bits int bigInt = 2147483648; printf("I'm bigger then you may have heard %d\n\n", bigInt); // Calculate the maximum value based on bits int numberHowBig = 0; printf("How Many Bits? "); scanf(" %d", &numberHowBig); printf("\n\n"); // 0 : Print what was given // 1 : Print what was given // 2 : 1 + 2 = 3 (Binary : 11) // 3 : 3 + 4 = 7 (Binary : 111) // 4 : 7 + 8 = 15 (Binary : 1111) // Initialize the incrementor before the while loop int myIncrementor = 1, myMultiplier = 1, finalValue = 1; while(myIncrementor < numberHowBig){ myMultiplier *= 2; finalValue = finalValue + myMultiplier; // Test to track and make sure I'm right printf("finalValue: %d myMultiplier: %d myIncrementor: %d\n\n", finalValue, myMultiplier, myIncrementor); // Don't forget to increment so the while loop ends // when the condition becomes false (Infinite Loop Otherwise) myIncrementor++; } // Handle if user enters 0 or 1 if ((numberHowBig == 0) || (numberHowBig == 1)){ printf("Top Value: %d\n\n", numberHowBig); } else { printf("Top Value: %d\n\n", finalValue); } int secretNumber = 10, numberGuessed = 0; // Infinite while loop while(1){ printf("Guess My Secret Number: "); scanf(" %d", &numberGuessed); if(numberGuessed 
== 10){ printf("You Got It"); // break is used to throw you the the first // line of code after the loop break; } } printf("\n\n"); // You use a Do While Loop when you need something done // at least once, but don't know the number of times you // may need to loop char sizeOfShirt; do { printf("What Size of Shirt (S,M,L): "); scanf(" %c", &sizeOfShirt); } while(sizeOfShirt != 'S' && sizeOfShirt != 'M' && sizeOfShirt != 'L'); // When you know up front exactly how many times you // need to loop then use a for loop // for(define incrementor; define condition; increment incrementor) for(int counter = 0; counter <= 20; counter++){ printf("%d ", counter); } // If you use the above code make sure you compile with // gcc -std=c99 CTutorial2.c -o CTutorial2 // Previous to C99 you had to initialize outside of the for // loop instead of using int counter = 0; // To use C99 though main must have a return type printf("\n\n"); // Print only odd numbers for(int counter = 0; counter <= 40; counter++){ // continue is used to skip this iteration of the loop // and instead continue with the next loop cycle if((counter % 2) == 0) continue; printf("%d ", counter); } }
Hello Derek,
Thank you very much for these video tutorials; they are turning up to a very comprehensive introduction to C. I had a suggestion that I hope you consider.
Instead of diverging off to C++ or Java; it might be a good idea to demonstrate the Go programming language as it has a cleaner but familiar syntax; has the speed and efficiency of C and has some good primitives for multi-core programming among other features.
Also, there are very few good introductions to Go; who else than you to start things off 🙂
Thanks again for everything
manoj
Hello Manoj,
Ill see what I can do about Go. I didn’t know anyone was interested in it. Ill be covering individual languages like c, c++, Ruby and I see about Go while I continue with Android.
Thanks for the request 🙂
Derek
Sure Derek; sounds good. Thank you again for all the great tutorials
best regards,
manoj
You’re very welcome 🙂
i want to say you are very clever i love derek
good luck
ahmed
Thank you very much Ahmed 🙂 I love you guys as well.
Hi Derek,
I really love your tutorials, thanks a lot for them.
I’d like to ask you to do an algorithm tutorial, stuff like quick-find, union, merge-sort and the like.
Many thanks, keep up the great job!
Hi, Thank you 🙂 I’m glad you enjoy them. I did an algorithm tutorial for Java and I’ll cover a bunch in this tutorial as well. Thank you for the request
thank you ((: nice expression…
was very helpful…
You’re very welcome 🙂 | http://www.newthinktank.com/2013/07/c-video-tutorial-2/ | CC-MAIN-2016-50 | refinedweb | 1,194 | 52.53 |
On Mar 24, 2006, at 16:48:47, Nix wrote:
> On 24 Mar 2006, Rob Landley suggested tentatively:
>> On Friday 24 March 2006 1:51 pm, Kyle Moffett wrote:
>>> [...]
>
> I concur. The purpose of this thing is by definition to provide
> libcs with the kernel/user interface stuff they need in order for
> userspace programs to be compiled. There's no point defining a new
> interface because there is a massive quantity of *existing* code
> out there that we must work with. (Plus, it can be, uh, difficult
> to get changes of this nature into glibc in particular, and glibc
> is the 300-pound gorilla in this particular room. If the headers
> don't have working with it as a goal, they are pointless.)

Hmm, I didn't really explain my idea very well. Let me start with a list of facts. If anybody disagrees with any part of this, please let me know.

1) The <linux/*.h> headers include a lot of information essential to compiling userspace applications and libraries (libcs in particular). That same information is also required while building the kernel (IE: The ABI).

2) Those headers have a lot of declarations and definitions which must *not* be present while compiling userspace applications, and are basically kernel-only stuff.

3) Glibc is an extremely large and complex 500-pound gorilla and contains an ugly build process and a lot of static definitions in its own header files that conflict with the definitions in the kernel headers.

4) UML runs into a lot of problems when glibc's headers and the native kernel headers conflict.

Here are some of my opinions about this:

1) Trying to create and maintain 2 separate versions of an ABI as large and complex as the kernel<=>userspace ABI across new versions and features would be extremely difficult and result in subtle bugs and missing features, even over a short period of time.

2) Ideally there should be three distinct pieces: the kernel, the ABI, and userspace. Compiling either the kernel or userspace requires the ABI, but the ABI depends only on the compiler.

3) Breaking any compatibility is bad.

4) Trying to continue to maintain the glibc custom-header-file status quo as more APIs and architectures get added to the kernel is going to become an increasingly difficult and tedious task.

My proposal (which I'm working on sample patches for) would be to divide up the kernel headers into 2 parts. The first part would be <kabi/*.h>, and the second would be all the traditional kernel-only headers. The kabi headers would *only* define things that begin with the prefix __kabi_. This would mean that the kabi headers have no risk of namespace contamination with anything else existing in the kernel or userspace, and since they would depend only on the compiler, they would be usable anywhere.

The second step would be to convert each traditional linux header to include the corresponding kabi header, then redefine its own structures and defines in terms of those in the kabi header. This would provide complete backwards compatibility to all kernel code, as well as to anything that currently compiles using the existing kernel headers. The entire rest of the <linux/*.h> header file would be wrapped in #ifdef __KERNEL__, as it should not be needed by anything in userspace.

In the process of those two steps, we would relocate many of the misplaced "#ifdef __KERNEL__" and "#endif /* __KERNEL__ */" lines. The kabi headers should not mention __KERNEL__ at all, and the linux/* headers should be almost completely wrapped in __KERNEL__ ifdefs. That should be enough to make klibc build correctly, although from the description glibc needs significantly more work.

Once a significant portion of the kernel headers have been split that way (preserving complete backwards compatibility), external projects _may_ be converted to #include <kabi/*.h> instead of #include <linux/*.h>, although this would require other changes to the source to handle the __kabi_ prefix. Most of those should be straightforward, however. Since the kabi/*.h headers would not be kernel-version-specific, they could be copied to a system running an older kernel and reused there without problems. Even though some of the syscalls and ioctls referenced in the kabi headers might not be present on the running kernel, portable programs are expected to be able to sanely handle older kernels.

Once the kabi headers are available, it would be possible to begin cleaning up many of the glibc headers without worrying about differences between architectures. If all critical constants and datatypes are already defined in <kabi/*.h> with __kabi_ or __KABI_ prefixes, it should be possible to import those definitions into klibc and glibc without much effort.

UML has other issues with conflicts between the native kernel headers and the glibc-provided stubs. It's been mentioned on the prior threads about this topic that this sort of system would ease most of the issues that UML runs into.

I'm working on some sample patches now which I'll try to post in a few days if I get the time.

Cheers,
Kyle Moffett
The softmax function takes an N-dimensional vector of arbitrary real values and produces another N-dimensional vector with real values in the range (0, 1) that add up to 1.0. It maps $S(\mathbf{a}): \mathbb{R}^N \to \mathbb{R}^N$.

And the actual per-element formula is:

$$S_i = \frac{e^{a_i}}{\sum_{k=1}^{N} e^{a_k}} \qquad \text{for } i = 1, \dots, N$$

It's easy to see that $S_i$ is always positive (because of the exponents); moreover, since the numerator appears in the denominator summed up with some other positive numbers, $S_i < 1$. Therefore, it's in the range (0, 1).
For example, the 3-element vector [1.0, 2.0, 3.0] gets transformed into [0.09, 0.24, 0.67]. The order of elements by relative size is preserved, and they add up to 1.0. Let's tweak this vector slightly into: [1.0, 2.0, 5.0]. We get the output [0.02, 0.05, 0.93], which still preserves these properties. Note that as the last element is farther away from the first two, its softmax value dominates the overall slice of size 1.0 in the output. Intuitively, the softmax function is a "soft" version of the maximum function. Instead of just selecting one maximal element, softmax breaks the vector up into parts of a whole (1.0) with the maximal input element getting a proportionally larger chunk, but the other elements getting some of it as well [1].
Probabilistic interpretation
The properties of softmax (all output values in the range (0, 1) and sum up to 1.0) make it suitable for a probabilistic interpretation that's very useful in machine learning. In particular, in multiclass classification tasks, we often want to assign probabilities that our input belongs to one of a set of output classes.
If we have N output classes, we're looking for an N-vector of probabilities that sum up to 1; sounds familiar?
We can interpret softmax as follows:

$$P(y = i \mid \mathbf{a}) = S_i(\mathbf{a})$$

Where y is the output class numbered $1 \dots N$, and $\mathbf{a}$ is any N-vector. The most basic example is multiclass logistic regression, where an input vector x is multiplied by a weight matrix W, and the result of this dot product is fed into a softmax function to produce probabilities. This architecture is explored in detail later in the post.
It turns out that - from a probabilistic point of view - softmax is optimal for maximum-likelihood estimation of the model's parameters. This is beyond the scope of this post, though. See chapter 5 of the "Deep Learning" book for more details.
Some preliminaries from vector calculus
Before diving into computing the derivative of softmax, let's start with some preliminaries from vector calculus.
Softmax is fundamentally a vector function. It takes a vector as input and produces a vector as output; in other words, it has multiple inputs and multiple outputs. Therefore, we cannot just ask for "the derivative of softmax"; we should instead specify:
- Which component (output element) of softmax we're seeking to find the derivative of.
- Since softmax has multiple inputs, with respect to which input element the partial derivative is computed.
If this sounds complicated, don't worry. This is exactly why the notation of vector calculus was developed. What we're looking for are the partial derivatives:

$$\frac{\partial S_i}{\partial a_j}$$

This is the partial derivative of the i-th output w.r.t. the j-th input. A shorter way to write it that we'll be using going forward is: $D_j S_i$.

Since softmax is an $\mathbb{R}^N \to \mathbb{R}^N$ function, the most general derivative we compute for it is the Jacobian matrix:

$$DS = \begin{bmatrix} D_1 S_1 & \cdots & D_N S_1 \\ \vdots & \ddots & \vdots \\ D_1 S_N & \cdots & D_N S_N \end{bmatrix}$$

In ML literature, the term "gradient" is commonly used to stand in for the derivative. Strictly speaking, gradients are only defined for scalar functions (such as loss functions in ML); for vector functions like softmax it's imprecise to talk about a "gradient"; the Jacobian is the fully general derivative of a vector function, but in most places I'll just be saying "derivative".
Derivative of softmax
Let's compute $D_j S_i$ for arbitrary i and j:

$$D_j S_i = \frac{\partial S_i}{\partial a_j} = \frac{\partial}{\partial a_j}\,\frac{e^{a_i}}{\sum_{k=1}^{N} e^{a_k}}$$

We'll be using the quotient rule of derivatives. For $f(x) = \frac{g(x)}{h(x)}$:

$$f'(x) = \frac{g'(x)h(x) - h'(x)g(x)}{[h(x)]^2}$$

In our case, we have:

$$g_i = e^{a_i} \qquad h_i = \sum_{k=1}^{N} e^{a_k}$$

Note that no matter which $a_j$ we compute the derivative of $h_i$ for, the answer will always be $e^{a_j}$. This is not the case for $g_i$, however. The derivative of $g_i$ w.r.t. $a_j$ is $e^{a_j}$ only if $i = j$, because only then does $g_i$ have $a_j$ anywhere in it. Otherwise, the derivative is 0.

Going back to our $D_j S_i$; we'll start with the $i = j$ case. Then, using the quotient rule we have:

$$\frac{\partial S_i}{\partial a_i} = \frac{e^{a_i}\Sigma - e^{a_i}e^{a_i}}{\Sigma^2}$$

For simplicity, $\Sigma$ stands for $\sum_{k=1}^{N} e^{a_k}$. Reordering a bit:

$$\frac{\partial S_i}{\partial a_i} = \frac{e^{a_i}}{\Sigma}\cdot\frac{\Sigma - e^{a_i}}{\Sigma} = S_i(1 - S_i)$$

The final formula expresses the derivative in terms of $S_i$ itself - a common trick when functions with exponents are involved.

Similarly, we can do the $i \neq j$ case:

$$\frac{\partial S_i}{\partial a_j} = \frac{0 \cdot \Sigma - e^{a_j}e^{a_i}}{\Sigma^2} = -S_j S_i$$

To summarize:

$$D_j S_i = \begin{cases} S_i(1 - S_i) & i = j \\ -S_j S_i & i \neq j \end{cases}$$
I like seeing this explicit breakdown by cases, but if anyone is taking more pride in being concise and clever than programmers, it's mathematicians. This is why you'll find various "condensed" formulations of the same equation in the literature. One of the most common ones is using the Kronecker delta function:

$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$

To write:

$$D_j S_i = S_i(\delta_{ij} - S_j)$$

Which is, of course, the same thing. There are a couple of other formulations one sees in the literature:

- Using the matrix formulation of the Jacobian directly to replace $\delta_{ij}$ with $I$ - the identity matrix, whose elements express $\delta_{ij}$ in matrix form.
- Using "1" as the function name instead of the Kronecker delta, as follows: $D_j S_i = S_i(1(i = j) - S_j)$. Here $1(i = j)$ means the value 1 when $i = j$ and the value 0 otherwise.

The condensed notation comes in useful when we want to compute more complex derivatives that depend on the softmax derivative; otherwise we'd have to propagate the $i = j$ and $i \neq j$ conditions everywhere.
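The indices in this formula are easy to get wrong, so a quick numerical sanity check is worthwhile: the sketch below (a standalone numpy snippet, not part of the original derivation) builds the Jacobian from $D_j S_i = S_i(\delta_{ij} - S_j)$ and compares it with central finite differences.

```python
import numpy as np

def softmax(x):
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)

def softmax_jacobian(x):
    """Analytic Jacobian from D_j S_i = S_i * (delta_ij - S_j)."""
    s = softmax(x)
    return np.diag(s) - np.outer(s, s)

def numeric_jacobian(f, x, eps=1e-6):
    """Central-difference approximation, built column by column."""
    x = np.asarray(x, dtype=float)
    cols = []
    for j in range(len(x)):
        step = np.zeros_like(x)
        step[j] = eps
        cols.append((f(x + step) - f(x - step)) / (2 * eps))
    return np.stack(cols, axis=1)

x = np.array([1.0, 2.0, 3.0])
print(np.max(np.abs(softmax_jacobian(x) - numeric_jacobian(softmax, x))) < 1e-8)  # True
```

The diagonal of the analytic Jacobian holds the $S_i(1 - S_i)$ terms and the off-diagonal entries hold $-S_j S_i$, exactly matching the case breakdown above.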
Computing softmax and numerical stability
A simple way of computing the softmax function on a given vector in Python is:

import numpy as np

def softmax(x):
    """Compute the softmax of vector x."""
    exps = np.exp(x)
    return exps / np.sum(exps)

Let's try it with the sample 3-element vector we've used as an example earlier:

In [146]: softmax([1, 2, 3])
Out[146]: array([ 0.09003057,  0.24472847,  0.66524096])
However, if we run this function with larger numbers (or large negative numbers) we have a problem:
In [148]: softmax([1000, 2000, 3000])
Out[148]: array([ nan,  nan,  nan])
The numerical range of the floating-point numbers used by Numpy is limited. For float64, the maximal representable number is on the order of $10^{308}$. Exponentiation in the softmax function makes it possible to easily overshoot this number, even for fairly modest-sized inputs.
A nice way to avoid this problem is by normalizing the inputs to be not too large or too small, by observing that we can use an arbitrary constant C as follows:

$$S_i = \frac{Ce^{a_i}}{\sum_{k=1}^{N} Ce^{a_k}}$$

And then pushing the constant into the exponent, we get:

$$S_i = \frac{e^{a_i + \log C}}{\sum_{k=1}^{N} e^{a_k + \log C}}$$

Since C is just an arbitrary constant, we can instead write:

$$S_i = \frac{e^{a_i + D}}{\sum_{k=1}^{N} e^{a_k + D}}$$

Where D is also an arbitrary constant. This formula is equivalent to the original for any D, so we're free to choose a D that will make our computation better numerically. A good choice is the maximum between all inputs, negated:

$$D = -\max(a_1, a_2, \dots, a_N)$$
This will shift the inputs to a range close to zero, assuming the inputs themselves are not too far from each other. Crucially, it shifts them all to be negative (except the maximal which turns into a zero). Negatives with large exponents "saturate" to zero rather than infinity, so we have a better chance of avoiding NaNs.
def stablesoftmax(x):
    """Compute the softmax of vector x in a numerically stable way."""
    shiftx = x - np.max(x)
    exps = np.exp(shiftx)
    return exps / np.sum(exps)
And now:
In [150]: stablesoftmax([1000, 2000, 3000])
Out[150]: array([ 0.,  0.,  1.])
Note that this is still imperfect, since mathematically softmax would never really produce a zero, but this is much better than NaNs, and since the distance between the inputs is very large it's expected to get a result extremely close to zero anyway.
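A quick check (a standalone snippet, not from the original text) confirms that the shift changes nothing wherever the naive version is representable, while keeping the result finite on large inputs:

```python
import numpy as np

def softmax(x):
    # Naive version: overflows for large inputs.
    exps = np.exp(x)
    return exps / np.sum(exps)

def stablesoftmax(x):
    # Shifted version: mathematically identical, numerically safe.
    shiftx = x - np.max(x)
    exps = np.exp(shiftx)
    return exps / np.sum(exps)

x = np.array([1.0, 2.0, 3.0])
print(np.allclose(softmax(x), stablesoftmax(x)))          # True

big = np.array([1000.0, 2000.0, 3000.0])
print(np.all(np.isfinite(stablesoftmax(big))))            # True: no NaNs
```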
The softmax layer and its derivative
A common use of softmax appears in machine learning, in particular in logistic regression: the softmax "layer", wherein we apply softmax to the output of a fully-connected layer (matrix multiplication):
In this diagram, we have an input x with N features, and T possible output classes. The weight matrix W is used to transform x into a vector with T elements (called "logits" in ML folklore), and the softmax function is used to "collapse" the logits into a vector of probabilities denoting the probability of x belonging to each one of the T output classes.
How do we compute the derivative of this "softmax layer" (fully-connected matrix multiplication followed by softmax)? Using the chain rule, of course! You'll find any number of derivations of this derivative online, but I want to approach it from first principles, by carefully applying the multivariate chain rule to the Jacobians of the functions involved.
An important point before we get started: you may think that x is a natural variable to compute the derivative for. But it's not. In fact, in machine learning we usually want to find the best weight matrix W, and thus it is W we want to update with every step of gradient descent. Therefore, we'll be computing the derivative of this layer w.r.t. W.
Let's start by rewriting this diagram as a composition of vector functions. First, we have the matrix multiplication, which we denote $g(W) = Wx$. It maps $\mathbb{R}^{NT} \to \mathbb{R}^T$, because the input (matrix W) has N times T elements, and the output has T elements.

Next we have the softmax. If we denote the vector of logits as $\lambda$, we have $S(\lambda): \mathbb{R}^T \to \mathbb{R}^T$. Overall, we have the function composition:

$$P(W) = S(g(W)) = (S \circ g)(W)$$
By applying the multivariate chain rule, the Jacobian of $(S \circ g)(W)$ is:

$$DP(W) = D(S \circ g)(W) = DS(g(W)) \cdot Dg(W)$$

We've computed the Jacobian of $S$ earlier in this post; what's remaining is the Jacobian of $g$. Since g is a very simple function, computing its Jacobian is easy; the only complication is dealing with the indices correctly. We have to keep track of which weight each derivative is for. Since $g(W): \mathbb{R}^{NT} \to \mathbb{R}^T$, its Jacobian has T rows and NT columns:

$$Dg = \begin{bmatrix} D_1 g_1 & \cdots & D_{NT} g_1 \\ \vdots & \ddots & \vdots \\ D_1 g_T & \cdots & D_{NT} g_T \end{bmatrix}$$

In a sense, the weight matrix W is "linearized" to a vector of length NT. If you're familiar with the memory layout of multi-dimensional arrays, it should be easy to understand how it's done. In our case, one simple thing we can do is linearize it in row-major order, where the first row is consecutive, followed by the second row, etc. Mathematically, $W_{ij}$ will get column number $iN + j$ in the Jacobian (using 0-based indices). To populate $Dg$, let's recall what $g_t$ is:

$$g_t = \sum_{j=0}^{N-1} W_{tj} x_j$$

Therefore:

$$\frac{\partial g_t}{\partial W_{tj}} = x_j, \qquad \frac{\partial g_t}{\partial W_{ij}} = 0 \;\; \text{for } i \neq t$$
If we follow the same approach to compute every entry, we'll get the Jacobian matrix:

$$Dg = \begin{bmatrix} x_0 & \cdots & x_{N-1} & 0 & \cdots & 0 & \cdots & 0 \\ 0 & \cdots & 0 & x_0 & \cdots & x_{N-1} & \cdots & 0 \\ \vdots & & & & & & \ddots & \vdots \end{bmatrix}$$

Each row t has the elements of x in columns $tN$ through $tN + N - 1$ and zeros elsewhere. Looking at it differently, if we split the index of W into i and j, we get:

$$D_{ij} g_t = \frac{\partial g_t}{\partial W_{ij}} = \begin{cases} x_j & i = t \\ 0 & i \neq t \end{cases}$$

This goes into row t, column $iN + j$ in the Jacobian matrix.
Finally, to compute the full Jacobian of the softmax layer, we just do a dot product between $DS$ and $Dg$. Note that $P(W) = S(g(W))$, so the Jacobian dimensions work out. Since $DS$ is TxT and $Dg$ is TxNT, their dot product $DP$ is TxNT.

In literature you'll see a much shortened derivation of the derivative of the softmax layer. That's fine, since the two functions involved are simple and well known. If we carefully compute a dot product between a row in $DS$ and a column in $Dg$:

$$D_{ij} P_t = \sum_{k=0}^{T-1} D_k S_t \cdot D_{ij} g_k$$

$Dg$ is mostly zeros, so the end result is simpler. The only k for which $D_{ij} g_k$ is nonzero is when $k = i$; then it's equal to $x_j$. Therefore:

$$D_{ij} P_t = D_i S_t \cdot x_j = S_t(\delta_{ti} - S_i) x_j$$
So it's entirely possible to compute the derivative of the softmax layer without actual Jacobian matrix multiplication; and that's good, because matrix multiplication is expensive! The reason we can avoid most computation is that the Jacobian of the fully-connected layer is sparse.
That said, I still felt it's important to show how this derivative comes to life from first principles based on the composition of Jacobians for the functions involved. The advantage of this approach is that it works exactly the same for more complex compositions of functions, where the "closed form" of the derivative for each element is much harder to compute otherwise.
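The index bookkeeping is where mistakes creep in, so here is a numerical comparison of $D_{ij}P_t = S_t(\delta_{ti} - S_i)x_j$ against finite differences (a standalone numpy sketch, not from the original text; W is stored as a TxN matrix and linearized row-major, as described above):

```python
import numpy as np

def softmax(x):
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)

def layer(W, x):
    """The softmax layer: fully-connected multiply, then softmax."""
    return softmax(W @ x)

def analytic_jacobian(W, x):
    """T x (T*N) Jacobian of the layer w.r.t. W, row-major linearized."""
    T, N = W.shape
    S = layer(W, x)
    J = np.zeros((T, T * N))
    for t in range(T):
        for i in range(T):
            for j in range(N):
                J[t, i * N + j] = S[t] * ((t == i) - S[i]) * x[j]
    return J

def numeric_jacobian(W, x, eps=1e-6):
    """Central-difference Jacobian, one weight at a time."""
    T, N = W.shape
    J = np.zeros((T, T * N))
    for i in range(T):
        for j in range(N):
            dW = np.zeros_like(W)
            dW[i, j] = eps
            J[:, i * N + j] = (layer(W + dW, x) - layer(W - dW, x)) / (2 * eps)
    return J

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # T=3 classes, N=4 features
x = rng.normal(size=4)
print(np.max(np.abs(analytic_jacobian(W, x) - numeric_jacobian(W, x))) < 1e-7)  # True
```

Note how the analytic version never materializes the sparse $Dg$ at all: the zeros have already been cancelled out by hand.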
Softmax and cross-entropy loss
We've just seen how the softmax function is used as part of a machine learning network, and how to compute its derivative using the multivariate chain rule. While we're at it, it's worth taking a look at a loss function that's commonly used along with softmax for training a network: cross-entropy.
Cross-entropy has an interesting probabilistic and information-theoretic interpretation, but here I'll just focus on the mechanics. For two discrete probability distributions p and q, the cross-entropy function is defined as:

$$xent(p, q) = -\sum_k p(k)\log q(k)$$
Where k goes over all the possible values of the random variable the distributions are defined for. Specifically, in our case there are T output classes, so k would go from 1 to T.
If we start from the softmax output P - this is one probability distribution [2]. The other probability distribution is the "correct" classification output, usually denoted by Y. This is a one-hot encoded vector of size T, where all elements except one are 0.0, and one element is 1.0 - this element marks the correct class for the data being classified. Let's rephrase the cross-entropy loss formula for our domain:

$$xent(Y, P) = -\sum_{k=1}^{T} Y(k)\log P(k)$$

k goes over all the output classes. $P(k)$ is the probability of the class k as predicted by the model. $Y(k)$ is the "true" probability of the class k as provided by the data. Let's mark the sole index where $Y(k) = 1.0$ by y. Since for all $k \neq y$ we have $Y(k) = 0$, the cross-entropy formula can be simplified to:

$$xent(Y, P) = -\log P(y)$$

Actually, let's make it a function of just P, treating y as a constant. Moreover, since in our case P is a vector, we can express $P(y)$ as the y-th element of P, or $P_y$:

$$xent(P) = -\log P_y$$
The Jacobian of xent is a 1xT matrix (a row vector), since the output is a scalar and we have T inputs (the vector P has T elements):

$$Dxent = \begin{bmatrix} D_1 xent & D_2 xent & \cdots & D_T xent \end{bmatrix}$$

Now recall that P can be expressed as a function of input weights: $P(W) = S(g(W))$. So we have another function composition:

$$xent(W) = (xent \circ P)(W) = xent(S(g(W)))$$

And we can, once again, use the multivariate chain rule to find the gradient of xent w.r.t. W:

$$Dxent(W) = D(xent \circ P)(W) = Dxent(P(W)) \cdot DP(W)$$

Let's check that the dimensions of the Jacobian matrices work out. We already computed $DP(W)$; it's TxNT. $Dxent(P)$ is 1xT, so the resulting Jacobian is 1xNT, which makes sense because the whole network has one output (the cross-entropy loss - a scalar value) and NT inputs (the weights).
Here again, there's a straightforward way to find a simple formula for $Dxent(W)$, since many elements in the matrix multiplication end up cancelling out. Note that $xent(P)$ depends only on the y-th element of P. Therefore, only $D_y xent$ is non-zero in the Jacobian:

$$Dxent(P) = \begin{bmatrix} 0 & \cdots & D_y xent & \cdots & 0 \end{bmatrix}$$

And $D_y xent = -\frac{1}{P_y}$. Going back to the full Jacobian $Dxent(W)$, we multiply $Dxent(P)$ by each column of $DP(W)$ to get each element in the resulting row-vector. Recall that the row vector represents the whole weight matrix W "linearized" in row-major order. We'll index into it with i and j for clarity ($D_{ij}$ points to element number $iN + j$ in the row vector):

$$D_{ij} xent(W) = \sum_{k=1}^{T} D_k xent(P) \cdot D_{ij} P_k$$

Since only the y-th element in $Dxent(P)$ is non-zero, we get the following, also substituting the derivative of the softmax layer from earlier in the post:

$$D_{ij} xent(W) = -\frac{1}{P_y}\, D_{ij} P_y = -\frac{1}{P_y}\, S_y(\delta_{yi} - S_i) x_j$$

By our definition, $P_y = S_y$, so we get:

$$D_{ij} xent(W) = (S_i - \delta_{yi})\, x_j$$
Once again, even though in this case the end result is nice and clean, it didn't necessarily have to be so. The formula for could end up being a fairly involved sum (or sum of sums). The technique of multiplying Jacobian matrices is oblivious to all this, as the computer can do all the sums for us. All we have to do is compute the individial Jacobians, which is usually easier because they are for simpler, non-composed functions. This is the beauty and utility of the multivariate chain rule. | https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/ | CC-MAIN-2019-22 | refinedweb | 2,612 | 52.6 |
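The final formula is compact enough to verify numerically as well; the sketch below (standalone, not from the original text) compares $(S_i - \delta_{yi})x_j$, the famous "softmax output minus one-hot" gradient, with a finite-difference gradient of the composed loss:

```python
import numpy as np

def softmax(x):
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)

def loss(W, x, y):
    """Cross-entropy of the softmax layer against true class y."""
    return -np.log(softmax(W @ x)[y])

def analytic_grad(W, x, y):
    """Gradient w.r.t. W: (S - onehot(y)) outer x, shape T x N."""
    S = softmax(W @ x)
    d = S.copy()
    d[y] -= 1.0
    return np.outer(d, x)

def numeric_grad(W, x, y, eps=1e-6):
    """Central-difference gradient, one weight at a time."""
    G = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            dW = np.zeros_like(W)
            dW[i, j] = eps
            G[i, j] = (loss(W + dW, x, y) - loss(W - dW, x, y)) / (2 * eps)
    return G

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))   # T=3 classes, N=4 features
x = rng.normal(size=4)
print(np.max(np.abs(analytic_grad(W, x, 2) - numeric_grad(W, x, 2))) < 1e-7)  # True
```

The analytic gradient is a single outer product: this is exactly the cheap closed form that the Jacobian cancellations above promise.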
Stemming and Lemmatization is very important and basic technique for any Project of Natural Language Processing.
You have noticed that if you type something on google search it will show relevant results not only for the exact expression you typed but also for the other possible forms of the words you use.
For example, if you have typed “mobiles” in the search bar, it’s likely you want to see results containing the form of “mobile”.
This is done by finding out the root word of a given word. Here “mobile” is the root word of “mobiles”.
This can be done by two possible methods: stemming and lemmatization.
In this topic I will explain on below topics:
- What is stemming
- How to do Stemming in Python
- What is Lemmatization
- How to do Lemmatization in Python
- Which one is best: lemmatization or stemming?
- Where to use stemming and where to use Lemmatization
What is Stemming
Stemming converts a word into its stem (root form).
Stemming is a rule-based approach: it strips inflected words based on common prefixes and suffixes that can be found in an inflected word.
For example: common suffixes like "es" and "ing", and prefixes like "pre".
Now if you want to apply stemming on the word "reading", it will convert it to "read", simply by stripping the suffix "ing", which is in the stemmer's suffix list.
The same applies to prefixes.
For Example: “pregame” to “game”
The root form generated by stemming is not necessarily a word by itself, but it can be used to generate words by concatenating the right suffix.
For example: the words study, studies and studying all stem into studi, which is not an English word.
The most common algorithm for stemming is Porter's Algorithm (Porter, 1980). It only strips suffixes from a word.
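To make the idea concrete, here is a deliberately tiny, hypothetical suffix-stripping stemmer. This is not the real Porter algorithm (which applies several ordered rule phases with extra conditions); it only illustrates the rule-based principle:

```python
# A toy rule-based stemmer: strip the first matching suffix,
# keeping at least 3 characters of the stem.
SUFFIXES = ("ing", "ies", "es", "ed", "s")

def toy_stem(word):
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

print(toy_stem("reading"))   # read
print(toy_stem("studies"))   # stud
print(toy_stem("builders"))  # builder
print(toy_stem("run"))       # run
```

Notice that "studies" becomes "stud" rather than "study": a pure suffix rule has no vocabulary to check its output against, which is exactly the limitation lemmatization addresses below.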
Stemming in Python NLTK
NLTK provides several famous stemmers interfaces, such as Porter stemmer, Lancaster Stemmer, Snowball Stemmer and etc.
Here I am using Porter Stemmer for Stemming.
from nltk.stem.porter import *
import nltk

stemmer = PorterStemmer()

# For single word
print(stemmer.stem('builders'))
print(stemmer.stem('good'))
print(stemmer.stem('better'))
print(stemmer.stem('run'))
print(stemmer.stem('ran'))
print(stemmer.stem('running'))
Output:
builder
good
better
run
ran
run
# For sentence
sent = 'I have seen this yesterday'
tokens = nltk.word_tokenize(sent)
print('Word')
print(tokens)
stemd_word = [stemmer.stem(plural) for plural in tokens]
print('Stemmed Form')
print(stemd_word)

Output:
Word
['I', 'have', 'seen', 'this', 'yesterday']
Stemmed Form
['I', 'have', 'seen', u'thi', 'yesterday']
What is Lemmatization
Lemmatization converts a word into its lemma (root form).
Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words. It observes the position and part of speech of a word before stripping anything.
For example, consider the two lemmas listed below:
1. saw [verb] - Past tense of see 2. saw [noun] - Cutting instrument
It normally aims to strip inflection from the end of a word.
For word “saw”, stemming might return just “s”, whereas lemmatization would attempt to return either “see” or “saw” depending on whether the use of the token was as a verb or a noun.
Lemmatization in Python NLTK
The NLTK Lemmatization method is based on WordNet’s built-in morphy function.
This lemmatizer removes affixes only if the resulting word is found in lexical resource, wordnet.
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

# For single word
print(lemmatizer.lemmatize("good", pos='v'))
print(lemmatizer.lemmatize("better", pos='a'))
print(lemmatizer.lemmatize('run', pos='v'))
print(lemmatizer.lemmatize('ran', pos='v'))
print(lemmatizer.lemmatize('running', pos='v'))

Output:
good
good
run
run
run
# For sentence
sent = 'I have seen this yesterday'
tokens = nltk.word_tokenize(sent)
# Print each token
print(tokens)
lemma_word = [lemmatizer.lemmatize(plural) for plural in tokens]
# Print lemmatized sentence
print(' '.join(lemma_word))

Output:
['I', 'have', 'seen', 'this', 'yesterday']
I have seen this yesterday
Which one is best: lemmatization or stemming?
Stemming and lemmatization each have their own flavour of normalizing words.
The difference is that a stemmer operates on a single word without knowledge of the context, and therefore cannot discriminate between words which have different meanings depending on part of speech.
Stemming is much faster than Lemmatizing.
Accuracy of Stemming is much less than Lemmatization.
Where to use stemming and where to use Lemmatization
It depends on your requirement.
If you are handling a huge amount of text and you only want to normalize and analyze it, not visualize it, then you may go with stemming.
But if you want to visualize your normalized text then you should choose lemmatization, as stemmed words are not necessarily real words.
You can also apply anti-stemming (stem completion) to your stemmed words to get real words back, but in my experience this takes a huge amount of time to execute.
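The tradeoff can be seen even with toy implementations (hypothetical rules and a tiny hand-made lemma dictionary, purely for illustration; a real lemmatizer such as NLTK's consults WordNet plus the part of speech):

```python
# Contrast rule-based stemming with dictionary-based lemmatization
# on the same words.  Both implementations are toys for illustration.
SUFFIXES = ("ing", "ies", "es", "ed", "s")
LEMMAS = {"studies": "study", "running": "run", "better": "good", "saw": "see"}

def toy_stem(word):
    # Fast, context-free: just pattern-match a suffix.
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

def toy_lemmatize(word):
    # Slower in real life: a vocabulary lookup stands in for
    # full morphological analysis.
    return LEMMAS.get(word, word)

for w in ["studies", "running", "better"]:
    print(w, "->", toy_stem(w), "/", toy_lemmatize(w))
# studies -> stud / study
# running -> runn / run
# better -> better / good
```

The stemmer is a constant-time string operation, which is why it scales to huge corpora; the lemmatizer needs a dictionary (or a full morphological analyzer) but returns genuine words.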
Conclusion:
In this topic I have tried to explain
- What is stemming
- What is Lemmatization
- How to do Stemming in Python
- How to do Lemmatization in Python
- How Stemming and Lemmatization works
- Which one could be your choice based on your requirement.
Have Questions?
If you have any question regarding this topic, feel free to comment. I will try my best to answer your questions. | https://www.thinkinfi.com/2018/09/difference-between-stemming-and.html | CC-MAIN-2020-34 | refinedweb | 933 | 54.63 |
MPI_Win_test
Test whether an RMA exposure epoch has completed
int MPI_Win_test( MPI_Win win, int *flag );
Parameters
- win
- [in] window object (handle)
- flag
- [out] success flag (logical)
Remarks
This is the nonblocking version of MPI_Win_wait. It returns flag = true if all accesses to the local window by the group to which the window was exposed (in the matching MPI_Win_post call) have been completed; in that case the call has the same effect as a return from MPI_Win_wait, and it must not be invoked again until the window is posted anew. If flag = false is returned, the call has no visible effect.

To see how the post/start/complete/wait calls match up, consider the following model implementation, in which window win is associated with a "hidden" communicator wincomm used for communication by the processes of win:
- { MPI_WIN_POST(group,0,win)}
- initiate a nonblocking send with tag tag0 to each process in group, using wincomm. No need to wait for the completion of these sends.
- { MPI_WIN_START(group,0,win)}
- initiate a nonblocking receive with tag tag0 from each process in group, using wincomm. An RMA access to a window in target process i is delayed until the receive from i is completed.
- { MPI_WIN_COMPLETE(win)}
- initiate a nonblocking send with tag tag1 to each process in the group of the preceding start call. No need to wait for the completion of these sends.
- { MPI_WIN_WAIT(win)}
- initiate a nonblocking receive with tag tag1 from each process in the group of the preceding post call. Wait for the completion of all receives.
No races can occur in a correct program: each of the sends matches a unique receive, and vice versa.

Return values
- MPI_SUCCESS
- No error; MPI routine completed successfully.
- MPI_ERR_OTHER
- Other error; use MPI_Error_string to get more information about this error code.
- MPI_ERR_ARG
- Invalid argument. Some argument is invalid and is not identified by a specific error class (e.g., MPI_ERR_RANK).
See Also
MPI_Win_wait, MPI_Win_post
Example Code
The following sample code illustrates MPI_Win_test.

#include "mpi.h"
#include "stdio.h"
/* tests put and get with post/start/complete/test on 2 processes */
#define SIZE1 10
#define SIZE2 20
int main(int argc, char *argv[])
{
int rank, destrank, nprocs, A[SIZE2], B[SIZE2], i;
MPI_Group comm_group, group;
MPI_Win win;
int errs = 0, flag;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
if (nprocs != 2) {
printf("Run this program with 2 processes\n");fflush(stdout);
MPI_Abort(MPI_COMM_WORLD, 1);
}
}
MPI_Comm_group(MPI_COMM_WORLD, &comm_group);
if (rank == 0) {
for (i=0; i<SIZE2; i++) A[i] = B[i] = i;
MPI_Win_create(NULL, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
destrank = 1;
MPI_Group_incl(comm_group, 1, &destrank, &group);
MPI_Win_start(group, 0, win);
for (i=0; i<SIZE1; i++)
MPI_Put(A+i, 1, MPI_INT, 1, i, 1, MPI_INT, win);
for (i=0; i<SIZE1; i++)
MPI_Get(B+i, 1, MPI_INT, 1, SIZE1+i, 1, MPI_INT, win);
MPI_Win_complete(win);
for (i=0; i<SIZE1; i++)
if (B[i] != (-4)*(i+SIZE1)) {
printf("Get Error: B[i] is %d, should be %d\n", B[i], (-4)*(i+SIZE1));fflush(stdout);
errs++;
}
}
else { /* rank=1 */
for (i=0; i<SIZE2; i++) B[i] = (-4)*i;
MPI_Win_create(B, SIZE2*sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);
destrank = 0;
MPI_Group_incl(comm_group, 1, &destrank, &group);
MPI_Win_post(group, 0, win);
flag = 0;
while (!flag)
MPI_Win_test(win, &flag);
for (i=0; i<SIZE1; i++) {
if (B[i] != i) {
printf("Put Error: B[i] is %d, should be %d\n", B[i], i);fflush(stdout);
errs++;
}
}
}
MPI_Group_free(&group);
MPI_Group_free(&comm_group);
MPI_Win_free(&win);
MPI_Finalize();
return errs;
} | http://mpi.deino.net/mpi_functions/MPI_Win_test.html | CC-MAIN-2017-47 | refinedweb | 433 | 59.74 |
Report, 01.02.2000
Published under: the Bondevik I Government
Publisher: Ministry of the Environment (created 20.10.2006)
Status:
Archived
White Paper no. 8
(1999-2000) The Government’s environmental policy and the state of
the environment in Norway
The Government has launched White
Paper no. 8 (1999-2000) The Government’s environmental policy and
the state of the environment in Norway. This White Paper takes a
look at the whole spectrum of environmental policy. Chapter 7 and
appendix 3 of the report contains a particularly broad discussion
about policy on waste. This is in reply to Parliament’s request for
a new White Paper on waste and recycling.
What you are now looking at is a
shortened version of the section referring to waste in the report.
Waste policy is perhaps the sphere where work on the environment is
most visible and tangible for us all in everyday life. Several
times a day we throw something in the garbage bin and, fortunately
to an increasing degree, we also throw something in the recycling
bin. This means that most people are concerned about waste policy.
For this reason I hope that we will be able to reach a larger
public with this shortened version.
I am really pleased that the
surveys we have made show that the Norwegian public is on the whole
positive to separating at source and support wholeheartedly the
arrangements that have been made. It is important for me that
people see that it’s worth the effort – because it is! Developments
show that we are now in the process of turning the waste stream
away from the landfills. Recycling of waste is increasing from year
to year. In this way the environmental problems which occur when
waste ends up on landfills or in incinerators are being
reduced.
At the same time there is a big
challenge to prevent environmental problems and limit the amount of
waste that occurs. The White Paper is a tool with which to combat
this challenge – the way ahead. I hope that it and this shortened
version will be along in helping each individual consumer, each
individual company in the country and all the country’s local
authorities do their bit in the effort to achieve a successful
policy on waste!
Guro Fjellanger
Minister of the Environment
Everything that we throw away
because we don’t want it anymore, or because we can’t use it
anymore, becomes waste – the leftovers from our production and our
consumption. In 1996 there was a little less than 1.4 million tons
of household waste, over 4 million tons of industrial waste and
about 650 000 tons of special waste. In addition to this there was
about 18 million tons of rubble, stones and gravel which is also
waste but which does not make any significant contribution to the
environmental problems. Waste is one of several sources of today’s
environmental problems. It is when the waste undergoes its final
treatment at the landfill or in the incinerator that the most
important environmental problems arise.
When waste is disposed of on
landfill methane gas is formed. The United Nations climate panel
has rated methane as having a climatic hazard potential that is 21
times greater than CO2. Methane emission from the dumping of waste
contributes 7 percent of total Norwegian emissions of greenhouse
gases.
Seepage from municipal landfills
has high concentrations of a number of damaging substances amongst
others organic materials, nitrogen, iron, primeval organic salts,
heavy metals and toxic organic combinations. Seepage from about
half the landfill waste leaks out untreated. Pollution from
landfills can continue for many hundreds of years after they have
ceased to be operational. In this way we are pushing the
environmental problems over onto future generations.
The incineration of waste leads to
the atmosphere being polluted by environmentally hazardous
chemicals, dust and acid formation resulting from the incineration
of waste. The most important environmental consequences are
emissions of heavy metals like cadmium, mercury and lead and
poisonous chlorine organic combinations such as dioxides.
Waste facilities take up space,
even after they are closed down, and they lead to obnoxious smells,
noise and risks of transmitting diseases via birds or rats.
Pollution is against the law and
often limits the possibilities we have of using the countryside for
recreation and outdoor activities. Burning waste in small stoves
makes up a very small part of the total incineration of waste. But
the combined emissions from small stoves are for some specific
substances greater than from all the large incineration plants
combined.
These are the environmental
problems that need to be solved by the policy on waste.
Consumer waste:
Normal waste, including larger items such as
fixtures and fittings etc., from households, smaller shops and
offices. The same applies to waste of similar type and quantity
from other businesses.
Production waste:
Waste from industry and services, which in type or
quantity differ significantly from consumer waste.
Special waste:
Waste that cannot be adequately dealt with together
with consumer waste because it may lead to serious pollution or
hazards that are damaging to humans or animals.
Household waste:
Waste from private households.
Industrial waste:
Waste from public and private enterprises and
institutions.
Municipal waste:
All waste that is dealt with by municipal refuse
disposal, i.e. almost all household waste and large amounts of
industrial waste.
Environmental problems arising from waste are extensive,
can be serious and in many cases are transferred to the next
generation. The Government has therefore set out three specific
goals for the policy on waste – national objectives – in order to
show clearly the ambitious target level of the policy and in order
to be able to test whether things are developing in the desired
direction. The Government will report to Parliament every year
about developments in relation to these objectives.
"Growth in the volume of waste that is generated should be
significantly lower than the rate of economic growth."
Economic growth has been
significant in our country especially since the war. We also expect
a significant growth in the future. This growth has up until now
also contributed to an increase in the volume of waste and in this
way increased the environmental problems. Economic growth and
the volume of waste have, roughly speaking, increased at the same
rate. In the period from 1974 to 1998 the average quantity of
household waste per person increased from 174 kg to 308 kg per
year, an increase of 77 percent. Figure 1 illustrates the growth in
household waste in relation to the growth of consumption. If the
growth in the volume of waste continues we will inflict increased
harm to the environment. There are practical limits as to how much
of the waste can be recycled and as to how stringent the
requirements made on the waste facilities can be. It is therefore
necessary to drive a wedge between future economic growth and the
growth in the volume of waste.
The Government’s target is that the
volume of waste should grow at a significantly slower rate than the
rest of the economy, so that a significant decoupling accrues over
time. Even though the national objective is in the first instance
concerned with breaking the link between the generation of waste and
economic growth, the Government will work to ensure that the volume
of waste causing environmental problems is reduced in the long term.
"Given the fact that the quantity of waste for final
treatment is to be reduced to a socio-economically and
environmentally reasonable level, the aim is that the quantity of
waste dealt with by final treatment should, by the year 2010, be
equal to approximately 25 percent of the quantity of waste
generated."
Final treatment means disposal and incineration without energy
recovery. For incineration with an energy utilisation lower than
100 percent, the portion of waste corresponding to the portion of
unused energy is regarded as finally treated. If, for example, the
energy utilisation in a plant is 70 percent, then 30 percent of the
quantity of waste is considered finally treated; if the energy
utilisation is 90 percent, then 10 percent of the quantity of waste
is considered finally treated.
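The accounting rule above can be sketched as a small calculation. This is only an illustrative sketch of the stated definition, not an official calculation method, and the function name is ours:

```python
def finally_treated_share(waste_tons: float, energy_utilisation: float) -> float:
    """Tons of incinerated waste counted as finally treated: the share
    corresponding to the portion of energy that is not utilised."""
    return waste_tons * (1.0 - energy_utilisation)

# The two examples from the text, for a plant receiving 100 tons:
print(round(finally_treated_share(100.0, 0.70)))  # 30 tons finally treated
print(round(finally_treated_share(100.0, 0.90)))  # 10 tons finally treated
```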
The goal is that at least around 75
percent of the waste is to be recycled, either through material
recycling or through energy utilisation. Some types of waste are only
suitable for material recycling e.g. metals that cannot be
incinerated. Other types of waste like bark and chippings are
unsuitable for material recycling and would therefore most likely
be used for energy utilisation. Some waste will be suitable for
both material recycling and energy utilisation. If socio-economic
assessments show that material recycling can be placed on an equal
footing with energy utilisation, then material recycling will be
preferred.
The objective allows for great
freedom of implementation. There is room for modification for local
situations and varying levels of ambition. How much of the
individual types of waste is to be recycled and how much is to be
material recycled or energy utilised can vary on a national basis
dependent on e.g. local market potential, quality criteria and
prices.
The best municipalities have
already achieved over 70 percent recycling. Nonetheless the costs
involved in waste management in these municipalities are not
significantly higher than the national average. On a national basis
approximately 57 percent of the waste was used as a source of
energy or as raw products in 1996. That is to say that 43 percent
went to final treatment. This amount is to be reduced to 25 percent
by the year 2010.
From 1992 to 1998 the quantity of
household waste for material recycling increased from below 100 000
tons in 1992 to nearly 500 000 tons in 1998. This is an increase in
the course of 6 years from 9 percent to 34 percent. This increase
has been so large that the volume of household waste for final
treatment has actually been reduced over the same period of time.
The amount of industrial waste that was sent to material recycling
rose from 8 percent in 1992 to 20 percent in 1997.
The goal is in line with today’s
development towards increased recycling, see figures 2 and 3 which
illustrate this. This development will become more pronounced by
the measures suggested in the report.
"Practically speaking all special waste is to be dealt with
in a safe and acceptable manner, and is either to be sent to
recycling or is to be guaranteed sufficient national capacity for
its treatment."
Special waste is waste that
contains chemicals that are hazardous to health and the
environment. When wrongfully managed this waste can cause serious
pollution or endanger both humans and the environment. The most
dangerous chemicals are broken down slowly and accumulate in the
food chain.
Special waste makes up 650 000 tons
a year. Of this 340 000 tons are collected and delivered for
domestic treatment. 240 000 tons are treated within the industry,
40 000 tons are exported and approx. 30 000 tons are disposed of in
an unknown manner. It is assumed that a large amount of this waste
ends up in the wrong places and thereby causes serious pollution.
The collection of volumes of special waste in Norway has increased
significantly in the last few years, see figure 4.
The other part of the objective is
based on our international duty to have sufficient national
capacity for the treatment of special waste. When NOAH’s (Norwegian
Waste Management) treatment plant for organic special waste in
Brevik is in full operation we will be able to deal with nearly all
special waste ourselves. Nonetheless this will continue to be an
ongoing objective – it is a necessary part of our environmental and
industrial infrastructure.
In the further development of methods and measures to be
taken in the waste sphere, the Government will emphasise
that:
The sphere of waste is, and should
continue to be, regulated through a combination of different
measures and of various central and local regulations. The
Government will stress that central authorities are only to
establish the general framework, so that the local authorities are
free to choose the specific solutions for collection and treatment
locally. When further defining methods in waste policy it will also
be emphasised that many measures have already been established and
are expected to have an increasing effect. New measures are first
and foremost intended to augment and complement those that already
exist.
The Government will:
An important part of the
Government’s waste policy is to prevent waste, see objective no. 1.
Many of the measures that already exist in waste policy, such as
the tax on final treatment and the codes governing manufacturers’
areas of responsibility, already contribute to the reduction of
waste. It is important to develop these further and to ensure that
there is the greatest possible harmony between these different
measures.
In order to succeed in the work
with the reduction of waste it is necessary to increase the
public’s knowledge, commitment and environmental interest.
Increased exposure in the marketplace can also contribute to
companies giving greater emphasis to the work of reducing waste. It
is also important that efforts made to reduce waste are rewarded,
for example that less waste disposed of in the residual waste
container results in less duty on waste. All stages in the life of
a product are significant for the waste that will later arise.
Emphasis should be placed on waste problems from the time when the
raw products are chosen, at the design phase, during production and
at purchase. This is often not the case today. It is therefore
especially important to strengthen the measures in these different
phases.
The introduction of environmental
management systems in industry, the local authorities’ work with
the local Agenda 21, the Green state project and the sector based
environmental management plans are important in order to provoke
increased awareness about the volume of waste and about action
being taken to promote the reduction in waste.
Waste arises because of powerful
forces in society. Norway, like most other OECD countries, finds it
difficult to identify the right measures to promote the reduction of
waste, at least in the short term. It is therefore important to be
aware that this takes time. At the same
time the necessary processes and modifications must start now.
Therefore it is important that industry, local authorities,
consumers and environmental groups work together to gain knowledge
and communicate the necessary experiences. The Government would
therefore like to invite central figures from these groups to
participate in the work to find methods and measures to reduce
waste.
Taxation in the sphere of waste is
an important tool in the work of making the transition from
taxation on income and work (red tax) to taxation on pollution and
use of resources (green tax). Green taxes will be used to put a
price on the environmental consequences of dumping and pollution
from the treatment of waste. This is to safeguard the principle
that the polluter shall pay. In this way it will reward consumers
and industry to choose environmentally sound solutions.
This tax was introduced on 1 January 1999 and is a very important
tool in
waste policy. The polluter must now pay for the environmental
problems arising from the final treatment. In this way it becomes
relatively speaking financially more rewarding for the individual
to reduce the volume of waste and increase recycling. In this way
prices contribute to controlling the waste for the best
socio-economic solutions. The Government will assess this tax and
evaluate whether to increase and possibly change its form in order
to achieve the best possible effect.
The materials used can be the
source of various environmental problems throughout their lifetime,
until they end up as waste. If these environmental problems are not
taken into account, the use of such materials may become too great.
In connection with certain materials, which create significant
environmental problems, the Government will examine whether a
material duty could be a relevant tool to use.
At present the local authorities
are to set the waste tariffs so that they cover all the costs
involved in managing waste. As the arrangement is at present many
local authorities demand a fixed yearly waste tariff independent of
the quantity of waste that is delivered. There are therefore many
consumers who experience that their expenses are not reduced even
though they reduce the amount of waste.
The local authorities are
encouraged to differentiate the waste tariffs, i.e. fix the tariff
dependent on the quantity or type of waste that is delivered. In
this way the size of the container, the frequency of collection and
payment for the weight of the waste can be used separately or
combined. The Government will work out guidelines for the local
authorities showing the different alternatives that exist. Both
national and international experience in this area will be made
available. If differentiation is not carried out extensively
enough, the Government will evaluate whether to impose this on the
local authorities.
Assessment of environmental
considerations in the purchasing law
Some of the paragraphs in the law
governing purchase can have environmental consequences. The
regulations governing complaints/refunds may for example influence
the purchaser’s chances of getting the product repaired free of
charge. This may in turn influence the manufacturer to extend the
product’s lifetime. In order to utilise the product’s full lifetime,
it ought to be as simple as possible for the customer to have it
repaired free of charge. The Government will assess whether environmental
considerations can be worked into the regulations governing
purchase in a better way.
Support for environmentally friendly
product designs
Product designers can through their
choice of materials influence such things as repair possibilities
and the life of the product. These are important factors in
achieving the reduction in waste. The Institute of Technology, the
Norwegian Design Council and GRIP Centre for Sustainable Production
and Consumption have together worked out a suggestion for a five
year program to profile environmental design for small and medium
sized companies. The Government will subsidise this work.
Support for environmental management in
smaller companies
The environmental "lighthouse"
concept is an innovative environmental management system for small
and medium sized companies, which fall outside the EMAS and ISO
schemes. The scheme can provide important motivation for many small
and medium sized companies to implement good environmental
measures. The reduction of waste is a central element in the
scheme. The Government will give support to the spreading and
further development of the scheme so that it can spread
nation-wide.
Better standards of products
Extensive work is taking place
to standardise products both nationally and internationally. This
work affects to a large extent the environmental characteristics of
a product, for example the possibility for repair, its quality,
lifetime and choice of materials. The Government will encourage
industry to take the environment into consideration when forming
product standards. It will also be evaluated how the authorities
can contribute to this work.
The Government wants to:
Recycling contributes in general to
reducing the burden that waste places on the environment. In this way
environmental costs will also be reduced. However both recycling
and traditional final treatment costs money just like everything
else in society. In order to achieve the best solutions, the
advantages and disadvantages have to be weighed up. The correct
level of recycling will be different for the different types of
waste because business management costs and environmental costs
vary. Many factors decide the correct level of ambition for
recycling. The public’s evaluation of environmental costs changes.
The prices of recycled materials, of new raw products and of energy
also change. Technology is in constant development and this will
affect the profitability of the different schemes. All these
situations have an influence on how much waste is
socio-economically viable to recycle either as materials or to
utilise as energy. Increased knowledge and improved statistics can
also provoke the need to modify our course. When we assess what is
the correct level of ambition for the future it is important to
have long-term considerations in mind, which take into account that
profitability in the short term can vary. This means that recycling
solutions may be relatively unprofitable in the short term but
nonetheless be the right thing for society as long-term
profitability may be good. In assessing how much waste should be
recycled, fluctuations in profitability must be weighed up against
more lasting changes which make it right to modify the level of
ambition. The Government uses socio-economic evaluations as a basis
for the shaping of recycling schemes.
A lot of waste is not suitable for
material recycling. Some of this is organic waste like timber,
bark, chippings, bits of paper/cardboard, plastic and textiles. If
this waste ends up on a landfill it will lead to emissions of the
climatic gas, methane. It is therefore desirable to keep the
organic waste away from the landfills. The organic waste can have a
high level of energy potential which can be utilised in different
ways. Some of the waste can be made into pellets and briquettes,
which can be used for heating. Methane gas, which is formed when
waste is broken down, can be used as fuel for vehicles. Energy from
incineration can be used as process heat in industry, in district
heating and local heating systems, and for the
production of electricity.
The utilisation of waste for energy
purposes can replace other sources of energy and will often lead to
a reduction in the usage of fossil fuels like oil. In this way we
achieve a two-fold effect on the emissions of climatic gas. As part
of our efforts to promote renewable sources of energy the
Government wants to stimulate the increase of the utilisation of
energy from this waste.
From waste to fuel for busses and cars
In the municipality of Uppsala in
Sweden gas from waste is used as fuel for busses and cars. A biogas
facility, which started operation in the autumn of 1996, today
provides biogas (methane) for about 20 busses and 15 cars. The
facility receives wet organic waste from slaughterhouses,
restaurants, catering establishments and retail stores. They also
accept manure from cattle and pigs. The facility will be enlarged
to receive food waste from households. The establishment of this
facility has resulted in waste now being used as fuel. In this way
emissions from waste on the landfills are avoided and emissions
into the atmosphere from fossil fuels are reduced. Experience with
the operation of busses and cars run on gas is extremely positive
and the bus companies have shown great interest in expanding this
venture.
A lot of unutilised energy in
waste
Today around 1.8 million tons of
waste is incinerated in about 650 waste-to-energy facilities. About
600 of these facilities are smaller energy plants that burn waste
timber or other unsorted waste, which does not require advanced
processing prior to incineration. Five large municipal waste
incineration facilities burn mixed waste. Altogether about 7
TWh/year energy is generated from the incineration of waste. This
corresponds to the electricity consumption of approx. 300 000
households.
The degree of energy utilisation of
these five large waste incineration facilities has increased
consistently over the last ten years. In 1998 the degree of
utilisation was about 70 percent.
It is estimated that the potential
for increased energy utilisation from waste is approx. 3.5 TWh a
year, which corresponds to the electricity consumption of about
150 000 households. The estimate is uncertain, and it is not certain
that it is socio-economically profitable to utilise the entire
potential.
Better economic conditions for the sale
of waste based energy
A scheme supported by investment
has been started to create suitable conditions for increased usage
of renewable sources of energy such as bioenergy, waste and
waterborne heat based on these sources of energy. 75 million kroner
was set aside for 1999 for this scheme in the national budget. The
Government Environmental Fund can also give favourable loans for
the establishment of biofuel facilities. In order to provoke an
even greater increase in the usage of renewable sources of energy
and waterborne heat, the Government is preparing an extensive
development program ("the energy packet"). This includes an
increase in the electricity tax combined with subsidies for
investments of up to 5 billion kroner over a ten-year period. These
measures should also lead to better market potential for waste
based energy.
Local co-operation is important
It is important to build up local
markets that function satisfactorily. The local authorities have
planning authority with responsibility for area planning, and
therefore they can prepare the way for increased use of waste based
energy. In many cases it is advantageous that the local
authorities, the sanitation department, the energy department and
industry work together on projects.
It is important for local
authorities choosing to invest in facilities for waste incineration
to build the plant with the right dimensions in relation to the
potential for the reduction and material recycling of waste.
Flexible solutions ought to be sought. New facilities ought to be
built with a large degree of energy utilisation in mind.
Reduced emission from waste
incineration
At the same time as it is desirable
to increase the generation of energy from waste it is also
important to continue to work to reduce environmentally damaging
emissions from facilities that incinerate waste. There is a need to
increase knowledge about emissions from the incineration of various
types of waste in the different incineration facilities. It is also
important that the waste is separated and that the quality is
ensured prior to incineration. The Government will instigate the
establishment of good schemes for ensuring the quality of waste
based fuels.
Construction and demolition waste
There are large amounts of waste
that arise from construction and demolition sites. This waste is
very composite and contains considerable amounts of chemicals
hazardous to health and the environment. As of today only a small
amount of construction and demolition waste goes to reuse,
material recycling or energy utilisation. There is a huge potential
to increase this and thereby reduce the amount that goes to
landfills.
The tax on final treatment is an
important means of reaching this goal. The building trade has
already started an extensive work to promote the reduction and
increased recycling of waste. They have for example established a
5-year trade development program, ØkoBygg (EcoBuild), which aims to
reduce the volume of waste going to landfills by 70 percent. With
support from the ØkoBygg program the building trade has started
drawing up a national plan for the management of construction and
demolition waste, see inset. The Government will support the
ØkoBygg program through subsidies.
National plan for the management of construction and
demolition waste
Based on the desire to change the
present trend of construction and demolition waste, the National
Union for the Building Trade and the National Union for Technical
Contractors have started drawing up a national plan for the
management of construction and demolition waste.
The management plan will consist of
specific goals for the reduction and recycling of waste, as well as
measures to attain these goals. A viable economic and environmental
management of construction and demolition waste requires
co-operation, co-ordination, preparation, motivation and practical
schemes and will involve large sectors of the industry. The
management plan will concentrate on how this can be best
accomplished in such a large and complex sector. The plan is to be
presented in the summer of year 2000.
The local authorities are given
increased powers
Several local authorities have had
a lot of trouble with the illegal disposal of construction and
demolition waste. This means that the polluter gets out of having
to pay taxes and tariffs for the waste. Some municipalities e.g.
Oslo, have as part of a trial project received the power to demand
information and reports about how the waste from building sites is
managed. This has led to these local authorities being able to
prevent illegal disposal and promote recycling of this waste (see
inset about the example from Oslo municipality). The Government
will now give all local authorities the opportunity to introduce
such schemes. It will be up to the local authorities as to whether
they want to avail themselves of this possibility. The Government
will assess at the next revision of the planning and building law
whether they will make it mandatory for all builders to provide
information about management of their waste.
More recycling of wet organic waste
Wet organic waste, i.e. waste from
the food industry, food waste from large-scale catering and private
households, as well as garden waste, is one of the most polluting
types of waste. The disposal of wet organic waste leads to
emissions of methane gas and emissions of environmental toxins into
the earth and water through seepage water. Therefore on the whole
it will not be permitted to deposit this waste on landfills after
the year 2000.
The waste contains important
nutrients that it is important to utilise better than we do today.
There is a huge potential for increased recycling of wet organic
waste both as fodder and as fertiliser or a means of improving the
land and thereby returning it into nature’s cycle. One condition
for this is that the products are of a quality that is compatible
with the quality production of food. Different requirements are
laid down in the law for products that are involved in the
manufacturing of food and for products that are to be used on green
areas, roadsides etc.
The Government establishes collaboration
projects
It is necessary to develop markets
for the sale of waste based fodder and compost products. In order
to do this, products supplied must be able to show well-documented
results, have high utilitarian value and a large degree of
confidence in the market. Strengthening of competence and
confidence is vital. There is also a need for a change in attitude,
product development and a better dialogue between the manufacturer
and the consumer. In order to meet these challenges the Government
is taking the initiative by establishing a five-year project in
close collaboration with the participants within the waste and
agriculture sectors.
Construction and demolition waste in Oslo
municipality
In Oslo the illegal dumping of
construction and demolition waste has created serious problems. In
order to put an end to this, in 1994 Oslo municipality passed a new
by-law concerning the management of production waste after the
Ministry of the Environment had delegated them authority according
to the pollution law.
The building authorities can in
matters of building, rehabilitation and demolition demand that the
builder must provide a summary of the volume of waste that will
arise in connection with the project, and submit a plan of how the
waste will be disposed of. When the work is completed the builder
has to submit a final report documenting that the waste has been
managed in accordance with the previously approved plan. In this
way the builder is encouraged to plan the disposal of his waste and
it will also be easier for the authorities to control that the
waste has been managed in a responsible manner.
Experience from Oslo so far shows
that the new by-law has given the municipality a clearer picture of
the volume of waste involved and better control over the waste
streams. Competence amongst the builders and transporters has
increased and the amount of separating at source has also
increased.
The licensing requirements for
landfills and incineration plants for waste have been increased
significantly in the last few years. This has led to reduced
emissions. However the Government still considers it necessary to
tighten the requirements further. The EU has now proposed new
rules, which Norway supports, concerning both the incineration and
disposal of waste. The directives will lead to reduced emissions
and joint standards for the management of waste. In this way there
will be less danger of importing and exporting waste in order to be
able to get cheaper treatment with lower environmental
standards.
Dumping and uncontrolled waste
incineration are forbidden. Nonetheless this illegal management of
waste goes on to a certain extent, such as burning in the back
garden and dumping in ditches and woods.
It is also important that the local
authorities use their power to stop illegal dumping. In addition
the Government will give the local authorities power to regulate
and implement measures against illegal incineration of waste and
draw up directives and guidelines for the local authorities’ work
in this area.
As of today the municipality’s
authority for the supervision of illegal waste management only
applies after the transgression has taken place. The Government
will evaluate whether the local authorities also ought to have the
right to demand documentation from the various participants
beforehand to show that they have sufficient schemes for their
waste. This can possibly be carried out through delegation or a
change in the law.
Special waste not accounted for
Chemical policy has the goal of
reducing emissions that are hazardous to health and the
environment. As measures are put into practice the amount of
dangerous substances in products will be reduced and thereby there
will also be a reduction of these substances in waste. At the same
time we know that the use of chemicals has increased in the last
few years. This results in new product groups appearing as special
waste. The Government will make it a priority to ensure that waste
containing hazardous components is classified as special waste so
that it is guaranteed special treatment.
Every year about 30 000 tons of special waste is unaccounted for.
Some of this is in practice looked after in a responsible manner,
but the rest can cause serious pollution if it is mixed with other
municipal waste, poured down drains or into ditches, or otherwise
illegally disposed of. A whole range of measures is necessary in
order to ensure the responsible collection of this waste.
The local authorities have since
1996 been obliged to have sufficient facilities to receive special
waste from smaller generators of waste, but the success of these
measures has been varied. Guidelines have been drawn up for the
local authorities showing how they ought to meet the requirements
for "the existence of sufficient facilities". If the local
authorities do not establish satisfactory schemes the Government
will introduce minimum requirements for the collection of special
waste in the municipalities.
In addition the Government will:
The national management capacity
Through the Basel Convention, Norway
is obliged to limit the transport of special waste abroad to a
minimum. It has therefore been a long-standing goal to establish
sufficient national capacity for the managing of special waste in
Norway. In 1991 the government and industry worked together to
solve this problem and established the company Norsk
avfallshandtering AS (NOAH) (Norwegian Waste Management AS). This
objective was attained with the establishment of NOAH’s plant for
the management of organic special waste in Brevik in 1999.
Sufficient capacity for the management of special waste will also
be taken care of in the future by a licensing practice that is open
to competition in the market for the management of special waste.
At the same time it is important to ensure that at all times the
management of this waste is carried out in a safe and responsible
manner. NOAH will therefore continue to play an important part in
the system for special waste through its obligation to receive and
select appropriate treatment for all types of special waste.
The Government wants:
The Government is assessing firstly
extending industry’s freedom of action concerning the management of
their own waste and secondly extending the responsibility of
manufacturers for special products.
Increased freedom of action for industry
Some industries are today
encompassed by compulsory council refuse schemes. By giving these
industries the possibility of choosing who collects and disposes of
their waste, more flexible solutions can be promoted and lead to a
reduction in waste and an increase in recycling. In the light of
this there is a suggestion for a change in the law. The suggestion
is that the local authorities shall only have the right to and be
obliged to take care of waste from households, whilst the
management of industrial waste will be a matter of supply and
demand, see figure 5. The local authorities will be able to compete
in the same way as other participants in the market for the
management of this waste. The suggestion has been sent out for
consultation, and the parties in question will be contacted so that
all points of view can be considered.
Increased responsibility for the manufacturers
The Government is evaluating
special treatment for specific types of waste where the general
measures are insufficient. A good solution may be increased
producer responsibility, which means that manufacturers
and importers would bear the waste costs for their products. This
would contribute to increased recycling, stimulate waste reduction
and reduce the use of substances in products that are hazardous
to health and the environment.
The Ministry of the Environment has
responsibility for many measures in waste policy, but other
departments also influence the sphere of waste through the methods
used in their sectors. The sector authorities are to have a clear
picture of how the businesses in their sector affect the
environment, set goals and develop measures within their area of
responsibility. The individual sectors are to work out
environmental management plans for each sector involving measures
and methods to contribute to the achieving of the goals in waste
policy.
The Government also has
responsibility for waste arising from its own enterprises. State
enterprises are important both because of their size and because
they give out signals to other businesses in society. It is of
great significance that the state itself behaves in the way that it
wants others to behave. The Government therefore introduced the
project Green State in 1998. The aim of the Green State project is
to reduce the burden on the environment caused by the operation of
state enterprises and form a basis for assessing how the
integration of environmental considerations in the state can best
take place. Reduced amounts of waste and increased separating at
source are amongst the priorities that the enterprises will be
working on, as well as the establishment of an environmentally
conscious purchasing strategy.
In the waste sphere the local
authorities have many possibilities and considerable responsibility
to create solutions which make it easy and profitable for companies
and households to choose what is environmentally correct. Many
local authorities have begun this work by finding good and creative
solutions for waste. Local authorities, local industry,
organisations and inhabitants can work together to create a
stronger local community. In the inset about concentrating on waste
in the Flora municipality we can see that some local authorities
have become forerunners in this work.
A Competence Network for local
Agenda 21 has been set up. The Competence Network is to distribute
information about relevant measures that local authorities can
implement in order to be more environmentally friendly, such as
measures to reduce the volume of waste and the amount of health and
environmentally hazardous chemicals in the waste, as well as
measures to promote reuse and recycling.
Concentrating on waste in the Flora municipality
By using simple, educational
worksheets Flora council has written "recipes" specially adapted
for different enterprises and groups for things such as the
minimising of waste. Hospitals, schools, playschools, shops and
hotels have received tailor made check lists with specific tips on
how they can attack the waste problem. Industry increasingly
understands that its awareness of the environment is a competitive
advantage. The council has also contributed by organising an
infrastructure so that households are also able to separate at
source in the home without having to transport glass, paper and
other types of waste to various recycling stations.
Possible measures in the municipalities in order to promote
the reduction of waste and increased recycling of waste
– information to inhabitants to
build up understanding and motivation
– examination of the council’s own
operations (choice of products, routines, own separating at source,
reuse, repair etc.)
– active use of a waste plan as a
starting point for co-operation between the public and local
industry
– establishment of reusage
workshops
Information is a prerequisite for
increased support for reducing the volume of waste and using
recycling schemes. There are many participants who provide
information in the waste sphere such as return companies and the
local authorities. Co-ordination of information can give profits
and the Government encourages this. At the same time the protection
of the environment authorities will assess how they can contribute
to this work.
Surveys show that people are on the
whole positive about separating their own waste and are good at using
the schemes available in their own municipalities. Eighty percent
of those asked are of the opinion that there are huge environmental
gains to be made through separating at source. At the same time
thirty percent say that they do not have any confidence that waste
is being properly managed. It is therefore important to provide
information so people can see the results of separating and
recycling and be made to feel that it’s worth the effort.
The Government intends to design a
system for presenting the volume of household waste for final
treatment, spread over the municipalities in the country.
The Government wants to continue to
concentrate on the Norwegian Resource Centre for Waste Management
and Recycling (Norsas) as the national competence and information
centre for waste and recycling. On the net Miljøstatus i Norge
(Environmental status in Norway) will be used as an active channel.
The Network for Environmental Education can provide information for
pupils. The environment authorities will also continue their
co-operation with the voluntary organisations. These often have
widespread contacts and can effectively spread their gospel.
International work is important for
solving problems involved in the transport of waste over borders
and for solving global and regional environmental problems caused
by waste. Emissions of environmental toxins and climatic gasses are
not confined by borders. International co-operation will also
ensure a co-ordination of the methods that are used and thereby
contribute to avoiding undesirable competition between countries.
At the same time co-operation gives access to other countries’
experiences and competence. The international work on waste goes on
in a whole chain of forums and organisations.
– The Nordic Council
– The European Economic Area
(EEA-agreement)/European Union (EU)
– The Organisation for Economic
Co-operation and Development (OECD)
– The Basle Convention
– UN’s international shipping
organisation (IMO).
The statistical basis of the waste
sphere is still incomplete. Statistics and knowledge about the
volume of waste, emissions and methods of treatment is important
information that the authorities need for the assessing and forming
of new measures. The quality of waste statistics must be
improved.
There is also a need for more
knowledge about the connection between the development of society,
the use of different measures and the generating of waste. In
addition to this there is a need for more knowledge about a whole
chain of technical situations e.g. in connection with technology
for the extraction of methane gas and several situations concerned
with incineration and the utilisation of energy.
The White Paper deals first and
foremost with how future policy in the waste sphere will be. This
therefore does not cover all the existing methods. In the course of
the last few years a series of methods have been introduced and
measures have been taken to improve the collection and treatment of
waste, both in the local authorities and in industry. The effect of
these measures has been good. It will nonetheless take time before
the total effect is apparent. Technical adjustments that are
required in order to implement certain measures can be time
consuming and it also takes time to alter the routines and habits
of local authorities, industry and people in general. The
suggestions in the report are in addition to the existing schemes.
In order to show the overall picture of waste policy a short
summary is given below of the most important present day methods
that are used in the waste sphere.
Some methods are general and cut
across environmental problems and types of waste while other
methods are aimed directly at specific types of waste or
specialised forms of treatment.
The law states that it is illegal
to pollute, but in certain circumstances exceptions can be made.
The law also forbids dumping and states that the local authorities
can order the "dumper" to clean up. The law for example aims to
reduce the amount of waste and states in its guidelines that the
costs of waste management ought to be borne by those who are
responsible for the waste. Facilities for the treatment of waste
must get permission from the pollution authorities.
The local authorities have
responsibility for collecting and managing consumer waste. It is
the local authorities that regulate how waste collection and
separating at source is to take place locally. They are to work out
waste plans and they set waste tariffs. The tariffs are intended to
cover all the costs of collecting and managing the waste. Many
local authorities comply with the request to differentiate the
tariffs, i.e. have a system where the price depends on the amount
and type of waste that is delivered. In this way it will pay to
deliver less waste and do more separating.
Industry is responsible for whether
the manufacturing waste that arises in a company is recycled or
delivered for safe treatment. In addition industry is given
increasing responsibility for its own products when they end up as
waste.
The duty is to cover the
environmental costs of the final treatment of waste. These are
costs that have not previously been specified. For landfill the
duty is 300 kroner per ton. For incineration there is a basic
charge of 75 kroner per ton waste and an additional charge (225
kroner per ton) which is reduced according to the degree of energy
utilisation. The charges make the final treatment of waste
significantly more expensive and therefore promote the reduction of
waste, increased material recycling and energy utilisation of the
waste.
Special waste has a special hazard
and pollution potential. Therefore this waste is regulated through
a special statute which for example stipulates that everyone who
manages special waste needs permission from the authorities. All
businesses where special waste arises should deliver it at least
once a year to approved facilities. The local authorities are
obliged to ensure that sufficient facilities exist for the
reception of special waste from households and smaller
enterprises.
Paper makes up about 17 percent of
all household and industrial waste. In 1990 three times more paper
was deposited than material recycled, whilst in 1997 the amount of
paper that was material recycled was greater than the amount that
was deposited, see figure 6. The Ministry of the Environment has
made an agreement with Norske Skogindustrier ASA (Norwegian Timber
Industries ASA) where they commit themselves to build a plant for
the recycling of return paper. This plant will be completed in the
first half of 2000.
Each year there is about 144 000
tons of waste from electric and electronic products (EE waste).
Some of this waste is special waste. As the first country in the
world Norway has stipulated statutes that ensure the collection and
managing of this waste. The statute, which came into force on 1st
July 1999, enables consumers to deliver their
waste to dealers that sell similar products and to the local
authorities. Importers and manufacturers of these products have,
for example, responsibility to look after the collection and safe
management of these products. In accordance with an agreement
between the Ministry of the Environment and the EE industry, the
industry is to establish a nation-wide system that will ensure that
80 percent of all EE waste that arises is collected within 5
years.
The wreck deposit scheme for car
wrecks was established in 1978 for vehicles under 3.5 tons. When
purchasing a new car a charge of 1 200 kroner is paid, (suggested
increase to 1 300 kroner in the national budget for 2000). When the
discarded car is delivered to an approved reception facility, the
car owner is given a deposit of 1 500 kroner. This ensures that
about 90 percent of all car wrecks become part of the return
system. About 75 percent of the car wrecks (measured in weight) are
recycled.
Packaging for drinks is regulated
by a tariff system. The tariff is reduced according to how much of
the packaging is returned for reuse or recycling. In addition a
charge accrues per unit of disposable packaging.
Other packaging is regulated
through agreements entered into in 1995 between the Ministry of the
Environment and the packaging industry. The industry is obliged to
work to reduce the amount of packaging waste which arises and to
attain 60-80 percent recycling within the course of 1999. In
accordance with the requirements in the agreements special return
systems have been established for the different types of packaging.
In total nearly 57 percent of packaging waste was material recycled
in 1998 with an additional 11 percent in energy utilisation.
KFK-gasses break down the ozone
layer when they escape into the atmosphere. This substance was
previously used in refrigerating equipment. Safe waste treatment of
this is guaranteed through a separate statute from 1996. According
to the statute the dealers must be willing to receive the
refrigerating equipment. They can then deliver the equipment free
of charge to the local authorities who are obliged to provide
sufficient facilities for its reception and to ensure safe
treatment of the refrigerating equipment so that emissions of KFK
gasses are hindered.
The battery statute stipulates that
dealers must be willing to receive discarded lead batteries free of
charge. Importers of lead batteries are obliged to organise free
collection from the dealers and local authority facilities and to
ensure at least 95 percent recycling. The obligations in the
statute are further guaranteed through an agreement between the
Ministry for the Environment and AS Batteriretur who have committed
themselves to financing and organising a nation-wide system for the
collection and recycling of at least 95 percent of all types of
used lead batteries. Since this scheme was started in 1994 the
collection of batteries has been almost 100 percent.
The tyre statute of 1994 stipulates
a prohibition against the disposal of discarded tyres and gives the
tyre industry responsibility for ensuring the safe collection and
recycling of tyres. Consumers have the right to deliver their
discarded tyres free of charge to tyre dealers, while tyre
manufacturers and importers are obliged to fetch the collected
tyres and ensure the recycling of the same. The statute is
supplemented by an agreement with the industry where they commit
themselves to establishing a nation-wide collection system for
waste tyres. In 1998 86 percent of tyres were collected for
recycling.
Used lubricating oil is called
waste oil and is special waste. Lubricating oil from a number of
different usage areas is taxed. At the same time reimbursement is
given when waste oil that comes from taxable lubricating oil is
delivered. This scheme was introduced in 1994 and has led to the
collection of waste oil increasing from about 54 percent in 1990 to
about 73 percent in 1998. See figure 9. In order to increase the
collection even more it has been suggested to extend the
reimbursement system to give reimbursement for all waste oil,
independent of whether it comes from taxable lubricating oil or not
with the exception of waste oil from ships engaged in foreign
trade.
In addition there are special
statutes regulating individual specialised forms of treatment, for
example the incineration of municipal waste, the incineration of
hazardous waste and the incineration of waste oil. A special
statute has also been stipulated which regulates the export and
import of waste as well as a statute about the registration of
waste management which ensures a central nation-wide waste register.
Publisher: The Ministry of the
Environment
This publication can be ordered
from
Statens forurensingstilsyn (SFT)
(The Norwegian Pollution Control Authority)
Strømsveien 96
Postbox 8100 Dep., 0032 Oslo
Telephone: +47 22 57 34 00 Telefax:
+47 22 67 67 06
Design and layout: Gazette
Illustrations: Trude Tjensvold
Translation: Follo Språkservice
Print: Optimal a.s.
Copies: 3 000/February 2000
This publication is printed on
recycled soft drink cartons.
T 1312 ISBN 82-457-0272-2
Hello everyone, this is my first article here. I have gained a lot from The Code Project, and it is my time to give something back. What is this article about? It is just about a simple control for Excel, so that the position of the arrow in the dashboard will change whenever the value of a cell changes. That means the control is linked with a cell in Excel. From the figure, you can see that whenever the slider changes its position, the arrow changes its position too. That is because the value of cell "A1" is changed by the slider, and because my control is linked to that cell, the position of the arrow in my control changes too. So, don't be confused into thinking that my control is linked with the slider. It is really linked with the cell "A1". It is not a great article or application, but after reading it, I hope that you will learn the following few things:
Why is there such an application? I have been a programmer since my university days, and a professional C++ programmer since graduation. One day, my lovely boss asked me to create a dashboard control in Excel so that Excel users would always be notified by such a control of how their cell values are changing. Useful or not? Personally, I don't think so, but it was a challenge for me. However, for some businesses, like some resource planning software, data is important, and if there are hundreds of values, a manager can't just use his eyes to see which resources are in a critical condition. With the help of a dashboard, however, he can be notified easily and make decisions right away. This application is just the first step. After I created it, I went on to create a traffic light control, which needed multi-threading technology to flash at users for greater attention. I will not post the code of the traffic light here; let's see how readers respond to what is here first. That is all about why there is such a control. I knew nothing about ActiveX, Excel programming, COM, etc., and I could find very little about them on the Internet. So, from the learning phase to the design phase to the implementation phase, it was all stiff work for me and gave me a very hard time. I hope everyone reading this article can benefit from it.
In order to uninstall the control, type "regsvr32 /u ActiveXArrow.ocx" at the command prompt described in step 2.
Before talking about the code, I would like to introduce the structure of the whole control. When you open the project using VC++ 6.0, you will find that there are a lot of classes. For those who are new to ActiveX, this may look strange. In fact, there are only three main classes that implement the main features of the control: "CActiveXArrowCtrl", "CArrowObj", and "CPieForm".
CActiveXArrowCtrl is the main class that handles the drawing of the control. In it you can find the member function:
void CActiveXArrowCtrl::Draw(CDC* pdc, const CRect& rcBounds, CRect* rcClip)
and inside the function, the following three lines of code handle the most complicated drawing of the arrow:
m_pieFormObj.SetGraphic(g);
m_pieFormObj.DrawPie(rcBounds, FALSE, TRUE);
m_pieFormObj.DrawArrow(m_angle, TRUE);
Of course, it is not just that simple. GDI+ draws the image in EMF format, but that is not compatible with the printing mechanism of Excel (Excel can only print images in WMF format). So, we have to find a way to convert the GDI+ image to WMF format. The full code of the Draw function can be found below:
// pdc is the device context of the drawing area, that is,
// what you drag on the excel worksheet
// rcBounds is the rectangle of the drawing area,
// with (0,0) at the top left corner
void CActiveXArrowCtrl::Draw(CDC* pdc, const CRect& rcBounds, CRect* rcClip)
{
// Rect is a GDI+ object
Rect oRect(rcBounds.left, rcBounds.top, rcBounds.right, rcBounds.bottom);
TCHAR lpBuffer[256];
DWORD len = ::GetTempPath(256, lpBuffer);
lpBuffer[len]= '\0';
CString stemp;
stemp.Format(_T("%s"), lpBuffer);
// create the emf file name
CString path = stemp + _T("h") + m_myUID + _T("e.emf");
// create the emf object using the filename
Metafile* myMeta = new Metafile(path, pdc->m_hDC);
{
// create the gdi+ graphic object and draw the image
// on the emf object created just before
Graphics* g = new Graphics(myMeta);
g->SetSmoothingMode(SmoothingModeAntiAlias);
// draw the image
// if the m_BkImage have path exist
{
if(m_BkImage != _T(""))
{
// create the background image from the specified image path
// (m_BkImage store the path of the background image)
Image* img = new Image(m_BkImage.GetBuffer(m_BkImage.GetLength()));
Status st;
st = g->DrawImage(img, oRect);
if(st != Ok)
{
// if fail to create the background img, try to create the background
// using the resources file
Bitmap* img2 = Bitmap::FromResource(AfxGetApp()->m_hInstance,
MAKEINTRESOURCE(IDB_BITMAP_BK));
g->DrawImage(img2, oRect);
delete img2;
}
delete img;
}
// if there is no path exist, just create
// the image from the resources file
else
{
Bitmap* img = Bitmap::FromResource(AfxGetApp()->m_hInstance,
MAKEINTRESOURCE(IDB_BITMAP_BK));
if(!img)
AfxMessageBox(_T("fail to load bitmap"));
g->DrawImage( img, oRect);
}
// succeed to draw the background,
// now, is the time to draw the arrow ...
m_pieFormObj.SetGraphic(g);
m_pieFormObj.DrawPie(rcBounds, FALSE, TRUE);
m_pieFormObj.DrawArrow(m_angle, TRUE);
}
delete g;
}
delete myMeta;
// OK, now, we succeed to draw all the things, however,
// all are in emf format and stored in the file "path"
// we have to load it using GDI method and so, it will be
// in wmf format and excel can print it out ~
// create the Bitmap object from the path
Bitmap mybitmap(path.GetBuffer(path.GetLength()));
// get the bitmap handle
HBITMAP hbm = NULL;
mybitmap.GetHBITMAP(NULL, &hbm);
if(!hbm)
{
// AfxMessageBox(_T("fail to get hbm"));
// if fail to get the handle, mean there is no such file,
// just load a default image from resources
Bitmap* img = Bitmap::FromResource(AfxGetApp()->m_hInstance,
MAKEINTRESOURCE(IDB_BITMAP_BK));
if(!img)
AfxMessageBox(_T("fail to load bitmap"));
// Rect rect2(0, 0, rcBounds.BottomRight().x, rcBounds.BottomRight().y);
// g.DrawImage(
// img,
// rect2);
img->GetHBITMAP(NULL, &hbm);
}
// create a DC, but don't create it in any device context
// but system display
CDC memDC;
memDC.CreateCompatibleDC( NULL );
// re-draw it ... all are straight forward ..
//memDC.SelectObject( &bitmap );
HBITMAP hBmOld = (HBITMAP)::SelectObject( memDC.m_hDC, hbm );
// Get logical coordinates
BITMAP bm;
::GetObject( hbm, sizeof( bm ), &bm );
if(!rcClip)
pdc->StretchBlt(rcBounds.left, rcBounds.top,
rcBounds.Width(), rcBounds.Height(),
&memDC,
0, 0, bm.bmWidth, bm.bmHeight, SRCCOPY);
else
{
pdc->SetStretchBltMode(STRETCH_DELETESCANS);
pdc->StretchBlt(rcClip->left, rcClip->top,
rcClip->Width(), rcClip->Height(),
&memDC, rcClip->left, rcClip->top,
rcClip->Width(), rcClip->Height(), SRCCOPY);
}
::SelectObject( memDC.m_hDC, hBmOld );
::DeleteObject(hbm);
// note: hBmOld is the DC's original stock bitmap and must not be deleted,
// and memDC is cleaned up automatically by the CDC destructor
}
When I first designed the project, I was thinking about which graphics library I should use. With MFC, DirectX, OpenGL, GDI, or GDI+ can all be employed. Finally, I chose GDI+, as what I wanted to show the user was at most just a transparent arrow. In using GDI+, I benefited from one of the CodeProject contributors (Author: Ryan Johnston, see article). He does all the troublesome work of initializing the GDI+ library for us. Thanks a lot. To initialize the GDI+ library, all we have to do is create a member variable using his class and call just a few lines of code, as shown below:
// ... in stdafx.h, declare sth below
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")
using namespace Gdiplus;
// In class declaration
#include "GDIpInitializer.h"
class CActiveXArrowCtrl : public COleControl
{
public:
CGDIpInitializer m_gdip;
...
};
// In class definition
CActiveXArrowCtrl::CActiveXArrowCtrl()
{
...
m_gdip.Initialize();
...
}
CActiveXArrowCtrl::~CActiveXArrowCtrl()
{
m_gdip.Deinitialize();
}
// then, you can see that I declared
// a graphic object at the ::Draw function
void CActiveXArrowCtrl::Draw(CDC* pdc,
const CRect& rcBounds, CRect* rcClip)
{
...
Graphics* g = new Graphics(myMeta);
// this is the gdi+ graphic library
...
}
It is simple. I benefited from a book called "ActiveX Inside Out" (something like that, I forgot the exact name). It is a very good book, and for anyone who wants to learn ActiveX, I recommend it. OK, below are the steps to create an ActiveX application using MFC 6.0. The procedure is specific to this application only; for other kinds of ActiveX controls, there may be some differences.
If you succeed in creating your "MyFirstTest" ActiveX control, you can try right-clicking the control and choosing "Properties". You will find that there are some default properties. However, you will not see the "LinkedCell" property shown in my control's properties list, nor other properties like "Max", "Min", etc. In order to add custom properties, you have to follow the steps below:
1. Open the ClassWizard (Ctrl+W) and go to the "Automation" tab.
2. Make sure your control class is selected, then click "Add Property...".
3. Type the external name of the property, for example "Max", and choose its type, for example long.
4. Choose "Get/Set methods" as the implementation so that you get callback functions, then click OK.
5. Repeat for the other properties ("Min", "Value", "LinkedCell", etc.) and rebuild the control.
That is it. Whenever a property's value changes (for example, the "Value" property), the corresponding Set method will be called (in fact, the Set method is a callback function). Therefore, the programmer should write code there to handle the change of the property's value so that the drawing is updated live. To do so, we can just add a call to "InvalidateControl()" to force the control to redraw itself with all the new values.
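As a framework-free illustration of this pattern, here is a sketch with the MFC machinery stripped out. The class, the member names, and the value-to-angle mapping are all invented stand-ins rather than the control's real code; the point is only that the setter stores the value, recomputes derived state, and calls InvalidateControl():

```cpp
#include <algorithm>
#include <cassert>

// Stand-in for the ActiveX control; only the property-setter pattern matters.
class ArrowControlSketch
{
public:
    void SetValue(long nNewValue)
    {
        // clamp the incoming value into the [Min, Max] range
        m_value = std::min(std::max(nNewValue, m_min), m_max);
        // recompute derived state: map the value onto a 180-degree dial
        m_angle = 180.0 * (m_value - m_min) / double(m_max - m_min);
        // in the real control this is COleControl::InvalidateControl(),
        // which triggers a repaint through Draw()
        InvalidateControl();
    }

    long   Value() const       { return m_value; }
    double Angle() const       { return m_angle; }
    bool   NeedsRedraw() const { return m_needsRedraw; }

private:
    void InvalidateControl() { m_needsRedraw = true; }

    long   m_min = 0, m_max = 100, m_value = 0;
    double m_angle = 0.0;
    bool   m_needsRedraw = false;
};
```

In the real control, the repaint then happens inside the Draw function shown earlier.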
OK, here is a point of interest. I like mathematics and physics very much. However, in HK, it was hard for me to choose the path of a pure science student. The reason why I chose computing is that... I didn't even know how to turn off a PC when I was in my last year of high school...
The circle above is the simple layout of the dashboard. What is interesting is that it will not always be a circle: it can be an ellipse when the user drags the control out into a rectangle. So, by using the standard ellipse formula, we can calculate the "a" and "b" values and pass them to the GDI+ function to draw the ellipse.
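As a small sketch of that calculation (the struct and function names are my own, and I am assuming the semi-axes come straight from the control's bounding rectangle):

```cpp
#include <cassert>

// Ellipse (x-cx)^2/a^2 + (y-cy)^2/b^2 = 1 fitted into a bounding rectangle.
struct EllipseParams { double cx, cy, a, b; };

EllipseParams FromBounds(int left, int top, int right, int bottom)
{
    EllipseParams e;
    e.a  = (right - left) / 2.0;  // semi-axis along x
    e.b  = (bottom - top) / 2.0;  // semi-axis along y
    e.cx = left + e.a;            // center of the dial
    e.cy = top  + e.b;
    return e;
}
```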
For the arrow, the rule is that the arrow angle θ is always kept constant. As a programmer, I have to know three points in order for the GDI+ function to draw the arrow: the tip point (Px, Py) and the two tangent points (Ux, Uy) (note that there are two such (Ux, Uy) points). So, what are the known values here, and what are the unknowns?
Known values: the arrow tip (Px, Py), the center (Cx, Cy), the arrow side length L, and the constant arrow angle θ.
Unknown values: the two tangent points (Ux, Uy).
So, after long calculation, I derived the following equation:
Uy = Py * sin²(θ/2) + Cy * cos²(θ/2) ± (1/2) * sin(θ) * √(L² – (Cy – Py)²)
How about Ux? I leave it to you as an exercise here ...
That's all, thank you.
There are a number of ways to implement the effect of fog with modern real time rendered graphics. This blog post will explain how to render fog that has varying density, based on a function of X,Y,Z location in space, like in the picture above.
Faked Fog
One way is to “fake it” and do something like set the color of a pixel on an object to be based on it’s height. For instance you might say that pixels with a y axis value above 15 are unfogged, pixels with y axis values between 15 and 10 progressively get more fogged as they get closer to 10, and pixels with y axis values less than 10 are completely fogged. That can make some fog that looks like this:
A strange side effect of doing that, though, is that if you go down "into" the fog and look out of it, things that should be fogged won't be. For instance, looking up at a mountain from inside the fog, the mountain won't be fogged at all, even though it should be, because you are inside of the fog.
A better way to do it, if you intend for the camera to be able to go into the fog, is to calculate a fogging amount for a pixel based on how far away it is from the view point, and how dense the fog is between the view point and the destination point.
If you are doing ray based rendering, like ray tracing or ray marching, you might find yourself trying to find how much fog is between points that don’t involve the view point – like if you are calculating the reflection portion of a ray. In this case, you are just finding out how much fog there is between the point where the reflection happened and the closest intersection. You can consider the point of reflection as the “view point” for the purpose of fogging.
Sometimes, the entire scene might not be in fog. In this case, you have to find where the fog begins and ends, instead of the total distance between the view point and the destination point.
In any case, the first thing you need to do when fogging is figure out the point where the fog begins, and the point where the fog ends. Then, you can figure out how much fog there is based on how the fog density works.
Constant Density Fog
The simplest sort of fog is fog that has the same density all throughout it.
What you do in this case is just multiply the fog density by the distance spent in the fog to come up with a final fog value.
As an example, your fog density might be “0.04” and if you are fogging a pixel 10 units away, you multiply density by distance. FogAmount = 0.04 * 10.0 = 0.4.
Doing this, you know the pixel should be 40% fogged, so you interpolate the pixel’s color 40% towards the fog color. You should make sure to clamp the fog amount to be between 0 and 1 to avoid strange visual anomolies.
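A minimal sketch of that in code (the function names are my own; the comment mirrors the 0.04 * 10.0 example above):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Constant-density fog: amount = density * distance, clamped to [0, 1].
float ConstantFog(float density, float distance)
{
    float fog = density * distance;        // e.g. 0.04 * 10.0 = 0.4
    return std::min(1.0f, std::max(0.0f, fog));
}

// Move one color channel of the pixel toward the fog color by the fog amount.
float ApplyFog(float pixelChannel, float fogChannel, float fogAmount)
{
    return pixelChannel + (fogChannel - pixelChannel) * fogAmount;
}
```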
The image below shows a constant fog density of 0.04.
Here’s an image of the same constant density fog as viewed from inside the fog:
A problem with constant fog density though, is that if you view it from edge on, you’ll get a very noticeable hard edge where the fog begins, like you can see in the image below:
Linear Density Fog
With linear fog density, the fog gets denser linearly, the farther you go into the fog.
With a fog plane, you can get the density of the fog for a specified point by doing a standard “distance from plane to point” calculation and multiplying that by how much the fog density grows per unit of distance. If your plane is defined by A*x+B*y+C*y+D = 0, and your point is defined as X,Y,Z, you just do a dot product between the plane and the point, giving the point a W component of one.
In other words…
FogDensity(Point, Plane) = (Plane.NormX * Point.X + Plane.NormY * Point.Y + Plane.NormZ * Point.Z + Plane.D * 1.0) * FogGrowthFactor
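Here is a sketch of that in code (the names are my own). For the total fog along a segment, note that because the density varies linearly, the area under the density graph is a trapezoid: the distance travelled times the average of the two endpoint densities. This sketch assumes both endpoints are on the foggy side of the plane:

```cpp
#include <cassert>
#include <cmath>

struct Plane  { float nx, ny, nz, d; }; // plane A*x + B*y + C*z + D = 0, unit normal
struct Point3 { float x, y, z; };

// Fog density at a point: signed plane distance times the growth factor,
// i.e. the dot product described above with the point's w component set to 1.
float FogDensity(const Plane& p, const Point3& pt, float growthFactor)
{
    return (p.nx * pt.x + p.ny * pt.y + p.nz * pt.z + p.d * 1.0f) * growthFactor;
}

// Fog amount along a segment that stays inside the fog: trapezoid area.
float LinearFogAmount(const Plane& p, const Point3& a, const Point3& b, float growthFactor)
{
    float dA = FogDensity(p, a, growthFactor);
    float dB = FogDensity(p, b, growthFactor);
    float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    float dist = sqrtf(dx * dx + dy * dy + dz * dz);
    float fog = dist * (dA + dB) * 0.5f;
    return fog < 0.0f ? 0.0f : (fog > 1.0f ? 1.0f : fog);
}
```

A segment that crosses the fog plane would first need to be clipped to the entry point, as described earlier.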
Here’s a picture of linear fog with a fog growth factor of 0.01:
The same fog viewed from the inside:
And lastly, the fog viewed edge on to show that the “hard line” problem of linear fog is gone (dramatic difference isn’t it?!):
Analytic Fog Density – Integrals
Taking a couple of steps further, you might want to use equations to define fog density with some function FogDensity = f(x,y,z).
How could you possibly figure out how much fog there is between two given points when the density between them varies based on some random function?
One way would be to take multiple samples along the line segment between the view point and the destination point, and either calculate the fog amount in each section, or maybe average the densities you calculate and multiply the result by the total distance. You might have to take a lot of samples to make this look correct, causing low frame rate, or accepting low visual quality as a compromise.
If you look at the graphs for the previous fog types, you might notice that we are trying to find the area under the graphs between points A and B. For constant density fog, the shape is a rectangle, so we just multiply width (time in fog) by height (the constant fog density) to get the fog amount. For linear density fog, the shape is a trapezoid, so we use the trapezoid area formula which is height (in this case, the distance in the fog) times the sum of the base lengths (the fog densities at points A and B) divided by 2.
How can we get the area under the graph between A and B for an arbitrary formula though? Well, a way exists luckily, using integrals (thanks to my buddy “Danny The Physicist” for educating me on the basics of integrals!).
There’s a way to transform a formula to get an “indefinite integral”, which itself is also a formula. I won’t go into the details of how to do that, but you can easily get the indefinite integral of a function by typing it into Wolfram Alpha.
Once you have the indefinite integral (let's call it G(x)) of the fog density formula (let's call it F(x)), if you calculate G(B) – G(A), that will give you the area under the graph of F(x) between A and B. Yes, seriously, that gives us the area under the graph between our points, thus giving us the amount of fog that exists between the two points for an arbitrary fog density function!
Note that when you evaluate the indefinite integral at the two endpoints and subtract, as in G(B) – G(A), the resulting number is called the definite integral.
Analytic Fog Density – Implementation Details
Now that the theory is worked out let’s talk about implementation details.
First off, coming from an additive audio synthesis type of angle, I figured I might have some good luck adding together sine waves of various frequencies and amplitudes, so I started with this:
sin(x*F) * A
F is a frequency multiplier that controls how long the sine wave is. A is an amplitude multiplier that controls the maximum density the fog reaches.
Next, I knew that I needed a fog density function that never goes below zero, because that would mean if you looked through a patch of negative fog density, it would make the other fog you were looking through be less dense. That is just weird, and doesn’t exist in reality (but maybe there is some interesting visual effect hiding in there somewhere??), so the formula evolved to this, making sure the function never went below zero:
(1 + sin(x*F)) * A
Plugging that equation into wolfram alpha, it says the indefinite integral is:
(x – (cos(x*F)) / F) * A
You can check that out here:
Wolfram Alpha: (1 + sin(x*F)) * A.
It’s also kind of fun to ask google to graph these functions so you can see what they do to help understand how they work. Here are the graphs for A = 0.01 and F = 0.6:
Fog Density: graph (1 + sin(x*0.6)) * 0.01
Indefinite Integral: graph (x – (cos(x*0.6)) / 0.6) * 0.01
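It is also easy to sanity-check the pair of formulas above numerically: the difference of the indefinite integral at two endpoints should match a brute-force Riemann sum of the density. A quick Python check (mine, not part of the original post), using the same A = 0.01 and F = 0.6:

```python
import math

A, F = 0.01, 0.6

def density(x):
    # fog density along one axis: (1 + sin(x*F)) * A
    return (1 + math.sin(x * F)) * A

def G(x):
    # indefinite integral of the density (per Wolfram Alpha): (x - cos(x*F)/F) * A
    return (x - math.cos(x * F) / F) * A

a, b = 2.0, 7.5
analytic = G(b) - G(a)

# midpoint Riemann sum of the density over [a, b]
n = 100_000
dx = (b - a) / n
numeric = sum(density(a + (i + 0.5) * dx) for i in range(n)) * dx

assert abs(analytic - numeric) < 1e-6  # the two agree
```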
So, if you have point A and B where the fogging begins and ends, you might think you can do this to get the right answer:
FogAmount = G(B.x) – G(A.x)
Nope! There’s a catch. That would work if A and B had no difference on the y or z axis, but since they probably do, you need to jump through some hoops. In essence, you need to stretch your answer across the entire length of the line segment between A and B.
To do that, firstly you need to get that fog amount down to unit length. You do that by modifying the formula like so:
FogAmount = (G(B.x) – G(A.x)) / (B.x – A.x)
This also has a secondary benefit of making it so that your fog amount is always positive (so long as your fog density formula F(X) can’t ever go negative!), which saves an abs() call. Making it always positive ensures that this works when viewing fog both from the left and the right.
Now that we have the fog amount down to unit length, we need to scale it to be the length of the line segment, which makes the formula into this:
FogAmount = (G(B.x) – G(A.x)) * Length(B-A)/(B.x – A.x)
That formula will now give you the correct fog amount.
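For one axis in 3D, the stretching logic above can be sketched like this (my illustrative Python translation of the formula, with arbitrary endpoint values; it assumes the segment is not exactly perpendicular to the x axis, i.e. B.x != A.x):

```python
import math

A, F = 0.01, 0.6  # amplitude and frequency for the x-axis density

def G(x):
    # indefinite integral of (1 + sin(x*F)) * A
    return (x - math.cos(F * x) / F) * A

def fog_amount_x(a, b):
    """Fog along the 3D segment a -> b, from the x-axis density term only."""
    unit = (G(b[0]) - G(a[0])) / (b[0] - a[0])  # area normalized to unit length
    return unit * math.dist(a, b)               # stretch across the real segment

a = (1.0, 0.0, 0.0)
b = (4.0, 2.0, 2.0)
print(fog_amount_x(a, b))
```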
But, one axis of fog wasn't enough to look very good, so I wanted to make sure and do one sine wave on each axis. I used 0.01 amplitude for each axis, but for the X axis I used a frequency of 0.6, for the Y axis I used a frequency of 1.2 and for the Z axis I used a frequency of 0.9.
Also, I wanted to give a little bit of baseline fog, so I added some constant density fog in as well, with a constant density of 0.1.
As a bonus, I also gave each axis a “movement factor” that made the sine waves move over time. X axis had a factor of 2.0, Y axis had a factor of 1.4 and Z axis had a factor of 2.2.
Putting all of this together, here is the final fog equation (GLSL pixel shader code) for finding the fog amount between any two points at a specific point in time:
//=======================================================================================
float DefiniteIntegral (in float x, in float amplitude, in float frequency, in float motionFactor)
{
    // Fog density on an axis:
    // (1 + sin(x*F)) * A
    //
    // indefinite integral:
    // (x - cos(F * x)/F) * A
    //
    // ... plus a constant (but when subtracting, the constant disappears)
    x += iGlobalTime * motionFactor;
    return (x - cos(frequency * x) / frequency) * amplitude;
}

//=======================================================================================
float AreaUnderCurveUnitLength (in float a, in float b, in float amplitude, in float frequency, in float motionFactor)
{
    // we calculate the definite integral at a and b and get the area under the curve
    // but we are only doing it on one axis, so the "width" of our area bounding shape is
    // not correct.  So, we divide it by the length from a to b so that the area is as
    // if the length is 1 (normalized... also this has the effect of making sure it's positive
    // so it works from left OR right viewing).  The caller can then multiply the shape
    // by the actual length of the ray in the fog to "stretch" it across the ray like it
    // really is.
    return (DefiniteIntegral(a, amplitude, frequency, motionFactor) - DefiniteIntegral(b, amplitude, frequency, motionFactor)) / (a - b);
}

//=======================================================================================
float FogAmount (in vec3 src, in vec3 dest)
{
    float len = length(dest - src);

    // calculate base fog amount (constant density over distance)
    float amount = len * 0.1;

    // calculate definite integrals across axes to get moving fog adjustments
    float adjust = 0.0;
    adjust += AreaUnderCurveUnitLength(dest.x, src.x, 0.01, 0.6, 2.0);
    adjust += AreaUnderCurveUnitLength(dest.y, src.y, 0.01, 1.2, 1.4);
    adjust += AreaUnderCurveUnitLength(dest.z, src.z, 0.01, 0.9, 2.2);
    adjust *= len;

    // make sure and not go over 1 for fog amount!
    return min(amount + adjust, 1.0);
}
More Info
I ended up only using one sine wave per axis, but I think with more sine waves, or perhaps different functions entirely, you could get some more convincing looking fog.
At some point in the future, I’d like to play around with exponential fog density (instead of linear) where the exponential power is a parameter.
I also think that maybe squaring the sine waves could make them have sharper density changes perhaps…
One thing that bugs me in the above screenshots is the obvious “hard line” in both constant and linear fog where it seems fog crosses a threshold and gets a lot denser. I’m not really sure how to fix that yet. In traditional rasterized graphics you could put the fog amount on a curve, to give it a smoother transition, but in ray based rendering, that could make things a bit odd – like you could end up with an exponential curve butting up against the start of a different exponential curve (due to reflection or refraction or similar). The fog density would end up looking like log graph paper which would probably not look so great – although honestly I haven’t tried it to see yet!
If you have any questions, or feedback about improvements you know about or have discovered in any of the above, post a comment and let me know!
Here’s a good read on fog defined by a plane, that also gets into how to make branchless calculations for the fog amounts.
Unified Distance Formulas for Halfspace Fog
Interactive ShaderToy.com demo with GLSL pixel shader source code that you can also edit in real time with WebGL.
In this guide, you'll learn how to log data with the ESP8266 NodeMCU to the Firebase Realtime Database. If you're new to Firebase with the ESP8266, see: ESP8266 NodeMCU: Getting Started with Firebase (Realtime Database).
- The ESP8266 gets temperature, humidity and pressure from the BME280 sensor.
- It gets epoch time right after getting the readings (timestamp).
We'll program the ESP8266 board using the Arduino core, so make sure you have the ESP8266 add-on installed in your Arduino IDE.
6) ESP8266 Datalogging (Firebase Realtime Database)
In this section, we'll program the ESP8266 board (read best ESP8266 boards comparison).
Wire the BME280 sensor to the ESP8266 SCL (GPIO 5 (D1)) and SDA (GPIO 4 (D2)) pins, as shown in the following schematic diagram.
Not familiar with the BME280 with the ESP8266? Read this tutorial first.

First, include the required libraries: the ESP8266WiFi.h library to connect the ESP8266 to the internet, the Firebase_ESP_Client.h library to interface the board with Firebase, the Wire, Adafruit_Sensor, and Adafruit_BME280 libraries to interface with the BME280 sensor, and the NTPClient and WiFiUdp libraries to get the time.

#include <Arduino.h>
#include <ESP8266WiFi.h>
#include <Firebase_ESP_Client.h>
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BME280.h>
#include <NTPClient.h>
#include <WiFiUdp.h>
Define the NTP client to get time:
// Define NTP Client to get time
WiFiUDP ntpUDP;
NTPClient timeClient(ntpUDP, "pool.ntp.org");
The timestamp variable will be used to save time (epoch time format).
int timestamp;
To learn more about getting epoch time with the ESP8266, see the dedicated tutorial.
Then, create an Adafruit_BME280 object called bme. This automatically creates a sensor object on the ESP8266 default I2C pins.

Adafruit_BME280 bme;

A helper function, here called getTime(), returns the current epoch time:

unsigned long getTime() {
  timeClient.update();
  unsigned long now = timeClient.getEpochTime();
  return now;
}
setup()
In the setup(), initialize the Serial Monitor for debugging purposes at a baud rate of 115200.
Serial.begin(115200);
Call the initBME() function to initialize the BME280 sensor.
initBME();
Call the initWiFi() function to initialize WiFi.
initWiFi();
Initialize the time client:
timeClient.begin();

Learn more about the ESP8266 NodeMCU with our other resources.
Thanks for reading.
11 thoughts on “ESP8266 NodeMCU Data Logging to Firebase Realtime Database”
I wish this also had how to do this with deep sleep enabled
Great code, as long as the esp8266+BME280 of the device is placed in the detection environment, I can run Firebase real-time data anywhere, watch the detection: temperature/humidity/atmospheric pressure, thank you for sharing the code👍
Hi,
Excellent tutorial!
I tried replicating it on my NodeMCU (ESP8266), but I got an error.
../Arduino/libraries/Firebase_Arduino_Client_Library_for_ESP8266_and_ESP32/src/Utils.h:1301:68: error: ‘schedule_function’ was not declared in this scope
{ schedule_function(callback); });
I don’t know where this function was supposed to be declared.
Could you help me?
Thanks!
Hi.
Are you using Arduino IDE or VS Code?
What is the version of the library that you are using?
Regards,
Sara
Hi.
I am using the Arduino IDE 1.8.19 and Firebase Arduino Client Library for ESP8266 and ESP32 ver. 3.0.1
Try downgrading the library to version 2.5.5 and check if that solves the issue.
Regards,
Sara
It worked just fine with version 2.5.5!
Thank you!
Regards,
Ricardo
I want to use Arduino Nano 33 BLE sense for this project, just wondering if I can use Firebase Arduino Client Library for ESP8266 and ESP32 for this board ?
Hi.
I don’t think it is compatible.
But you can try it and then you'll see.
Hi, great code! I am not using authentication for my project but I do need the timestamps in my sensor readings. How can I add them?
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
I'm writing a visualiser and need to save and load settings. Here is a quick save file function.
still working on the string save function and open function.
Got this from Stack Overflow; it works to save a file to Android internal storage (SD card).
import android.os.Environment; // need this for Environment Android stuff
public void SaveFile() {
  try {
    String filename = "visualiser.txt";
    String directory = new String(Environment.getExternalStorageDirectory().getAbsolutePath());
    save(directory + "/" + filename);
  } catch (Exception e) {
  }
}
Just call SaveFile(); or whatever you want to call it... you will need to set sketch permissions to WRITE_EXTERNAL_STORAGE for file I/O.
Anyone know of a simple method to read/write/load a file into a string?
I've seen a few methods but they seem overly complicated (createbuffer, writebuffer, etc.).
I'm no expert in Android or Java so don't ask me how this works, but it's sort of easy to follow (declare name / find path / save file / catch error, etc.).
Answers
added a bit more to save any value using str() to string.
public void SaveFile() {
  try {
    String filename = "visualiser.txt";
    String directory = new String(Environment.getExternalStorageDirectory().getAbsolutePath());
    String words = "Zmod"+str(Zmod)+"Mmod"+str(Mmod)+"Smod"+str(Smod)+"Fmod"+str(Fmod);
    String[] list = split(words, ' ');
    saveStrings(directory + "/" + filename, list); // (likely the missing line: write the list out)
  } catch (Exception e) {
  }
}
This saved the file on the internal SD card with my presets... I don't know why this won't post code properly on here, as usual.
import android.os.Environment;

public void SaveFile() {
  try {
    String filename = "visualiser.txt";
    String directory = new String(Environment.getExternalStorageDirectory().getAbsolutePath());
    save(directory + "/" + filename);
  } catch (Exception e) {
  }
}

public void LoadFile() {
  try {
    String filename = "visualiser.txt";
    String directory = new String(Environment.getExternalStorageDirectory().getAbsolutePath());
    String[] load = loadStrings(directory + "/" + filename);
    for (LC = 0; LC < load.length; LC++) {  // for loop: load to global array
      dataload[LC] = load[LC];  // dataload global string, load local string
    }
    dataloadlength = load.length;  // this is the number of lines
  } catch (Exception e) {}
}
and that's it SAVEFILE WRITEFILE LOADFILE Android....took me a day but got it at last.....!!!!
It still won't post code correctly on here, don't know why, but this works great; tested on Android 4.1.
THAT'S EVIDENT
Please format your code | https://forum.processing.org/two/discussion/13877/android-savefile-openfile-writefile-processing-3-0-1-the-easy-way | CC-MAIN-2021-10 | refinedweb | 345 | 61.43 |
Created on 2014-10-29 10:14 by Tim.Graham, last changed 2016-07-14 03:37 by python-dev. This issue is now closed.
I noticed some failing Django tests on Python 3.2.6 the other day. The regression is caused by this change:
Behavior before that commit (and on other versions of Python even after that commit):
>>> from http.cookies import SimpleCookie
>>> SimpleCookie("Set-Cookie: foo=bar; Path=/")
<SimpleCookie: foo='bar'>
New broken behavior on Python 3.2.6:
>>> from http.cookies import SimpleCookie
>>> SimpleCookie("Set-Cookie: foo=bar; Path=/")
<SimpleCookie: >
Python 3.2.6 no longer accepts the "Set-Cookie: " prefix to BaseCookie.load:
>>> SimpleCookie("Set-Cookie: foo=bar; Path=/")
<SimpleCookie: >
>>> SimpleCookie("foo=bar; Path=/")
<SimpleCookie: foo='bar'>
This issue doesn't affect 2.7, 3.3, or 3.4 because of a later fix (that commit wasn't backported to 3.2 because that branch is in security-fix-only mode).
I asked Berker about this and he suggested to create this issue and said, "If Georg is OK to backout the commit I can write a patch with additional test cases and commit it."
He also confirmed the regression as follows:
I've tested your example on Python 2.7.8, 3.2.6, 3.3.6, 3.4.2, 3.5.0 (all unreleased development versions - they will be X.Y.Z+1) and looks like it's a regression.
My test script is:
try:
from http.cookies import SimpleCookie
except ImportError:
from Cookie import SimpleCookie
c = SimpleCookie("Set-Cookie: foo=bar; Path=/")
print(c)
Here are the results:
Python 2.7.8:
Set-Cookie: foo=bar; Path=/
Python 3.5.0:
Set-Cookie: foo=bar; Path=/
Python 3.4.2:
Set-Cookie: foo=bar; Path=/
Python 3.3.6:
Set-Cookie: foo=bar; Path=/
[45602 refs]
Python 3.2.6:
[38937 refs]
Is it a normal use of SimpleCookie? The docs don't seem to imply it:
"""
>>> C = cookies.SimpleCookie()
>>> C.load("chips=ahoy; vienna=finger") # load from a string (HTTP header)
"""
In any case, it's up to Georg to decide. But changeset 572d9c59a1441c6f8ffb9308824c804856020e31 fixes a security issue reported to security@python.org (the report included a concrete example of how to exploit it under certain conditions).
Can you give a pointer to the failing Django test, by the way?
I wasn't sure if it was expected behavior or not. I'm attaching a file with the list of failing tests on Django's master.
Perhaps more useful is a reference to the problematic usage in Django:
That logic was added to fix an earlier issue.
Ah, so it's about round-tripping between SimpleCookie.__str__() and SimpleCookie.__init__(). That sounds like a reasonable behaviour to preserve (and easier than parsing arbitrary Set-Cookie headers). IMO we should also add for tests for it in other versions.
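For anyone following along, the round-trip in question is easy to reproduce standalone (a minimal sketch, not Django's actual code):

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "abc123"
c["session"]["path"] = "/"

# Serialize, then feed the header string back into the constructor
header = c.output(header="", sep="; ").strip()
restored = SimpleCookie(header)

assert restored["session"].value == "abc123"
assert restored["session"]["path"] == "/"
```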
OK, so there are two root issues here:
* was supposed to prevent.)
* BaseCookie doesn't roundtrip correctly when pickled with protocol >= 2. This should be fixed in upcoming bugfix releases.
I would advise Django to subclass SimpleCookie and fix the pickling issue, which is not hard (see attached diff).
Thank-you Georg; I believe I was able to fix some of the failures by patching Django as you suggested.
However, I think I found another issue due to #16611 (support for httponly/secure cookies) not being backported to Python 3.2. The issue is that any cookies that appear after one that uses httponly or secure are dropped:
>>> from http.cookies import SimpleCookie
>>> c = SimpleCookie()
>>> c['a'] = 'b'
>>> c['a']['httponly'] = True
>>> c['d'] = 'e'
>>> out = c.output(header='', sep='; ')
>>> SimpleCookie(out)
<SimpleCookie: a='b'>
Here's another example using the 'domain' option to show the same flow from above working as expected:
>>> c = SimpleCookie()
>>> c['a'] = 'b'
>>> c['a']['domain'] = 'foo.com'
>>> c['d'] = 'e'
>>> out = c.output(header='', sep='; ')
>>> SimpleCookie(out)
<SimpleCookie: a='b' d='e'>
It seems to me this may warrant backporting httponly/secure support to Python 3.2 now that cookie parsing is more strict (unless Django is again relying on incorrect behavior).
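(For reference, on versions that already have the #16611 parser support, the round-trip keeps morsels that follow an httponly one; a quick check, written against a current Python 3:)

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["a"] = "b"
c["a"]["httponly"] = True
c["d"] = "e"

restored = SimpleCookie(c.output(header="", sep="; "))

# With the parser fix, 'd' is no longer dropped after the HttpOnly morsel
assert sorted(restored.keys()) == ["a", "d"]
assert restored["d"].value == "e"
```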
Thanks, this is indeed a regression.
FYI, I created #22775 and submitted a patch for the issue that SimpleCookie doesn't pickle properly with HIGHEST_PROTOCOL.
Georg, how do want to proceed with this issue? Should we backport #16611 (support for parsing secure/httponly flag) to 3.2 to fix this regression and then create a separate issue to fix the lax parsing issue on all versions?
That seems like the best course of action.
The patch from #16611 applies cleanly to 3.2. I added a mention in Misc/NEWS and confirmed that all tests pass.
I also created #22796 for the lax parsing issue.
Patch updated to fix conflict in NEWS. Could we have it committed to ensure it gets fixed in the next 3.2 release?
Patch rebased again after cookie fix from #22931.
Given the inactivity here, I guess the patch won't be applied before Python 3.2 is end-of-life so I'm going to close the ticket.
I will commit this to the 3.2 branch today.
My understanding is that there is a commit hook that prevents pushing to the 3.2 branch, so that Georg needs to do this. I've applied the patch and run the tests myself, and agree that it passes (as in, none of the test failures I see are related to cookies). This isn't set to release blocker...should it be (ie: since this is the last release do we want to make sure it gets in)?
New changeset d22fadc18d01 by R David Murray in branch '3.2':
#22758: fix regression in handling of secure cookies.
Oops. I guess there's no commit hook after all.
New changeset 1c07bd735282 by R David Murray in branch '3.3':
#22758 null merge
New changeset 5b712993dce5 by R David Murray in branch '3.4':
#22758 null merge
New changeset 26342c9e8c1d by R David Murray in branch '3.5':
#22758 null merge
New changeset ce140ed0a56c by R David Murray in branch 'default':
#22758 null merge
New changeset a0bf31e50da5 by Martin Panter in branch '3.2':
Issue #22758: Move NEWS entry to Library section | https://bugs.python.org/issue22758 | CC-MAIN-2021-31 | refinedweb | 1,023 | 68.36 |
OpenShift makes it easy to deploy your containers, but it can also impact your development cycle. The core problem is that containers running in a Kubernetes cluster are running in a different environment than the development environment, for example, on your laptop. Your container may talk to other containers running in Kubernetes or rely on platform features like volumes or secrets, and those features are not available when running your code locally.
So, how can you debug them? How can you get a quick code/test feedback loop during the initial development?
In this blog post we'll demonstrate how you can have the best of both worlds, that is, the OpenShift runtime platform and the speed of local development. We will be using an open source tool called Telepresence.
Preparation
Before you begin, you will need to:
- Install Telepresence.
- Make sure you have the oc command line tool installed.
- Have access to a Kubernetes or OpenShift cluster, for example, using Minishift.
Running in OpenShift
Let's say you have an application running inside OpenShift; you can start one like so:
$ oc new-app --docker-image=datawire/hello-world --name=hello-world
$ oc expose service hello-world
You'll know it's running once the following shows a pod with
Running status that doesn't have "deploy" in its name:
$ oc get pod | grep hello-world
hello-world-1-hljbs 1/1 Running 0 3m
To find the address of the resulting app, run the following command:
$ oc get route hello-world
NAME HOST/PORT
hello-world example.openshiftapps.com
In the above output the address is example.openshiftapps.com, but you will get a different value. It may take a few minutes before the route goes live; in the interim you will get an error page. If you do, wait a minute and try again. Once it's running you can send a query and get back a response:
$ curl example.openshiftapps.com
Hello, world!
Remember in the above setup that you need to substitute the real address for that to work.
Local development
The source code for the above service looks mostly like the code below, except that this code has been modified slightly. It now has a new version which returns a different response. Create a file called
helloworld.py on your machine with this code:
from http.server import BaseHTTPRequestHandler, HTTPServer
class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write(b"Hello, world, I am changing!\n")
        return

httpd = HTTPServer(('', 8000), RequestHandler)
httpd.serve_forever()
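As a quick aside (not in the original post), you can smoke-test this handler entirely locally before swapping it into the cluster:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write(b"Hello, world, I am changing!\n")

# Port 0 asks the OS for any free port, so this can't collide with port 8000
httpd = HTTPServer(('127.0.0.1', 0), RequestHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % httpd.server_address[1]
body = urllib.request.urlopen(url).read()
print(body)  # b'Hello, world, I am changing!\n'
httpd.shutdown()
```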
Typically, testing this new version of your code would require pushing to upstream, rebuilding the image, redeploying the code, and so on and this can take a bit. With Telepresence, you can just run a local process and route traffic to it, allowing you a quick develop/test cycle without going through slow deploys.
We'll swap out the
hello-world deployment for a Telepresence proxy, and then run our updated server locally in the resulting shell:
$ telepresence --swap-deployment hello-world --expose 8000
@myproject/192-168-99-101:8443/developer|$ python3 helloworld.py
In another shell session we can query the service we already started. This time requests will be routed to our local process, which is running the modified version of the code:
$ oc get route hello-world
NAME HOST/PORT
hello-world example.openshiftapps.com
$ curl example.openshiftapps.com
Hello, world, I am changing!
The traffic is being routed to the local process on your machine. Also, note that your local process can now access other services, just as if it was running inside the cluster.
When you exit the Telepresence shell the original code will be swapped back in.
Wrapping Up
Telepresence gives you the quick development cycle and full control over your process you are used to from non-distributed computing; that is, you can use a debugger, add print statements, or use live-reloads if your web server supports it. At the same time, your local processes have full networking access, both incoming and outgoing, as if it were running in your cluster, as well as have access to environment variables and—admittedly a little less transparently—to volumes.
To get started, check out the OpenShift quick start or tutorial on debugging a Kubernetes service locally. If you're interested in contributing, read how it works, the development guide, and join the Telepresence Gitter chat.
[V8/9] Customer record approval
Any possibility to add workflow for customer module?
The purpose of this workflow is to require approval for new records and for any modifications that are applied.
Until a customer record reaches the approved state, it should not be possible to create quotations, opportunities, etc. for that customer.
Thanks.
Andreas
Hi,
You can add a workflow to the res.partner model. In order to have only approved customers in quotations, opportunities, etc., you can later domain-filter the many2one field using state = 'approved'.
For example, I am adding just two states, "New" and "Approved". Please try the following code:
in your python file:
from openerp import models, fields, api  # Odoo v8/v9 imports

class res_partner(models.Model):
    _inherit = 'res.partner'

    STATE_SELECTION = [
        ('new', 'New'),
        ('approved', 'Approved'),
    ]

    state = fields.Selection(STATE_SELECTION, 'Status', readonly=True,
                             help="New customer created. "
                                  "New Customer approved. ",
                             select=True, default='new')

    @api.multi
    def write(self, vals):
        if self.state == 'approved':
            vals.update({'state': 'new'})
        return super(res_partner, self).write(vals)

    @api.multi
    def approve_customer(self):
        self.write({'state': 'approved'})
in your workflow xml file:
<?xml version="1.0" encoding="utf-8"?>
<openerp>
    <data>
        <record id="customer_flow" model="workflow">
            <field name="name">Customer Workflow</field>
            <field name="osv">res.partner</field>
            <field name="on_create">True</field>
        </record>

        <record id="act_new" model="workflow.activity">
            <field name="wkf_id" ref="customer_flow"/>
            <field name="flow_start">True</field>
            <field name="name">new</field>
        </record>

        <record id="act_approved" model="workflow.activity">
            <field name="wkf_id" ref="customer_flow"/>
            <field name="name">approved</field>
            <field name="flow_stop">True</field>
        </record>

        <record id="trans_new_confirmed" model="workflow.transition">
            <field name="act_from" ref="act_new"/>
            <field name="act_to" ref="act_approved"/>
            <field name="signal">customer_confirm</field>
        </record>
    </data>
</openerp>
in your xml view file:
<record id="view_partner_form_with_states" model="ir.ui.view">
    <field name="name">Add states to customer</field>
    <field name="model">res.partner</field>
    <field name="inherit_id" ref="base.view_partner_form"/>
    <field name="arch" type="xml">
        <sheet position="before">
            <header>
                <button name="approve_customer" states="new" type="object" string="Approve" class="oe_highlight"/>
                <field name="state" widget="statusbar" statusbar_visible="new,approved" statusbar_colors='{"new":"blue","approved":"blue"}' readonly="1"/>
            </header>
        </sheet>
    </field>
</record>
Hope this helps!
Hi Akhil P Sivan, that's great, and thanks for your response. I already tried your code; here are my questions:

1. When I click the approve button, there is no log for this action. For create or change, we usually see a note at the bottom, e.g. "Note by Administrator - 12:28 AM Partner created". How can I add that behaviour to the approve button?

2. When a user changes or edits the record, the record goes back into the New/Draft state again, so the user needs those changes approved before continuing. How can I catch the edit action, or is there a mechanism to do that?

3. Is it possible to have two workflows and two status bars? Say one workflow for the new record and its approval; after the new record is approved, further edits become a change request / modification, so there would be a second workflow for modification and approval, and the status bar would show modification and approval as well.

Thanks and regards,
Andreas
One more: do you have any idea how to set up the approval button for certain groups or users only? So the Approve button would only appear when the state is New and the user is at Manager level. Thanks.
Hi Andreas, you can catch the edit action either by the write() or a boolean field. So you can do the changes approval. Its possible modify the workflow like you said, this is just an example to show you how it can be possible. Just refer purchase_workflow.xml and try to modify this according to your needs. You can use security groups to make Approve button visible only for manager. If my answer helps you, please do upvote.
Andreas, I have updated the answer. So when you edit the state goes back to "draft", which you can approve later. To make it visible only to Manager, you can use groups attribute on the | https://www.odoo.com/forum/help-1/question/v8-9-customer-record-approval-92208 | CC-MAIN-2017-04 | refinedweb | 666 | 51.85 |
Another way is to keep another unique value inside the table to be updated and remember it. When needed, I add a second unique column to the table (in my case a char(64)) which is filled with the current timestamp and some md5 checksum. I select this value before the update, pass it along with the HTML form, and, before updating, I re-select the row to be updated and compare the keys. If the comparison fails, the user is presented a warning message, else I do the update (the user's data and a new generated stamp-value) with the primary key _and_ the original stamp in the where clause. Then I check if my new stamp made it to the table, or present another warning.

A sample:

Table atable:
    id int not null primary key
    stamp char(64) not null unique
    avalue int

select id,stamp,avalue from atable where id=1    (select data for update)
build html form with "id" and "stamp" as hidden values
select stamp from atable where id=<the_id>
if stamp(form) != stamp(db)
    error (e.g. start from beginning)
else
    construct new_stamp
    update atable set avalue=<new>,stamp=<new_stamp> where id=<the_id> AND stamp=<old_stamp>
    select stamp from atable where id=<the_id>
    if stamp(db) != <new_stamp>
        error
    endif
endif

It's a bit of work, but it had never let me down.

Thomas

id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
Is what I use in the WHERE clause to update data
ts CHAR(32) NOT NULL UNIQUE

> -----Original Message-----
> From: Doug Semig [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, 18 April 2001 20:48
> To: [EMAIL PROTECTED]
> Subject: Re: [PHP-DB] Concurrent update to database (PostgreSQL or MySQL) ??
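The stamp column described above is essentially optimistic locking. For readers who want to play with the idea outside PHP, here is a self-contained sketch using Python and SQLite (my illustration, not the original code):

```python
import hashlib
import itertools
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atable ("
             "id INTEGER PRIMARY KEY, "
             "stamp CHAR(64) NOT NULL UNIQUE, "
             "avalue INTEGER)")

_seq = itertools.count()

def make_stamp():
    # current timestamp plus an md5 checksum, as in the description above
    raw = ("%f-%d" % (time.time(), next(_seq))).encode()
    return hashlib.md5(raw).hexdigest()

conn.execute("INSERT INTO atable (id, stamp, avalue) VALUES (1, ?, 42)",
             (make_stamp(),))

def update_avalue(conn, row_id, form_stamp, new_value):
    """Write only if nobody else updated the row since we read the stamp."""
    cur = conn.execute(
        "UPDATE atable SET avalue = ?, stamp = ? WHERE id = ? AND stamp = ?",
        (new_value, make_stamp(), row_id, form_stamp))
    return cur.rowcount == 1  # False means: stale stamp, show the warning

# The page read: fetch the stamp and carry it in a hidden form field
stamp = conn.execute("SELECT stamp FROM atable WHERE id = 1").fetchone()[0]

assert update_avalue(conn, 1, stamp, 99)       # first submit wins
assert not update_avalue(conn, 1, stamp, 123)  # second submit is rejected
```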
AW: [PHP-DB] Concurrent update to database (PostgreSQL or MySQL) ??
Thomas Lamy Wed, 18 Apr 2001 13:53:00 -0700 | https://www.mail-archive.com/php-db@lists.php.net/msg02889.html | CC-MAIN-2018-43 | refinedweb | 305 | 58.21 |
To generate an auxiliary filter, you first define a filtee on which the filtering is applied. The following example builds a filtee filtee.so.1, supplying the symbol foo.
$ cat filtee.c
char *foo()
{
        return("defined in filtee");
}
$ cc -o filtee.so.1 -G -K pic filtee.c
Auxiliary filtering can be provided in one of two ways. The first is to declare all of the interfaces offered by a shared object to be auxiliary filters, using the link-editor's -f option. The following example builds filter.so.1 with all of its interfaces acting as auxiliary filters on filtee.so.1.
$ cat filter.c
char *bar = "defined in filter";

char *foo()
{
        return ("defined in filter");
}
$ LD_OPTIONS='-f filtee.so.1' \
cc -o filter.so.1 -G -K pic -h filter.so.1 -R. filter.c
$ elfdump -d filter.so.1 | egrep "SONAME|AUXILIARY"
      [2]  SONAME      0xee  filter.so.1
      [3]  AUXILIARY   0xfb  filtee.so.1
$ cat main.c
extern char *bar, *foo();

void main()
{
        (void) printf("foo is %s: bar is %s\n", foo(), bar);
}
$ cc -o prog main.c -R. filter.so.1
$ prog
foo is defined in filtee: bar is defined in filter
The second way is to declare individual interfaces to be auxiliary filters, using the AUXILIARY mapfile keyword. In the following example, the shared object filter.so.2 defines the interface foo to be an auxiliary filter on the filtee filtee.so.1.
$ cat filter.c
char *bar = "defined in filter";

char *foo()
{
        return ("defined in filter");
}
$ cat mapfile
$mapfile_version 2
SYMBOL_SCOPE {
        global:
                foo     { AUXILIARY=filtee.so.1 };
};
$ cc -o filter.so.2 -G -K pic -h filter.so.2 -M mapfile -R. filter.c
$ elfdump -d filter.so.2 | egrep "SONAME|AUXILIARY"
      [2]  SONAME           0xd8  filter.so.2
      [3]  SUNW_AUXILIARY   0xfb  filtee.so.1
$ elfdump -y filter.so.2 | egrep "foo|bar"
      [1]  A  [3] filtee.so.1  foo
     [10]  D  <self>           bar
$ cc -o prog main.c -R. filter.so.2
$ prog
foo is defined in filtee: bar is defined in filter
If the filtee filtee.so.1 does not exist, the execution of prog results in foo and bar being obtained from the filter filter.so.2.
$ prog
foo is defined in filter: bar is defined in filter

This mechanism is used by the Oracle Solaris OS to provide optimized functionality within hardware capability specific, and platform specific, shared objects. See Capability Specific Shared Objects, Instruction Set Specific Shared Objects, and System Specific Shared Objects for examples.
Part 2
Level: Intermediate
Jeremy McGee (jeremy@mcgee.demon.co.uk), Independent IT Consultant
15 Jan 2004
In this two-part series, we show you how to use Borland Enterprise Core Objects (ECO) in Borland C#Builder Architect to build a powerful application for IBM DB2 Universal Database (UDB) that is powered by a UML model. This second article shows how ECO can quickly build complex user interfaces.
Introduction
Borland® C#Builder™ Architect extends the development capabilities of C#Builder to cover model-driven development. This saves time coding business logic by automatically implementing part of a Unified Modeling Language (UML) software model.
In the first article, we looked at how the ECO™ designers and runtime framework in C#Builder Architect can help implement an application quickly. In this article, I’ll extend the simple application that we created in the first part of the article so that the user can manipulate the other object classes that we created. You’ll also see how to use the data integrity options available through associations.
A trial version of C#Builder Architect is available from the Borland Web site.
Adding people
Our UML diagram includes two lists of people, each associated with the departments in the DataGrid that we’ve placed on the form. Next, you’ll add two grids to the form and create a simple user interface to let the user enter the personal information.
Add two more grids to the main WinForm, making them wide enough for seven columns plus the left margin. Place two buttons below these grids, and set their Text properties to be Add Employee and Add Candidate. You’ll add the program code for these buttons later.
Name each of the grids, from top to bottom, dgDepartment, dgEmployee, and dgCandidate. We’ll be referring to the current record of the Department data grid to filter the employees and candidates.
Next, you’ll set up two new ExpressionHandles that will obtain the data for these grids. This time, rather than pointing directly to the whole collection of objects, we’ll filter the names by the appropriate department. To do this you’ll need a new component, the CurrencyManagerHandle.
The CurrencyManagerHandle returns the object that a data-bound control is pointing to. So, here you can use a CurrencyManagerHandle to find which department is currently selected by the Department grid. You can also use the CurrencyManagerHandle as the root handle for an ExpressionHandle.
From the Tool Palette, select a CurrencyManagerHandle and place it on the form. Set the properties to:
Name: cmhDepartment
RootHandle: ehDepartment
BindingContext: dgDepartment
That’s all that’s necessary. From this point on you can use the expression
(Department)cmhDepartment.Element.AsObject
to return the Department object to which the Department data grid points.
You can now set up the new ExpressionHandles for the Employee and Candidate grids. Drop two ExpressionHandles on the form and set their properties to:
Name: ehEmployee
RootHandle: cmhDepartment
Expression: employs
Name: ehCandidate
RootHandle: cmhDepartment
Expression: mightEmploy
As you’ll see, the RootHandle here is set to the CurrencyManagerHandle, not the root handle for the form. Cascading ECO handles in this way can be a very powerful technique.
Then you can set the DataSource property of each grid to point to the appropriate ExpressionHandle. Compile the application, and the columns should be completed automatically: (Note: the grids most likely won’t show the right columns until you compile.)
Next, add code to the two buttons so you can add contacts. For the Add Employee button, the code will look very similar to that used in the previous article to add a Department. This time, however, it’s necessary to set the department of the employee:
// Create a new Employee object
Employee theEmployee = new Employee(EcoSpace);
// Set the department of this employee to be the object selected
// by dgDepartment
theEmployee.worksFor = (Department)cmhDepartment.Element.AsObject;
And similarly, for the Add Candidate button:
Candidate theCandidate = new Candidate(EcoSpace);
// Set the department of this candidate to be the object selected
// by dgDepartment
theCandidate.mightWorkFor = (Department)cmhDepartment.Element.AsObject;
Notice a couple of useful features here. One is that the names used for the associations in the UML diagram (worksFor, mightWorkFor) show up as ‘properties’ of the Employee and Candidate objects. These properties return genuine, bona fide .NET objects in their own right. This makes it possible for you to use expressions like
theCandidate.mightWorkFor.Location
to find what location the candidate could work for. This can be used where, with DB2®, you’d use SELECT queries, except here you can use the native C# object types directly.
Compile the application and run it.
Make sure you have a department entered before you press either the Add Employee or Add Candidate buttons, otherwise your call to cmhDepartment.Element.AsObject will return a null object reference. A more sophisticated version of the application might check this and ‘gray out’ the buttons.
When you add an employee or candidate, you’ll see a new line open up in the appropriate data grid. Try entering two or three lines of data, then switch to another department and enter more lines of data. You should see the Employee and Candidate grids change automatically as you change the department row – this is an event that is automatically cascaded by the CurrencyManagerHandle component.
Fine-tuning the interface
You’ll notice that there’s a column set for the worksFor and mightWorkFor properties in the Employee and Candidate grids. The only data displayed here is the name of the object – there’s no default property for a Department object, so the object name is all that’s available.
To remove this column, close the application and return to the form designer for the main WinForm. The DataGrid that we’re using is the same as in any other .NET application, so you can set properties to explicitly choose the columns you need.
Select the dgEmployee grid, and choose the TableStyles property editor (under Data). The DataGridTableStyle Collection Editor is displayed. By default this is blank: press the Add button to add a new TableStyle based on the .NET defaults.
This is where you can change the appearance of the grid – the fonts, colors, whether alternating background lines are displayed, and so forth. The .NET DataGrid allows several tables to be displayed at once, but here we’re just using one, so there’s no need to add any others.
The particular property that you’ll want to set to adjust the columns is GridColumnStyles, at the bottom under Misc. Click on the property editor and you’ll see a second, nested collection editor appear, the DataGridColumnStyle editor.
Here we can explicitly select the properties for each column in the data grid. Add a DataGridTextBoxColumn for the first column, set the HeaderText property to Name, and the MappingName property to Name. Repeat for all the columns except the worksFor column.
Close the ColumnStyle editor, then the TableStyle editor. Your grid should now look much tidier. Here I’ve reworked the column headings to include spaces to make them more readable:
Adding an extra form
But what if we want to give the user a way to select the department that an employee works for through a regular Windows dialog? In this final step you’ll see how to use another WinForm in the application as a dialog box that can edit an Employee object.
The default blank ECO application includes an EcoWinForm. Switch to this form, and you’ll see that it already includes RootHandle and ExpressionHandle components. We’ll use this as our dialog, and the first stage will be to save the form as GetEmployee.cs. Name the form ewfEditEmployee and set its title to Enter Employee Details.
With this done, select the root handle rhRoot and point it to the model for the application by setting the EcoSpaceType property to HRApp.HRAppEcoSpace with the drop-down.
Note that, at a stroke, this makes the objects in the model available to ECO components on this form. If you’re accustomed to the concept of a data module in Delphi, and miss that feature in C#Builder, then many of the same capabilities of a centralized association with data can be achieved through ECOWinForms.
In this case, you’ll use the dialog to work with Employee objects only. Rather than constructing an expression handle and referring to this each time that we construct an OCL expression, we can set the default type for the root handle directly. Set the StaticValueTypeName to the class Employee through the Type Name Selector property editor:
With this done, the root handle itself will now expect its Element property to be an Employee object.
We can now construct the user interface. Add seven Label components, six TextBox components, a ComboBox and a Button to the form so it looks like this:
Set the DataBindings properties of each of the TextBox components so the Text points to the appropriate DataSource:
Now the form will automatically display the current Employee object that is referred to by the root handle of the form.
The advantage of just displaying a single record in this way is that you can now easily populate the combo box with the department names. To do this, you’ll need to use the expression handle that is already on the form. Select it, and rename it to ehDepartment.
Set the properties as you’d expect:
RootHandle: rhRoot (should already be set)
Expression: Department.allInstances
Then set the combo box properties as follows:
Name: cbDepartment
DataSource: ehDepartment
DisplayMember: Name
ValueMember: Name
Next, write the following code against the SelectedIndexChanged event for the combo box:
// Find out which object was selected
IElement selected =
(ehDepartment.Element as IObjectList)[cbDepartment.SelectedIndex];
if (selected == null) return;
Department selDepartment = (Department)selected.AsObject;
// Extract the employee instance from rhRoot
Employee thisEmployee = (Employee)rhRoot.Element.AsObject;
thisEmployee.worksFor = selDepartment;
Note that by default the EcoWinForm doesn’t include a reference to the assembly Borland.Eco.ObjectRepresentation, which is where IElement is defined. You’ll need to add
using Borland.Eco.ObjectRepresentation;
at the top of the source code unit before the namespace declaration.
The combo box should now update the value of the worksFor property in the Employee object that is referred to by the root handle.
Switch back to the form design, double-click the Close button, and enter the following line:
this.Close();
All that remains is to establish a way to load the GetEmployee form from the main application. A neat way to do this is to overload the constructor for the EcoWinForm itself.
If you scroll up to near the beginning of the source code unit GetEmployee.cs, you’ll see the default constructor for the form: it’s the public function ewfEditEmployee(HRAppEcoSpace EcoSpace). We’ll copy this function and add the Employee object we wish to edit as an extra parameter.
Copy the function and add the extra parameter; and before the AutoContainer line, set the root handle element. The entire function will look like this:
public ewfEditEmployee(HRAppEcoSpace ecoSpace, Employee ourEmployee)
{
//
// Required for Windows Form Designer support
//
InitializeComponent();
// Set EcoSpace (actually stores it in rhRoot.EcoSpace)
this.EcoSpace = ecoSpace;
// Set root handle to employee object passed in constructor
rhRoot.SetElement(ourEmployee.AsIObject());
// Hook up AutoContainer provider
new AutoContainerProvider(this, EcoSpace);
}
Now we can return to the main form, WinForm.cs. Add the following two lines to the Click handler for the Add Employee button:
ewfEditEmployee editForm = new ewfEditEmployee(EcoSpace, theEmployee);
editForm.ShowDialog();
You’ve now built a form that can edit an Employee object directly.
Compile the application and run it. When you add an employee now, the new dialog should be displayed.
As you enter data, you’ll see it appear in the grid below, indicating that you’re directly editing the object. Although this is buffered locally in memory in the EcoSpace before it is stored to DB2, you may prefer to use a ‘temporary’ Employee object in this situation.
Conclusion
The philosophy of developing an ECO application uses object-oriented concepts throughout. These may at first appear unfamiliar to DB2 developers, but the ECO ‘handle’ approach gives a very flexible way to build applications in a more productive way than through more conventional relational database techniques.
The small application you’ve built illustrates the main concepts of ECO – how to design a model, how to persist data to DB2, and how to build a fairly complex user interface. DB2 is well suited as a backing store for ECO applications as they typically need to retrieve or store a fairly large amount of data in one operation, which DB2 excels at.
About the author
Jeremy McGee started writing applications using BASIC on the Commodore PET. He fondly remembers typesetting a book-length publication on an early Apple Mac with a refrigerator-sized laser printer attached. Since then he's variously been a DEC VAX sysadmin, a technical support engineer for Borland Paradox, and part of the team that launched Borland Delphi in Europe. Jeremy now works as an independent IT consultant.
- version::Limit — In a mature, highly structured development environment, it is sometimes desirable to exert a more fine-grained versioning model than that permitted by the default behavior. With the standard Perl version model, it is only possible to establish a max... (JPEACOCK/version-Limit-0.0301 - 26 Jul 2007)
- UR — Reads from the data sources in the current working directory's namespace, and updates the local class tree. This hits the data dictionary for the remote database, and gets changes there first. Those changes are then used to mutate the class tree. (BRUMMETT/UR-0.43 - 03 Jul 2014)
- File::CodeSearch - Search file contents in code repositories
- File::CodeSearch::Highlighter - Highlights matched parts of a line
- LWP::UserAgent - Web user agent class
- nes.cfg - .nes.cfg Nes configuration files
- Nes::Singleton - Single access interface to Nes
- Nes::Obj::multi_step - Secure Multi Step Form
- PETDANCE/hwd-0.20 - 02 Mar 2006
- INA/Char-GBK-1.05 - 14 Jun 2015
- INA/Char-UHC-1.05 - 14 Jun 2015
Gaboury — Posted September 2, 2005 (edited)

Hi guys. I am very new to AutoIt and I would like to make any little program that could be useful... I started a little MP3 player script but it is frikin hard and I don't know all the commands I need to. I know some of you guys must be kings in that domain, but I am very new, so if you give explanations please be precise. I don't want you to just tell me what to write; I want you to tell me what I should write and WHY. Because I want to learn. So up to now, I've done the "Notepad" tutorial and I've also made a modified version of it... I also made a two-program launcher. Pretty bad but still, it gives me practice. I have a lot of problems with the functions, and some other things... Here is what my main problems are: I don't know how to set a variable WITHOUT having it executed... it could be useful to define a variable that I would like to use in a function, but without that one executing itself before that function is called... I would like to have some advice about the "GUI" part because I really have many problems creating a nice GUI "layout"... I read the tutorial included (the Help...) and it helped me a bit, but there are some things that I don't understand because I usually speak French and some words in English used in the Help file are a bit complicated... I would also like to be able to put a nice background or anything on my layout. So if you have any advice about the GUI, the variables or anything, I will try to understand everything you guys can tell me, because I really suck... I don't know if it can help, but I know how to script for Counter-Strike so I have a LITTLE basic knowledge of scripting, but I really need help! Thanks - Gaboury.

Note: Don't forget to explain why to do something, because just "copy-paste" doesn't raise me to another level of coding.
If you are gonna make fun of me or tell me to do something without any explanation, I'd prefer you say nothing, 'cause it'll lead me nowhere. Thanks a lot.

Here are some of the basic programs I made (not all fully working...):

2 Programs Launcher...

#include <GUIConstants.au3>

GUICreate("Gaboury's First Program", 150, 150)
Opt("GUIOnEventMode", 1)
GUISetState(@SW_SHOW)

$notebutton = GUICtrlCreateButton("Open Notepad", -1, -1)
GUICtrlSetOnEvent($notebutton, "notepad")
$firebutton = GUICtrlCreateButton("Open FireFox", -1, 25)
GUICtrlSetOnEvent($firebutton, "firefox")

Func notepad()
    Run("Notepad.exe")
EndFunc

Func firefox()
    Run("C:\Program Files\Mozilla Firefox\firefox.exe")
EndFunc

While 1
    $exitstuff = GUIGetMsg()
    If $exitstuff = $GUI_EVENT_CLOSE Then ExitLoop
Wend
Exit

Notepad Tutorial... With autosave and changing save directory... also asking for deletion of the file... :P Only for me, as it doesn't check for the path of the file; I just entered it and it deletes it if it's there.. if it's not there it won't search for it so...

Run("Notepad.exe")
WinWaitActive("Sans titre - Bloc-notes")
Send("Bonjour. Ceci est un test. Suis-je capable d'écrire en dessous?")
Send("{ENTER}")
Send("Mais mon dieu, ca marche.")
Send("{ENTER}")
Send("Essayons de le sauvegarder maintenant.")
Send("^s")
WinWaitActive("Enregistrer sous", "&Nom du fichier :")
Send("c:\documents and settings\Jaimie\bureau\")
Send("{ENTER}")
Send("Test.txt")
WinWaitActive("Enregistrer sous", "&Enregistrer")
Send("{ENTER}")
WinClose("Test.txt - Bloc-notes")

$deletebox = MsgBox(4, "Delete File", "Veux tu supprimer le fichier qui vient d'être créé?")
If $deletebox = 6 Then
    FileDelete("C:\Documents and Settings\Jaimie\bureau\test.txt")
    MsgBox(0, "Deleted", "File Deleted")
ElseIf $deletebox = 7 Then
    MsgBox(0, "", "File didn't get deleted.")
EndIf

$mybox = MsgBox(4, "Feedback", "Alors mon programme, tu l'aimes?")
If $mybox = 6 Then
    MsgBox(0, "You Rock!", "You rock buddy!")
ElseIf $mybox = 7 Then
    MsgBox(0, "Crap", "You are an idiot!")
EndIf

Exit
WinClose("Notepad1.exe")

That's my quarter-working MP3 player :P One song added to the list, and not even working... :/

#include <GUIConstants.au3>

Opt("GUIOnEventMode", 1)
GUICreate("MP3 Player", 250, 250)
GUISetState(@SW_SHOW)

$roboto = GUICtrlCreateButton("Mr. Roboto - 80's Styx", -1, -1)
GUICtrlSetOnEvent($roboto, "robsure")
GUIGetMsg()

Func robsure()
    GUICreate("Are you sure?", 200, 100)
    GUICtrlCreateLabel("Are you sure you want to listen the song Mr. roboto?", -1, -1)
    $ok = GUICtrlCreateButton("Yes", -1, 20)
    $no = GUICtrlCreateButton("No", -1, 50)
    If $ok = 1 Then
        SoundPlay("C:\Documents and Settings\Jaimie\Bureau\Playlists iTunes\80's Styx\Album inconnu\Mr. Roboto.mp3")
    ElseIf $no = 1 Then
    EndIf
EndFunc

While 1
    $msg = GUIGetMsg()
    If $msg = $GUI_EVENT_CLOSE Then ExitLoop
Wend
ProcessClose("AutoIt3.exe")
GUIDelete()

Edited September 2, 2005 by Gaboury
Let's start with a simple macro for debugging. I suppose I'm not alone in using println-debugging, that is, debugging by inserting statements like:
println("After register; user = " + user + ", userCount = " + userCount)
running a test, and checking what the output is. Writing the variable name before the variable is tedious, so I wanted to write a macro which would do that for me; that is:
debug("After register", user, userCount)
should have the same effect as the first snippet (it should generate code similar to the one above).
Let’s see step-by-step how to implement such a macro. There’s a good getting started guide on the scala macros page, which I used. All code explained below is available on GitHub, in the scala-macro-debug project.
1. Project setup
To experiment comfortably we’ll need to set up a simple project first. We will need at least two subprojects: one for macros, and one for testing the macros. That is because the macros must be compiled separately and before any code that uses them (as they influence the compilation process).
Moreover, the macro subproject needs to have a dependency on scala-compiler, to be able to access the reflection and AST classes.
A simple SBT build file could look like this: Build.scala.
2. Hello World!
“Hello World!” is always a great starting point. So my first step was to write a macro which would expand hello() to println("Hello World!") at compile-time.
In the macros subproject, we have to create a new object, which defines hello() and the macro:
package com.softwaremill.debug

import language.experimental.macros
import reflect.macros.Context

object DebugMacros {
  def hello(): Unit = macro hello_impl

  def hello_impl(c: Context)(): c.Expr[Unit] = {
    // TODO
  }
}
There are a couple of important things here:
- we have to import language.experimental.macros, to enable the macros feature in the given source file. Otherwise we’ll get compilation errors reminding us about the import.
- the definition of hello() uses the macro keyword, followed by a method which implements the macro
- the macro implementation has two parameter lists: the first is the context (you can think about it as a compilation context), the second mirrors the parameter list of our method – here it’s empty. Finally, the return type must also match – however in the method we have a return type unit, in the macro we return an expression (which wraps a piece of an AST) of type unit.
Now to the implementation, which is pretty short:
def hello_impl(c: Context)(): c.Expr[Unit] = {
  import c.universe._

  reify {
    println("Hello World!")
  }
}
Going line by line:
- first we import the “universe”, which gives convenient access to AST classes. Note that the return type is c.Expr – so it’s a path-dependent type, taken from the context. You’ll see that import in every macro.
- as we want to generate code which prints “Hello World!”, we need to create an AST for it. Instead of constructing it manually (which is possible, but doesn’t look too nice), Scala provides a reify method (reify is also a macro – a macro used when compiling macros :) ), which turns the given code into an Expr[T] (expressions wrap an AST and its type). As println has type unit, the reified expression has type Expr[Unit], and we can just return it.
Usage is pretty simple. In the testing subproject, write the following:
object DebugExample extends App {
  import DebugMacros._

  hello()
}
and run the code (e.g. with the run command in SBT shell).
3. Printing out a parameter
Printing Hello World is nice, but it’s even nicer to print a parameter. The second macro will do just that: it will transform printparam(anything) into println(anything). Not very useful, and pretty similar to what we’ve seen, with two crucial differences:
def printparam(param: Any): Unit = macro printparam_impl

def printparam_impl(c: Context)(param: c.Expr[Any]): c.Expr[Unit] = {
  import c.universe._

  reify {
    println(param.splice)
  }
}
The first difference is that the method accepts a parameter param: Any. In the macro implementation, we have to mirror that – but same as with the return type, instead of Any, we accept an Expr[Any], as during compile-time we operate on ASTs.
The second difference is the usage of splice. It is a special method of Expr, which can only be used inside a reify call, and does kind of the opposite of reify: it embeds the given expression into the code that is being reified. Here, we have param which is an Expr (that is, tree + type), and we want to put that tree as a child of println; we want the value that is represented by param to be passed to println, not the AST. splice called on an Expr[T] returns a T, so the reified code type-checks.
4. Single-variable debug
Let’s now get to our debug method. First maybe let’s implement a single-variable debug; that is, debug(x) should be transformed into something like println("x = " + x).
Here’s the macro:
def debug(param: Any): Unit = macro debug_impl

def debug_impl(c: Context)(param: c.Expr[Any]): c.Expr[Unit] = {
  import c.universe._

  val paramRep = show(param.tree)
  val paramRepTree = Literal(Constant(paramRep))
  val paramRepExpr = c.Expr[String](paramRepTree)

  reify {
    println(paramRepExpr.splice + " = " + param.splice)
  }
}
The new thing is of course generating the prefix. To do that, we first turn the parameter’s tree into a String. The built-in method show does exactly that. A little note here: as we are turning an AST into a String, the output may look a bit different than in the original code. For vals declared inside a method, it will return simply the val name. For class fields, you’ll see something like DebugExample.this.myField. For expressions, e.g. left + right, you’ll see left.+(right). Not perfect, but readable enough I think.
Secondly, we need to create a tree (by hand this time) representing a constant String. Here you just have to know what to construct, e.g. by inspecting trees created by reification (or reading the Scala compiler’s source code ;) ).
Finally, we turn that simple tree into an expression of type String, and splice it inside the println. Running for example such code:
object DebugExample extends App {
  import DebugMacros._

  val y = 10

  def test() {
    val p = 11
    debug(p)
    debug(p + y)
  }

  test()
}
outputs:
p = 11
p.+(DebugExample.this.y) = 21
5. Final product
Implementing the full debug macro, as described above, introduces only one new concept. The full source is a bit long, so you can view it on GitHub.
In the macro implementation we first generate a tree (AST) for each parameter – which represents either printing a constant, or an expression. Then we interleave the trees with separators (", ") for easier reading.
Finally, we have to turn the list of trees into an expression. To do that, we create a Block. A block takes a list of statements that should be executed, and an expression which is a result of the whole block. In our case the result is of course ().
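Putting those steps together, the core of the macro looks roughly like this (a sketch reconstructed from the description above — consult the GitHub project for the exact source):

```scala
def debug_impl(c: Context)(params: c.Expr[Any]*): c.Expr[Unit] = {
  import c.universe._

  val trees = params.map { param =>
    param.tree match {
      // for a constant (e.g. a description), print just the value
      case Literal(Constant(_)) => reify { print(param.splice) }.tree
      // for any other expression, print "<source> = <value>"
      case tree =>
        val paramRepExpr = c.Expr[String](Literal(Constant(show(tree))))
        reify { print(paramRepExpr.splice + " = " + param.splice) }.tree
    }
  }

  // interleave with ", " separators, ending with a newline
  val separators =
    List.fill(trees.size - 1)(reify { print(", ") }.tree) :+ reify { println() }.tree
  val statements = trees.zip(separators).flatMap { case (t, s) => List(t, s) }

  // a Block: the statements to execute, plus the block's result, ()
  c.Expr[Unit](Block(statements.toList, Literal(Constant(()))))
}
```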
And now we can happily debug! For example, writing:
debug("After register", user, userCount)
will print, when executed:
After register, user = User(x, y), userCount = 1029
Summing up
That’s quite a long post, glad somebody made it that far :). Anyway, macros look really interesting, and it’s pretty simple to start writing macros on your own. You can find a simple SBT project plus the code discussed here on GitHub (scala-macro-debug project). And I suppose soon we’ll see an outcrop of macro-leveraging projects. Already there are some, for example Expecty or Macrocosm.
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) | http://java.dzone.com/articles/starting-scala-macros-short | CC-MAIN-2013-20 | refinedweb | 1,302 | 65.73 |
Hello, all!
Hi Boris,
Thanks for your inquiry. In case you are using an older version of Aspose.Words, I suggest you upgrade to the latest version (v15.6.0) from here and let us know how it goes on your side. If the problem still remains, please share the following details for investigation purposes:
- Please attach your input Word document.
- Please share the JDK and IBM Domino versions
I will investigate the issue on my side and provide you with more information.
Hello, Tahir from Customer Happiness Team! I am Boris from the Mountain of Useful Codes :-).
Here is the IBM Lotus full client (you mainly need Designer).
I do not know if you know much about Lotus Notes (very proprietary software). You need to install it. After the first launch, you will see an initial dialog window. You should install Lotus without a connection to a Domino server and enter any user name you want to use.
This is a Domino application which contains an imported Java agent. It is started by a button on the view action pane.
This is a simple class containing code that uses Aspose functions:
public class Debug {
    public static void simpleTest() {
        com.aspose.words.Document printDoc;
        try {
            printDoc = new Document("C:/Temp/33.docx");
            System.out.println(printDoc.getPageCount());
            System.out.println("That's all");
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}

This is the agent code (needed to start the Notes agent):

public class ActionPrint extends AgentBase {
    public void NotesMain() {
        Session session = getSession();
        AgentContext agentContext;
        try {
            agentContext = session.getAgentContext();
            Document doc = agentContext.getDocumentContext();
            Debug.simpleTest();
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}
You will see the result of the agent's work in the Java console, via the main menu item "Tools" - "Show Java Debug Console".
Before you launch the agent, you need to change the Execution Control List to allow access (File - User Security).
Respectfully, Boris
I forgot to insert the link to the Notes application.
Hi Boris,
Thanks for sharing the details. I will set up IBM Domino on my side and will share my findings here for our reference asap.
Hi Boris,
Thanks for your patience. I have set up IBM Domino on my side. The Notes application (aat.nsf) did not execute at my end. I imported aat.nsf into Domino Designer and could not find the Java code. Could you please share a Notes application which can be executed through Domino Designer? Thanks for your cooperation.
Last week I wanted to test some new Windows Azure Service Bus functionality. So I started by creating a simple WCF service to host in the cloud. After configuring my service settings in the web.config, I started the service and was confronted with the following error message:
Hostname mynamespace.servicebus.appfabriclabs.com can't support more than 1 level subdomain.
It took me some time to figure out the root cause of the problem. I had created a namespace on. After checking with Fiddler what was going on, I realized that although I was using the appfabriclabs environment, the authentication was still passing on to windows.net, with the error message above as a consequence.
After creating a new service namespace, the application ran successfully.
Department of Computational Social Science, George Mason University
If you're reading this, chances are you're already excited about Global Data on Events, Location and Tone, better known as GDELT. If you aren't, you should be. Lots has been written about how revolutionary this dataset might be, and I won't try to add to it here.
Instead, let's dive right in! In this tutorial, I'll go through extracting some basic time series from GDELT.
To follow along, go download the data from the GDELT website and unzip it. The data is about 4.6 GB uncompressed, in a series of text files, one per year.
(First, some code to style the IPython notebook and make it more readable. I've adapted the CSS styling from the excellent Probabilistic Programming and Bayesian Methods for Hackers.)
from IPython.core.display import HTML
styles = open("Style.css").read()
HTML(styles)
We're going to need only a few libraries to start with: Matplotlib for visualization, datetime for handling date objects, and Pandas for handling, aggregating and reshaping some of the data. Pandas provides great functionality to easily plot time series, so we'll use it for that too. We'll also import defaultdict while we're at it, since it's often useful for data collection.
import datetime as dt
from collections import defaultdict

import matplotlib
import matplotlib.pyplot as plt
import pandas

matplotlib.rcParams['figure.figsize'] = [8, 4]  # Set default figure size
# Set this variable to the directory where the GDELT data files are
PATH = "GDELT.1979-2012.reduced/"
# Peeking at the data:
!head -n 5 GDELT.1979-2012.reduced/2010.reduced.txt
Day Actor1Code Actor2Code EventCode QuadCategory GoldsteinScale Actor1Geo_Lat Actor1Geo_Long Actor2Geo_Lat Actor2Geo_Long ActionGeo_Lat ActionGeo_Long
20100101 AFG AFGCOP 173 4 -5.0 34.9669 69.265 34.9669 69.265 25 45
20100101 AFG AFGCVL 080 1 5.0 31 64 31 64 31 64
20100101 AFG AFGCVL 190 4 -10.0 35.3472 70.1485 35.3472 70.1485 35.3472 70.1485
20100101 AFG AFGGOV 043 2 2.8 31 64 31 64 31 64
It's important to know how big our dataset is. It's also important to know if the data available over time is biased -- does GDELT have more events for recent years than for distant ones? If so, is that because more has happened recently, or because the data collection has gotten better?
The paper introducing GDELT (warning: large PDF) goes over this, but it'll be good practice to replicate some basic diagnostics.
So let's start with a simple count of just how many events -- all events -- the dataset has per month (a typical unit of temporal aggregation). To do that, we'll open each file, figure out which month each event (meaning each row) occurred in, and add them up.
monthly_data = defaultdict(int)  # We'll use this to store the counts
count = 0  # While we're at it, let's count how many records there are, total.
for year in range(1979, 2013):
    #print year  # Uncomment this line to see the program's progress.
    f = open(PATH + str(year) + ".reduced.txt")
    next(f)  # Skip the header row.
    for raw_row in f:
        try:
            row = raw_row.split("\t")
            # Get the date, which is in YYYYMMDD format:
            date_str = row[0]
            year = int(date_str[:4])
            month = int(date_str[4:6])
            date = dt.datetime(year, month, 1)
            monthly_data[date] += 1
            count += 1
        except:
            pass  # Skip error-generating rows for now.

print "Total rows processed:", count
print "Total months:", len(monthly_data)
Total rows processed: 67927691
Total months: 402
Now we just turn this dictionary into a Pandas series, and plot it. Pandas will automatically recognize that we're dealing with a time series, because it's useful like that.
monthly_events = pandas.Series(monthly_data)
monthly_events.plot()
<matplotlib.axes.AxesSubplot at 0x10804f510>
As we might expect, the number of events in the dataset isn't uniform, and goes up rapidly in the later years.
One important and useful feature of GDELT is the QuadCategory classification of each event. Per the documentation, each event has one of the following quad categories:
1. Material Cooperation
2. Verbal Cooperation
3. Verbal Conflict
4. Material Conflict
Let's repeat the analysis above, but now examine material cooperation and conflict. Very (very very) roughly, is the world becoming more cooperative, or more violent?
material_coop = defaultdict(int)
material_conf = defaultdict(int)

for year in range(1979, 2013):
    f = open(PATH + str(year) + ".reduced.txt")
    next(f)  # Skip the header row.
    for raw_row in f:
        try:
            row = raw_row.split("\t")
            # Check the quadcat, and skip if not relevant:
            if row[4] not in ['1', '4']:
                continue
            # Get the date, which is in YYYYMMDD format:
            date_str = row[0]
            year = int(date_str[:4])
            month = int(date_str[4:6])
            date = dt.datetime(year, month, 1)
            if row[4] == '1':
                material_coop[date] += 1
            elif row[4] == '4':
                material_conf[date] += 1
        except:
            pass  # Skip error-generating rows for now.
# Convert both into time series:
monthly_coop = pandas.Series(material_coop)
monthly_conf = pandas.Series(material_conf)

# Join the time series together into a DataFrame
trends = pandas.DataFrame({"Material_Cooperation": monthly_coop,
                           "Material_Conflict": monthly_conf})
trends.plot()
<matplotlib.axes.AxesSubplot at 0x1080a7dd0>
Both seem to have roughly the same shape as the total counts, with material conflict slightly but persistently remaining more likely than material cooperation.
The Israeli-Palestinian conflict gets a lot of media attention, so we would expect it to be well-represented in the dataset. It's generally considered to be fairly important, with effects spilling over far from where it is actually taking place. It is also one of the case studies that Leetaru and Schrodt use to compare GDELT against a similar dataset in their paper.
All GDELT events have a source and a target actor. These are coded down to an impressive level of specificity, often down to whether a political party is a member of the government or the opposition when the event occurs. For a first pass, however, only the highest level of the actors will suffice. These will be ISR for Israel, and all Israeli actors; and either PSE or PAL for all Palestinian actors. We'll grab only those events which involve Israel-coded actors acting on Palestinian-coded actors, or vice versa.
Incidentally: learn from my mistakes, and RTFM. My first pass of this analysis was way off because I didn't read the GDELT documentation closely enough, and thought that the actor prefix for Palestine was PAL. In fact, almost all of the events are coded as PSE, the UN code for the Palestinian Occupied Territories. RTFM.
data = []
for year in range(1979, 2013):
    f = open(PATH + str(year) + ".reduced.txt")
    for raw_row in f:
        row = raw_row.split("\t")
        actor1 = row[1][:3]
        actor2 = row[2][:3]
        both = actor1 + actor2
        if "ISR" in both and ("PAL" in both or "PSE" in both):
            year = int(row[0][:4])
            month = int(row[0][4:6])
            day = int(row[0][6:])
            quad_cat = row[4]
            data.append([year, month, day, actor1, actor2, quad_cat])

print "Israeli-Palestinian Conflict Records:", len(data)
Israeli-Palestinian Conflict Records: 528698
Next, we can turn this data into a Pandas DataFrame; essentially a big table we can manipulate.
ilpalcon = pandas.DataFrame(data, columns=["Year", "Month", "Day",
                                           "Actor1", "Actor2", "QuadCat"])
ilpalcon.head()
Pandas provides some powerful table manipulation tools; I'm partial to pivot tables, possibly due to several years of using Excel heavily for work. Let's pivot the data so that we count the number of events by QuadCat for each month.
pivot = pandas.pivot_table(ilpalcon, values="Day", rows=["Year", "Month"],
                           cols="QuadCat", aggfunc=len)
pivot = pivot.fillna(0)      # Replace any missing data with zeros
pivot = pivot.reset_index()  # Make Year and Month regular columns
pivot.head()
Now that we have a nice table of monthly event counts, we need to index it by date. It would also be nice to rename the columns to the QuadCat description. To create a date from the Year and Month, we need to create a function that generates a datetime object from them, and apply it to each row.
# date-generating function:
get_date = lambda x: dt.datetime(year=int(x[0]), month=int(x[1]), day=1)
pivot["date"] = pivot.apply(get_date, axis=1)  # Apply row-wise
pivot = pivot.set_index("date")  # Set the new date column as the index

# Now we no longer need the Year and Month columns, so let's drop them:
pivot = pivot[["1", "2", "3", "4"]]

# Rename the QuadCat columns
pivot = pivot.rename(columns={"1": "Material Cooperation",
                              "2": "Verbal Cooperation",
                              "3": "Verbal Conflict",
                              "4": "Material Conflict"})
pivot.plot(figsize=(8,4))
<matplotlib.axes.AxesSubplot at 0x10849e910>
Interestingly, it looks like Verbal Cooperation is the most common form of interaction, even when violence (Material Conflict) spikes. We can also clearly see the peace process of the 90s, where Verbal Cooperation events are significantly greater than all others, and the spike in Material Conflict when the Second Intifada breaks out.
Finally, let's see what a general 'peace index' might look like, measuring the difference in volume between cooperation and conflict events.
pivot["Peace_Index"] = (pivot["Material Cooperation"] + pivot["Verbal Cooperation"]
                        - pivot["Verbal Conflict"] - pivot["Material Conflict"])
pivot["Peace_Index"].plot(figsize=(8,4))
<matplotlib.axes.AxesSubplot at 0x117bb3810>
This is barely scratching the surface of what we can do with the GDELT data. Hopefully it'll help you get started interacting with the data, so that you can do real work with it. Happy analyzing! | http://nbviewer.ipython.org/github/dmasad/GDELT_Intro/blob/master/Getting_Started_with_GDELT.ipynb | CC-MAIN-2015-06 | refinedweb | 1,571 | 56.15 |
Let's imagine you have a standard application deployed in IIS (here, the "MyApp" application). Simply browsing the application will show you the default page content, and you can also access any page of the website.
Now let's imagine you want - for maintenance reasons for example - bring this application offline.
If you stop IIS or your application pool, all your customers will receive an HTTP error, typically a 500 error. Let's imagine instead that you want a real "maintenance page", whatever request they make.
Of course you can change the "default" document in IIS, but if they ask for a specific page, this won't work.
IIS has this feature fully integrated. Simply add a page named "app_offline.htm" to the virtual directory, and then try several requests, asking either for the virtual directory itself or for any page in the application.
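For reference, a minimal app_offline.htm could be as simple as the sketch below. The file name is what matters; the content shown here is only an illustrative placeholder, so put whatever wording you like in it:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Maintenance</title>
  </head>
  <body>
    <h1>Site under maintenance</h1>
    <p>We will be back shortly. Sorry for the inconvenience.</p>
  </body>
</html>
```

As long as the file is present, ASP.NET requests get this page; delete or rename the file to bring the application back online.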
This is pretty cool, no?
Just note that this will only work if the client asks for .aspx pages. Any other format will be served normally.
Scott Guthrie has also noted that your maintenance page should have a minimal weight; otherwise you may encounter some trouble. Check his blog entry here.
Do you handle maintenance sessions another way? Please leave comments!
Hello,
a while ago we needed to update a website's IIS parameters to add some MIME types.
In order to do that, we must use two COM components that you can find in %windir%\system32: activeds.dll and adsiis.dll.
We won't cover here how to add MIME types to IIS by code, but how to create wrappers around these COM DLLs so that they can be used in .NET.
Of course, you all know that we can just "Add a new reference" to our project to let Visual Studio create these wrappers for us.
Article finished.
Not completely. Indeed, if you do that and then compile, you will see a bunch of warnings (40 in fact) like, for example:
The type library importer could not convert the signature for the member 'ADS_OCTET_STRING.lpValue'.
The type library importer could not convert the signature for the member '__MIDL___MIDL_itf_ads_0000_0002.lpValue'.
The type library importer could not convert the signature for the member 'ADS_NT_SECURITY_DESCRIPTOR.lpValue'.
..........
If you are like me, no warning is acceptable in a project, and so we have to do it differently. Of course in this case there is no way for us to correct these warnings, but maybe we can hide them by creating the COM wrappers ourselves.
To achieve this, we will use "tlbimp.exe", which is located in "%programfiles%/Microsoft Visual Studio 8/SDK/v2.0/bin" or in the corresponding folder, depending on which version of Visual Studio you use.
To simplify our command lines, we'll update the %path% environment variable to include the tlbimp path, and we'll run a command prompt in "%windir%\system32".
Step 1: Generate the ActiveDs.dll
As this DLL is referenced by adsiis.dll, we'll start with this one. Note that we want to specify that we are not interested in any warnings:
tlbimp activeds.tlb /out:c:\temp\Interop.ActiveDs.dll /silent
Note that this command line uses the type library file (activeds.tlb) to generate the DLL. We'll generate it in the temp directory and we'll keep the name that is normally generated by Visual Studio.
Step 2: Generate the AdsIis.dll
In this case, we'll specify that it references "Interop.ActiveDs.dll", and we'll also specify the main namespace of the DLL. We have chosen the namespace that is generated by Visual Studio:
tlbimp adsiis.dll /out:c:\temp\Interop.IISOle.dll /reference:c:\temp\Interop.ActiveDs.dll /namespace:IISOle
Step 3: Use the DLLs
Now you have two valid DLLs that can be used by any .NET project. And thanks to the "/silent" flag used in Step 1, no more warnings are shown in Visual Studio.
Question:
Experts, do you know another way to achieve the same result? Maybe without using a command line?
A few days ago, we held an XP meeting in the company to let everyone in the team (developers, analysts, testers, ...) express their feelings: what was working well or less well, what we should improve, ...
And one developer said that, for him, in XP, "each developer must have his own bottle of water".
I will try here to reformulate (and translate) what he said.
"For me, in XP and in pair programming, it's important that each developer has his own bottle of water.
A few years ago, I was working with handicapped people, and one day we did an exercise: we asked able-bodied people to sit in the wheelchair and so put themselves in the place of handicapped people. One assistant/helper did what he had always done with handicapped people: he pushed them around the house. What happened? At the first turn, the assistant turned in one direction as he had always done, and the "fake handicapped" person was surprised, as he was not expecting the turn. Later on, they arrived at some stairs and started to climb them. The guy was simply astonished and afraid.
Let's stop the exercise and do it again, differently, asking the helper to always speak and explain what he is going to do before doing it.
Now we will turn to the right. The "handicapped" guy was of course not surprised. Let's now climb the stairs. OK, no problem.
What's the point of that? When you speak and explain what you do, or what you want to do, the other guy is never lost and can follow you easily, without any surprise and without any fear.
And the parallel with pair programming? Of course we won't compare one of the two developers to being handicapped, but one of the two is not typing, and thus doesn't know the mind of the writer and can be surprised (and easily lost) by what his colleague is doing.
To avoid this kind of situation, it's important to have each person in the programming pair speak a lot about what they are currently doing, their ideas, their goal and how to achieve it. It leads to good cohesion between the developers, and the team will be more focused and efficient.
Of course, as we must all speak a lot, we'll get thirsty, and so we each need our own bottle of water."
I find this example quite interesting because it represents the feeling that the copilot sometimes has, not knowing where the pilot wants to go. So let's all speak more! (In a productive way!)
I have been asked many times why I use this provocative subtitle.
Pretentious? Far from that.
Time to answer :-)
I often compare .NET to a big encyclopedia of thousands of volumes, of which we each know only 1, 5, 10 or 20 volumes, depending on the person. The biggest problem is that this encyclopedia grows by several brand new volumes each year. Thus, if we have stopped learning, as many developers tend to do after university, our knowledge ratio tends to decrease over time.
And in a way, this knowledge ratio is our value on the job market. What is the commercial value of a developer with 2 years' experience in .NET 2.0 who is not able to explain the very basics of generics?
I believe that currently, or in the (I hope near) future, we are able to do whatever we want to do in .NET. Some parts are very easy, some completely crazy, enough to make you feel you are becoming insane, but still feasible for some experts having a specific domain expertise.
And when we can't do something, what we miss to achieve this work, easy or difficult, is knowledge. Simply knowledge. Of course frameworks or tools may help us achieve the work more easily, but with extra knowledge, we could do it by ourselves.
And that's exactly the point. What is feasible for a few experts is not for other developers.
Our domain is huge and no one on earth will know all of it. And so in a way, because each of us is missing knowledge in the "technically feasible area", we are all incompetent. Some more than others, but we are all incompetent.
But the truth is, some experts are working, and working hard, to become less and less incompetent. By less incompetent, I mean reducing the field of their unknown knowledge.
And these same experts also work, and work hard, to help other developers follow them.
It is a difficult race: learning faster (and deeply enough) than the pace at which new functionality arrives.
This sentence I use is far from pretentious. Quite the contrary.
I want to remind people, and more specifically to remind myself, that I still lack lots of knowledge.
I want to have this idea firmly rooted in my mind "Work and Work hard. Learn and Learn deep! There are so many things you could still learn!" | http://www.pedautreppe.com/2007/08/default.aspx | CC-MAIN-2017-43 | refinedweb | 1,532 | 73.07 |
I am facing a problem when trying to host an ASP.NET application on Windows Server 2003.
It has a VB/ASP.NET front end and MS Access as the back end.
It is showing the error: "Child nodes are not allowed"
for the <namespace> tag
But the same web.config file didn't show such an error when I tried to host on the Windows XP OS.
Kindly explain what the problem is and what should be done to host the web application without trouble.
Thanks
regards
Komal
Sounds like the IIS setting for the ASP.NET version is wrong. Check that in the ASP.NET tab of the IIS admin.
NAME
ntp_gettime - NTP user application interface
SYNOPSIS
#include <sys/timex.h>

int ntp_gettime(struct ntptimeval *ntv);
DESCRIPTION
The time returned by ntp_gettime() is in a struct ntptimeval:

time        Current time (read-only).
maxerror    Maximum error in microseconds (read-only).
esterror    Estimated error in microseconds (read-only).
tai         Offset in seconds between the TAI and UTC time scales. This offset is published twice a year and is an integral number of seconds between TAI (which does not have leap seconds) and UTC (which does). ntpd(8) or some other agent maintains this value. A value of 0 means unknown. As of the date of the manual page, the offset is 32 seconds.
time_state  Current time status.
RETURN VALUES
The ntp_gettime() function returns the value 0 if successful; otherwise the value -1 is returned and the global variable errno is set to indicate the error.

Possible states of the clock are:

TIME_OK     Everything okay, no leap second warning.
TIME_INS    Positive leap second warning. At the end of the day, an additional second will be inserted after 23:59:59.
TIME_DEL    Negative leap second warning. At the end of the day, 23:59:59 is skipped.
TIME_OOP    Leap second in progress.
TIME_WAIT   Leap second has occurred.
TIME_ERROR  Clock not synchronized.
SEE ALSO
ntp_adjtime(2), ntpd(8)
AUTHORS
This manual page was written by Tom Rhodes 〈trhodes@FreeBSD.org〉. | http://manpages.ubuntu.com/manpages/hardy/man2/ntp_gettime.2.html | CC-MAIN-2015-40 | refinedweb | 223 | 60.92 |
What is the substring() method of the String class. What does it do can anyone explain with an example.
The substring() method of the String class has two variants and returns a new string which is a substring of the current string. The substring begins with the character at the specified index and extends to the end of this string or up to endIndex – 1 if the second argument is given.
import java.io.*;

public class Test {
    public static void main(String args[]) {
        String Str = new String("Welcome to Tutorialspoint.com");
        System.out.print("Return Value :");
        System.out.println(Str.substring(10));
    }
}
Return Value : Tutorialspoint.com | https://www.tutorialspoint.com/How-to-use-Java-substring-Method | CC-MAIN-2018-30 | refinedweb | 106 | 57.98 |
QML alias not working
Anyone got any ideas why this code is not working? I have a loader in Main which is happily loading the files the C++ side spits out to it, but a slightly different mechanism is required for the username/password, and I thought an alias would be simpler than a signal/slot. The loader (Main.qml) has an alias which I am trying to use to go from the username to the password (Loggon_username.qml).
Main.qml
@
import QtQuick 2.0
Rectangle
{
id: window
property alias mainLoader: loader
    Component.onCompleted: { loader.forceActiveFocus() }

    Rectangle {
        id: promptsContainer
        width: parent.width
        height: prompts.height * 1.25
        color: "#2A51A3"
        anchors.top: parent.top

        Text {
            id: prompts  // text showing what voice is saying
            anchors.centerIn: parent
            color: "orange"
            font.pointSize: 20  // coreMenu.promptsFontPointSize
            width: parent.width * 0.9
            wrapMode: Text.WordWrap
            clip: true
        }
    }

    Loader {
        id: loader
        height: window.height - promptsContainer.height
        anchors.left: parent.left
        anchors.right: parent.right
        anchors.bottom: parent.bottom
        visible: source != ""
        source: "Loggon_password.qml"
        onSourceChanged: console.log(loader.source);
    }

    Keys.onPressed: {
        QMLManager.handleKey(event.key)
        loader.source = QMLManager.getLoaderSource()
        console.log(loader.source)
    }
}
@
Loggon_username.qml
@
import QtQuick 2.0
Rectangle
{
anchors.fill: parent;
    Component.onCompleted: {
        // promptsBar.text = qsTr("Please enter your MTM username");
    }

    TextEntry {
        title: QMLManager.getTitle();
        onTextEntered: {
            QMLManager.setUsername(text);
            Main.mainLoader.setSource("Loggon_password.qml")
            // Main.mainLoader.source = "Help.qml"
        }
    }
}
@
- sierdzio (Moderator)
"Main" is outside of scope in your Loggon_username. It won't work.
What do you mean by out of scope? The doc says this should work if the qml files are in same directory :-)
- sierdzio Moderators last edited by
"Main" would need to be defined somewhere in Loggon_username.qml for it to be visible inside. Or maybe I misunderstood the situation.
- if I import "Main.qml" the file will not load at all (which suggests it thinks there is a syntax error)
- if Main.qml's filename does not start with a capital then, when you type main.mainLoader, it does not recognise it as a property
- if Main.qml starts with 'M', when you are in the username file typing "Main." the text changes colour and you get a list of properties including the mainLoader
????
I have tried this in a small test app and I get the same results | https://forum.qt.io/topic/30697/qml-alias-not-working | CC-MAIN-2020-40 | refinedweb | 385 | 53.68 |
SMIME_write_ASN1.3ossl - Man Page
convert structure to S/MIME format
Synopsis
#include <openssl/asn1.h>

int SMIME_write_ASN1_ex(BIO *out, ASN1_VALUE *val, BIO *data, int flags,
                        int ctype_nid, int econt_nid,
                        STACK_OF(X509_ALGOR) *mdalgs, const ASN1_ITEM *it,
                        OSSL_LIB_CTX *libctx, const char *propq);

int SMIME_write_ASN1(BIO *out, ASN1_VALUE *val, BIO *data, int flags,
                     int ctype_nid, int econt_nid,
                     STACK_OF(X509_ALGOR) *mdalgs, const ASN1_ITEM *it);
Description
SMIME_write_ASN1_ex() adds the appropriate MIME headers to an object structure to produce an S/MIME message.
out is the BIO to write the data to. value is the appropriate ASN1_VALUE structure (either CMS_ContentInfo or PKCS7). If streaming is enabled then the content must be supplied via data. flags is an optional set of flags. ctype_nid is the NID of the content type, econt_nid is the NID of the embedded content type and mdalgs is a list of signed data digestAlgorithms. Valid values that can be used by the ASN.1 structure it are ASN1_ITEM_rptr(PKCS7) or ASN1_ITEM_rptr(CMS_ContentInfo). The library context libctx and the property query propq are used when retrieving algorithms from providers.
Notes
The higher level functions SMIME_write_CMS(3) and SMIME_write_PKCS7(3) should be used instead of SMIME_write_ASN1().
The following flags can be passed in the flags parameter.
If CMS_DETACHED is set then cleartext signing will be used; this option only makes sense for SignedData, where CMS_DETACHED should also be set when the sign() method or PKCS7 creation function is called.
If cleartext signing is being used and CMS_STREAM is not set, then the data must be read twice: once to compute the signature in the sign method and once to output the S/MIME message.
If streaming is performed the content is output in BER format using indefinite length constructed encoding except in the case of signed data with detached content where the content is absent and DER format is used.
Return Values
SMIME_write_ASN1_ex() and SMIME_write_ASN1() return 1 for success or 0 for failure.
See Also
ERR_get_error(3), SMIME_write_CMS(3), SMIME_write_PKCS7(3)
Licensed under the Apache License 2.0 (the “License”). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <
Referenced By
migration_guide.7ossl(7), SMIME_read_ASN1.3ossl(3).
The man page SMIME_write_ASN1_ex.3ossl(3) is an alias of SMIME_write_ASN1.3ossl(3). | https://www.mankier.com/3/SMIME_write_ASN1.3ossl | CC-MAIN-2022-21 | refinedweb | 378 | 55.24 |
The namespace alias qualifier operator.
The namespace alias qualifier (::) is used to look up identifiers. It is always placed between two identifiers, as in this example:
global::System.Console.WriteLine("Hello World");
The namespace alias qualifier can be global. This invokes a lookup in the global namespace, rather than an aliased namespace.
For an example of using the :: operator, see the following section:
How to: Use the Namespace Alias Qualifier (C# Programming Guide)
For more information, see the following sections in the C# Language Specification:
20.9.5 Simple names
25.3 Namespace alias qualifiers | http://msdn.microsoft.com/en-us/library/htccxtad(VS.80).aspx | crawl-002 | refinedweb | 105 | 50.53 |
I'm looking for a way to disable the mouseover tooltips for images and hyperlinks in Internet Explorer.
I've dug through the settings of IE 8 with no luck. My research suggests that only Opera has ever had this option.
I checked my copy of IE3 and it doesn't have tooltips for links, though it does for images. It too has no setting to get rid of the tooltips, so it seems like this has never been an option in IE.
I understand that this is a feature of HTML and that it's up to the web page designer to determine if the page has these tooltips. I just want to get rid of all of them that I come across. There are good reasons to disable these. For example, this is an issue for some people with epileptic seizures.
How can I remove tooltips for all pages while using Internet Explorer?
As remarked by @Oliver Salzburg, the only solution is to modify the HTML document in order to get rid of these pesky tooltips, because IE does insist on them when they are there. Your tool would be Trixie, which is to IE what Greasemonkey is to Firefox.
You could base your script on the one found in Hack 70. Make Image alt Text Visible.
I am not a user of Greasemonkey/Trixie, but I would imagine that you could start with something like this totally untested script for img tags:
// ==UserScript==
// @name Alt Tooltips
// @namespace
// @description Erase Alt text from images
// @include *
// ==/UserScript==
var res = document.evaluate("//img", document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
var i, el;
for (i=0; el=res.snapshotItem(i); i++) {
el.alt='';
}
With a similar script for hyperlinks, using the XPath expression "//a" and clearing the title attribute instead:
el.title='';
Most of these options are configurable in the registry (this is in my notes file for explorer, not on a windows box currently, but it should give you an idea where to look for IE)
for explorer, they are:
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"EnableBalloonTips"=dword:00000000
"FolderContentsInfoTip"=dword:00000000
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\tips]
"Show"=dword:00000000
"StartButtonBalloonTip"=dword:00000000
"ShowInfoTip"=dword:00000000
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoSMBalloonTip"=dword:00000000
Does anybody use DMC for C++? I have been using it for several years, ever since I started programming with regular C, and it has worked great. DMC claims to support C++, but I just started learning C++ today, tried some things out, and am running into some problems.
This gives an error. Code:
#include <iostream>
This works. It's kind of annoying, but I'm willing to live with that. Code:
#include <iostream.h>
This gives "Error: undefined identifier 'std'". Code:
using namespace std;
Using the -A switch (conformance to standard C/C++) fixes this, though I don't know why.
This gives "Error: 'string' is not a member of namespace 'std'":
std::string my_string;
If I am doing something wrong (I have installed stlport and I use the -cpp switch although it automatically compiles in cpp mode if the filename extension is .cpp or .c++), or if there is an easy way to fix this I would like to know. Otherwise can anybody recommend a good command line C++ compiler for windows?
Thanks
-nb | http://cboard.cprogramming.com/cplusplus-programming/96325-dmc-digital-mars-compiler-printable-thread.html | CC-MAIN-2014-35 | refinedweb | 207 | 73.47 |
29 November 2011 06:35 [Source: ICIS news]
SINGAPORE (ICIS)--Kuwait Petroleum Corp (KPC) has sold a spot 50,000 tonne cargo of full-range naphtha for loading on 7-8 December, traders said on Tuesday.
The cargo was awarded to Trafigura at a premium of $14.00/tonne (€10.50/tonne).
KPC had agreed to sell term naphtha supplies for the period of December 2011-November 2012, with the term full-range naphtha priced at a premium of $18.50/tonne to
The term light naphtha was priced at Middle East FOB plus $20.00/tonne.
($1 = €0.75) | http://www.icis.com/Articles/2011/11/29/9512223/kuwait-sells-50000-tonnes-naphtha-for-7-8-december-loading.html | CC-MAIN-2013-48 | refinedweb | 103 | 66.57 |
Here it is, Day 1 — inception day. It is beautiful!
I will just blurt out my learnings and notes from this day, as an improvement suggestion for whosoever might read it next (which would most probably be just me 😄).
As promised, on Day 1 we were to begin with MDN's Front-end Web Dev guide.
We took off to stick to the very basics to keep things enjoyable and as Gary Vee puts it "fall in love with the process".
Hence we started with Getting started with the web. I was kind of ashamed to even start it — I mean, years spent browsing the web and whatnot, and here I was about to read "Getting started with the web" 😏 I didn't know whether to laugh or cry. 😅
This section lists very basic web dev stuff, and I was pretty glad that it was made with a complete-beginner mindset; it's for a kid who just got a new laptop. And I am glad it is structured this way.
It lists down an amazing bunch of tools used by professionals now, with links for most of them. ( I hope new guys don't get overwhelmed by it) It's an exhaustive set, and to be honest many were new to my eyes as well. So if you are a newbie reading that list, its just there to scare the weaklings. 😄
The only tools one needs to get started are Text Editor, and a Web Browser. (I'd pick VS code and Google Chrome)
Interesting Tip by MDN peeps : " You usually don't need to worry about making your web projects compatible with it, as very few people still use it — certainly don't worry too much about it while you are learning. You might sometimes run into a project that requires support for it."
It's true — unless you are working on a project where your end users are librarians, or on a government project, you would most likely not care to support Internet Explorer. But it's good to keep in mind which features of the web have limited support and compatibility. The web is an ever-growing space; one has to be mindful of many things.
How do you set up a local testing server?
I like the depth they covered here; tbh I would have simply recommended a VS Code plugin like VSCode Live Server,
but the real gold in this article was its prerequisites.
Found this video in there; it was a good-quality watch. How the Internet Works in 5 Minutes: a five-minute video by Aaron Titus to understand the very basics of the Internet.
And this Article on setting project goals literally walks you through the mindset and the thought process one should have while building one's website.
It has this real lit 🔥 line in it
How can a website help me reach my goals? By answering that, you'll find the best way to reach your goals and save yourself from wasted effort.
It's a basic thing many engineers and developers forget: what's the end goal — what are you building this website for, and why? Without that why, the how gets lost pretty quickly.
What will your website look like?
I loved how even the smallest, most benign details are covered in here.
Dealing with files
This part clears up an early confusion I had while starting with web dev: where should I be keeping my files, and how should I be structuring my project? Plus, this article/section does a great job of getting a newbie to familiarize themselves with the foreign language that is HTML 😄
HTML basics
This section barely scratches the HTML and stands true to its name HTML basics and introduces us to the commonly used tags.
Next up for Day-2 from MDN's Frontend Guide: CSS Basics
The relief one gets after coming this far is phenomenal
After a theory run, it was time to get real with FCC's JS DS and Algo course
And since it started off with the real basics, I was able to complete 25% of the Basics part of it. I'll just mark a few notes for the future me to remember.
- It was quite a Fun fact to me
The remainder operator is sometimes incorrectly referred to as the modulus operator. It is very similar to modulus, but does not work properly with negative numbers.
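To see the difference in practice, here's a quick console sketch (the `mod` helper is my own addition, not from the course):

```javascript
// % in JavaScript is a remainder: the result takes the sign of the dividend
console.log(5 % 3);   // 2
console.log(-5 % 3);  // -2 (a true modulus would give 1)

// A true modulus can be built on top of the remainder operator
const mod = (a, n) => ((a % n) + n) % n;
console.log(mod(-5, 3)); // 1
```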
- Remember that everything to the right of the equals sign is evaluated first
- I like how FCC peeps take a jab at PHP 😏
Unlike some other programming languages, single and double quotes work the same in JavaScript.
- The backslash \ should not be confused with the forward-slash /. They do not do the same thing.
- A good list of escape characters
| Code | Output |
| ---- | ------ |
| `\'` | single quote |
| `\"` | double quote |
| `\\` | backslash |
| `\n` | newline |
| `\r` | carriage return (a reminiscence of typewriter days — a control character used to reset a device's position to the beginning of a line of text; it's the CR in `CRLF`) |
| `\t` | tab |
| `\b` | word boundary (a word's beginning or end, e.g. `*word*` — the asterisks here represent the word boundaries; not sure when it'd be used though) |
| `\f` | form feed (page separator, indicating the next page) |
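Trying a few of these in the console makes it click — each escape sequence really is a single character in the string:

```javascript
// The escape sequences become single characters once the string is built
const s = 'It\'s a line\nwith a\ttab';
console.log(s);

console.log('\\'.length);   // 1 — a single backslash character
console.log('a\tb'.length); // 3 — the tab counts as one character
```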
- Another fun fact "My name is " + mName + ". And I am awesome!" is "Mad Libs" style. I would have called it the Fill in the Blanks style. :laugh:
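The "Mad Libs" / fill-in-the-blanks style is just string concatenation around a variable (the name value here is made up for the demo):

```javascript
const mName = 'Pracoon'; // any value can be dropped into the blank
const sentence = "My name is " + mName + ". And I am awesome!";
console.log(sentence);
```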
My Comments and conclusion:-
All in all, it was a good start but I almost derailed it by not starting on pre-decided time and by procrastinating on it till I almost ran out of time.
And for it I have a little Atomic Habit hack
"I will continue with the 100daysofcode challenge at 2:00 pm, right after my lunch, every day without fail" :fingers_crossed:
Discussion (6)
I can't take any programming article seriously that doesn't start counting at
0. 😂 😂 😂
Damn! Rookie mistake 😂 😂
Keep it up🎉
Thanks a lot 😄 Love these nudges ❤️ keeps me going 🙌
Great start off! Keep up with it 😊👏🏿👏🏿
Thanks a lot 🙌 😄 | https://practicaldev-herokuapp-com.global.ssl.fastly.net/pracoon/day-1-100daysofcode-32hn | CC-MAIN-2021-21 | refinedweb | 1,015 | 73.51 |
Objective
This article will explain how to use LINQ to XML to read data from an XML file and bind it to a Silverlight 3.0 DataGrid. The XML file will be read in a WCF service, and the WCF service will return a List to be bound to the grid.
Step 1
Create a Silverlight application. Select the option to host it in a Web Application project.
Step 2: Add XML file in Web project.
If you have an existing XML file, add it by right-clicking on the Web (SilverLightApplication1.Web) project and selecting Add Existing Item; otherwise, select Add New Item and choose XML File from the Data tab. Copy and paste the XML below into your newly created file. Give the XML file any name you like; the name I am giving it is Books.xml.
Your XML file in the Web project should look like this:
Books.XML
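The original sample file isn't reproduced here, so here is a plausible Books.xml with the shape a book-listing service would expect — the element names and values are illustrative, not the author's exact file:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<Books>
  <Book>
    <BookName>LINQ in Action</BookName>
    <Author>Fabrice Marguerie</Author>
    <Price>500</Price>
  </Book>
  <Book>
    <BookName>Programming Silverlight</BookName>
    <Author>Jesse Liberty</Author>
    <Price>450</Price>
  </Book>
</Books>
```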
Step 3: Creating the WCF service
Right-click on the Web project and add a WCF service from the Web tab. Give it any name you like; the name I am using here is MyService.
Creating Data Contract
This class will get serialized at client side.
Creating Service Contract
IMyService.cs
Creating Service Implementation
MyService.svc.cs
A few points
- The namespace System.Xml.Linq is used for the LINQ to XML features.
- The Load method of the XDocument class is used to load the XML document.
- We search all the descendants in the XML document for the element Book, and retrieve the values of all its child elements.
- The service returns a List of the BooksDTO class.
Compile the web project and after successfully compilation right click on service and view in browser.
Step 4: Consuming in the Silverlight client
- Add Service Reference.
- While adding the service reference, select the Advanced tab and change the collection return type from Array to List.
- Add a DataGrid to the XAML. Give it any name; the name I am using here is MyGrid.
- On page load, simply bind the result returned from the service to the DataGrid.
- Make sure your startup project is SilverLightApplication1.Web and your startup page is SilverLightApplication1TestPage.aspx; otherwise you might run into a cross-domain problem.
Output
Conclusion
I have explained how to return data from an XML file using LINQ to XML and bind it to a Silverlight DataGrid. In the next article I will show the other CRUD operations. Thanks for reading.
Superb explanation of the LINQ to XML concept with Silverlight and WCF
what is “BooksDTO”, it shows error. can you explain please.
Pingback: Monthly Report February 2010: Total Posts 12 « debug mode…… | http://debugmode.net/2010/02/23/linq-to-xml-in-silverlight-3-0/ | CC-MAIN-2015-18 | refinedweb | 402 | 69.79 |
Details
- Type:
Wish
- Status:
Open
- Priority:
Minor
- Resolution: Unresolved
- Affects Version/s: 4.0.7
- Fix Version/s: None
- Labels:None
- Number of attachments :
Description
I'm sure there is a good reason for this, but I couldn't find any documentation telling me why.
I currently want to process an XML document and leave it exactly as it comes in. I.E. no resolving of entities.
XMLInputFactory.IS_REPLACING_ENTITY_REFERENCES or XMLInputFactory.SUPPORT_DTD will do this for entities within elements, but there seems to be no way of leaving entities in attributes unresolved.
I tried using SUPPORT_DTD == false, and then using a custom "com.ctc.wstx.undeclaredEntityResolver" that just returns the entity as is:
public class UndeclaredEntityResolver implements javax.xml.stream.XMLResolver {
    @Override
    public Object resolveEntity(String publicID, String systemID, String baseURI, String namespace) throws XMLStreamException {
        String entity = "&" + namespace + ";";
        return new ByteArrayInputStream(entity.getBytes());
    }
}
but this will just result in a recursive entity problem.
Am I missing something?
Is there any nice way I fan accomplish this?
Activity
Right, Stax API has no way to do this. And in general, there is absolutely no way to do this, if attribute value must be returned as String (like Andreas pointed out). Problem is, it is not possible to know whether '&' was an unexpanded entity, or expanded from something like '&amp;'.
The only possibility would be to expose raw underlying input as is. Woodstox does not try to do this, since there is no efficient way to do it; but it does provide accurate input pointers that calling application may be able to find actual physical String (if it has copy of input data).
So I don't think there is a nice way. If you really want to leave document as is, just make a copy?
Also: these settings do not affect "standard" entities (amp, lt, gt, apos).
They are considered to be effectively same as character entities ( and such), not real general entities. As such, they will always be replaced, even in regular element content. This was based on interoperability reasons, as well as convenience.
I don't know which entities you are dealing with here, but realized you might be thinking of the default ones.
Thanks for your comments both.
If you really want to leave document as is, just make a copy?
Not applicable to my situation unfortunately, if I'm understanding you correctly.
I'm using Woodstox with JiBX to unmarshal/marshal XML, and the service it sits within has to be able to return parts of documents exactly as they came into the system, i.e. with all entities unexpanded.
I've got around this for the moment by replacing all occurrences of "&" in the XML string with some defined placeholder before unmarshalling, and then replacing the placeholder with "&" post-marshal on the way out. Not nice, but it works.
Thanks again.
Ben
Ok. I guess fundamental question is, why worry about physical serialization, but there may be reasons (... legacy systems). Any system that assumes unexpanded entities is somewhat flawed, conceptually, since XML makes no guarantees of what entities are used if any.
But I assume you know that and it's other systems (and their designers) that don't.
Additional escaping may indeed be the way to go. But if you just need to pass sub-section (element with its contents), you could still consider using location offsets if you have access to them (this depends on how input is given) and underlying content. Woodstox has a way to pass "raw" content to output. But given that you are using data binder above Woodstox, maybe it just won't work.
I guess the reason for this is that the StAX API doesn't foresee a way to report entity references in attributes. Attribute values are always reported as strings. This is in contrast with entities within elements: they can be reported as separate ENTITY_REFERENCE events. | http://jira.codehaus.org/browse/WSTX-230?focusedCommentId=213233&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-42 | refinedweb | 648 | 56.25 |
How Domain and Forest Trusts Work
Applies To: Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 with SP2, Windows Server 2003, Windows Server 2003 R2, Windows Server 2012, Windows Server 2012 R2
Trust Architecture
).
Tools..
TDO Passwords.
Trust Path Limits.
Trust Search Limits.
Trust Flow).
One-Way and Two-Way Trusts:
Windows Server 2003 domains in the same forest.
Windows Server 2003 domains in a different forest.
Windows NT 4.0 domains.
Kerberos V5 realms.
Transitive and Nontransitive Trusts
In addition to the default transitive trusts established in a Windows Server 2003 forest, by using the New Trust Wizard you can manually create the following transitive trusts.
Shortcut trust. A transitive trust between domains in the same domain tree or forest that is used to shorten the trust path in a large and complex domain tree or forest.
Forest trust. A transitive trust between one forest root domain and another forest root domain.
Realm trust. A transitive trust between an Active Directory domain and a Kerberos V5 realm.

Nontransitive trusts can be created between:
A Windows Server 2003 domain and a Windows NT domain
A Windows Server 2003 domain in one forest and a domain in another forest (when not joined by a forest trust)
By using the New Trust Wizard, you can manually create the following nontransitive trusts:
External trust. A nontransitive trust created between a Windows Server 2003 domain and a Windows NT or Windows 2000 domain in another forest.
Realm trust. A nontransitive trust between a Windows Server 2003 domain and a Kerberos V5 realm.
Trust Types
Automatic Trusts
By default, two-way transitive trusts are automatically created when a new domain is added to a domain tree or forest root domain by using the Active Directory Installation Wizard. The two default trust types are parent-child trusts and tree-root trusts.
Parent-child trust
A parent-child trust relationship is established whenever a new domain is created in a tree. The Active Directory installation process automatically creates a trust relationship between the new domain and the domain that immediately precedes it in the namespace hierarchy (for example,:
It can be established only between the roots of two trees in the same forest.
It must be transitive and two-way.
Manual Trusts
.
Realm Trusts
Trust Processes and Interactions
Many inter-domain and inter-forest transactions depend on domain or forest trusts in order to complete various tasks. This section describes the processes and interactions that occur as resources are accessed across trusts and authentication referrals are evaluated.
Overview of Authentication Referral Processing.
Kerberos V5 Referral Processing
Kerberos-Based Processing of Authentication Requests

The workstation contacts the Kerberos Key Distribution Center (KDC) on a domain controller in its domain (ChildDC1) and requests a service ticket for the FileServer1 SPN.
ChildDC1 does not find the SPN in its domain database and queries the global catalog to see if any domains in the tailspintoys.com forest contain this SPN. Because a global catalog is limited to its own forest, the SPN is not found. The global catalog then checks its database for information about any forest trusts that are established with its forest, and, if found, it compares the name suffixes listed in the forest trust trusted domain object (TDO) to the suffix of the target SPN to find a match. Once a match is found, the global catalog provides a routing hint back to ChildDC1, which directs the authentication request toward the trusted forest, where the workstation can request a service ticket for the requested service.
ForestRootDC2 contacts its global catalog to find the SPN, and the global catalog finds a match for the SPN and sends it back to ForestRootDC2.
ForestRootDC2 then sends the referral to.. | https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc773178(v=ws.10)?redirectedfrom=MSDN | CC-MAIN-2022-40 | refinedweb | 587 | 53.31 |
I am trying to resort a few lines in "Main.txt" to another file, "Main1.txt".
The lines look like this in "Main.txt":
12/03/1999,12/04/1999,1535
12/03/1999,12/04/1999,1537
12/03/1999,12/04/1999,1538
When using this code, my output looks like this:
It seems that Two (string) and Three (int) came through correctly, but not One.
I believe it has something to do with getline, but I can't really figure it out.
12/04/1999,1535,2,12/03/1999
12/04/1999,1537,2,03/1999
12/04/1999,1538,2,03/1999
Code:
#include "stdafx.h"
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <cmath>
#include <algorithm>
#include <limits>
#include <ios>
#include <cstdio>

using namespace std;

int main()
{
    char Comma;
    std::string One = "";
    std::string Two = "";
    int Three = 0;

    ofstream NewOutFile;
    ifstream NewFile ("Main.txt");
    NewOutFile.open ("Main1.txt");

    while ( getline(NewFile, One, ',') )
    {
        NewFile >> Two;
        NewFile >> Comma;
        NewFile >> Three;
        NewFile.get(); // read in trailing newline character
        NewOutFile << Two << ',' << Three << ',' << One << "\n";
    }
    return 0;
}
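A common pattern that avoids mixing getline with >> on the same stream is to read whole lines and then split each line with a stringstream — a sketch for comparison, not the original poster's code:

```cpp
#include <istream>
#include <ostream>
#include <sstream>
#include <string>

// Re-orders each "One,Two,Three" record as "Two,Three,One".
void resort(std::istream& in, std::ostream& out)
{
    std::string line;
    while (std::getline(in, line))              // read one full record at a time
    {
        std::istringstream fields(line);
        std::string one, two, three;
        if (std::getline(fields, one, ',') &&
            std::getline(fields, two, ',') &&
            std::getline(fields, three))
        {
            out << two << ',' << three << ',' << one << '\n';
        }
    }
}
```

Called with an ifstream on "Main.txt" and an ofstream on "Main1.txt", the first sample record comes out as 12/04/1999,1535,12/03/1999.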
Starscream alternatives and similar libraries
Based on the "Socket" category.
Alternatively, view Starscream alternatives based on common mentions on social networks and blogs.
- Socket.IO — Socket.IO client for iOS/OS X.
- SwiftSocket — simple TCP socket library.
- SwiftWebSocket — a high performance WebSocket client library for Swift.
- BlueSocket — IBM's low level socket framework.
- Socks — Pure-Swift Sockets: TCP, UDP; Client, Server; Linux, OS X.
- BlueSSLService — SSL/TLS add-in for IBM's low level socket framework.
- SocketIO-Kit — Socket.io iOS and OSX Client.
- WebSocket — WebSockets server for Swift 2.2 on Linux.
- SwiftDSSocket — asynchronous socket framework built atop DispatchSource.
- RxWebSocket — Reactive WebSockets.
- DNWebSocket
DNWebSocket0.7 0.0 Starscream Starscream or a related project?
README
Starscream is a conforming WebSocket (RFC 6455) library in Swift.
Features
- Conforms to all of the base Autobahn test suite.
- Nonblocking. Everything happens in the background, thanks to GCD.
- TLS/WSS support.
- Compression Extensions support (RFC 7692)
Import the framework
First thing is to import the framework. See the Installation instructions on how to add the framework to your project.
import Starscream
Connect

Create a WebSocket, set its delegate, and connect:

socket = WebSocket(url: URL(string: "ws://localhost:8080/")!)
socket.delegate = self
socket.connect()
write a binary frame
The writeData method gives you a simple way to send
Data (binary) data to the server.
socket.write(data: data) //write some Data over the socket!
write a string frame
The writeString method is the same as writeData, but sends text/string.
socket.write(string: "Hi Server!") //example on how to write text over the socket!
write a ping frame
The writePing method is the same as write, but sends a ping control frame.
socket.write(ping: Data()) //example on how to write a ping control frame over the socket!
write a pong frame

Starscream automatically responds to incoming pings with pongs. If you need to control this
process yourself, you can turn off the automatic ping response by disabling
respondToPingWithPong.
socket.respondToPingWithPong = false //Do not automaticaly respond to incoming pings with pongs.
In most cases you will not need to do this.
disconnect
The disconnect method does what you would expect and closes the socket.
socket.disconnect()
The disconnect method can also send a custom close code if desired.
socket.disconnect(closeCode: CloseCode.normal.rawValue)
Compression Extensions

Compression Extensions (RFC 7692) are supported and negotiated as part of the connection handshake.
Compression should be disabled if your application is transmitting already-compressed, random, or other uncompressable data.
Custom Queue
A custom queue can be specified when delegate methods are called. By default
DispatchQueue.main is used, thus making all delegate method calls run on the main thread. It is important to note that all WebSocket processing is done on a background thread; only the delegate method calls are changed when modifying the queue. The actual processing is always on a background thread and will not pause your app.
socket = WebSocket(url: URL(string: "ws://localhost:8080/")!, protocols: ["chat","superchat"])
//create a custom queue
socket.callbackQueue = DispatchQueue(label: "com.vluxe.starscream.myapp")
Example Project
Check out the SimpleTest project in the examples directory to see how to setup a simple connection to a WebSocket server.
Requirements
Starscream works with iOS 8/10.10 or above for CocoaPods/framework support. To use Starscream with a project targeting iOS 7, you must include all Swift files directly in your project.
Installation
CocoaPods
To use Starscream in your project, add the following to your Podfile:
source ''
platform :ios, '9.0'
use_frameworks!

pod 'Starscream', '~> 4.0.0'
Then run:
pod install
Carthage

Check out the Carthage docs on how to add a install. Add the following to your Cartfile:

github "daltoniam/Starscream" >= 4.0.0
Accio
Check out the Accio docs on how to add and install dependencies.
Add the following to your Package.swift:
.package(url: "", .upToNextMajor(from: "4.0.0")),
Next, add
Starscream to your App targets dependencies like so:
.target(
    name: "App",
    dependencies: [
        "Starscream",
    ]
),
Then run
accio update.
Rogue

First see the installation guide of Rogue, then run:

rogue add https://github.com/daltoniam/Starscream
Other
If you are running this in an OSX app or on a physical iOS device, you will need to make sure you add
Starscream.framework to be included in your app bundle. To do this, in Xcode, navigate to the target configuration window by clicking on the blue project icon and selecting the application target under the "Targets" heading in the sidebar. In the tab bar at the top of that window, open the "Build Phases" panel. Expand the "Link Binary with Libraries" group, and add
Starscream.framework. Click on the + button at the top left of the panel and select "New Copy Files Phase". Rename this new phase to "Copy Frameworks", set the "Destination" to "Frameworks", and add
Starscream.framework respectively.
TODOs
- [ ] Proxy support
License
Starscream is licensed under the Apache v2 License.
Dalton Cherry
Austin Cherry
*Note that all licence references and agreements mentioned in the Starscream README section above are relevant to that project's source code only. | https://swift.libhunt.com/starscream-alternatives | CC-MAIN-2021-17 | refinedweb | 751 | 59.9 |
Eclipse Community Forums — XML: Outline view does not show namespace when adding elements

Derek Wallace (2012-09-02):
Hi, I use the Outline view to add new elements to my doc. The elements displayed in the Outline view have the namespace prefix. However, when you right-click on an element and select Add Before / Add After, the list of elements does NOT have the namespace. In my case I have elements with the same name in different namespaces, so it's impossible to distinguish between them. My doc looks like this in the Outline view:

<ns1:RootElement>
  <ns1:name>
  <ns2:name>

If I right-click on ns2:name to add an element, my .xsd allows either <ns1:name> or <ns2:name> to be added. But the Outline view shows the allowable items as "name" and "name", so it's impossible to distinguish. I've attached a screenshot and sample .xml and .xsd files. I'm using Eclipse Indigo. Thx, Derek

Nitin Dahyabhai (2012-09-10):
Please open a proper bug report, with more detail about the steps that do work for you.

Salvador Zalapa (2013-01-11):
This issue was addressed and already fixed, so it is going to be available in further offerings.
I am just starting (like today) to program in CUDA. So i copied the makefile off one of the SDK examples and renamed some variables to make it make my own file.
The problem is that i cant get the compiler to recognise the built in variable threadIdx. I have a very simple program and i am using the command make emu=1 so that i can have printf statements in there.
Here is my code
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

// includes, project
#include <cufft.h>
#include <cutil.h>

void some(){
    int test = threadIdx.x;
    printf("%i\n", test);
}

int main(int argc, char** argv)
{
    some();
    CUT_EXIT(argc, argv);
}
and the compiler error message i get is:
-bash-3.2$ make emu=1 dave1.cu: In function 'void some()': dave1.cu:13: error: 'threadIdx' was not declared in this scope make: *** [obj/emurelease/dave1.cu_o] Error 255 -bash-3.2$
What am i doing wrong? | https://forums.developer.nvidia.com/t/a-very-simple-problem/3939 | CC-MAIN-2020-29 | refinedweb | 162 | 80.17 |
Introduction
As I discussed in my previous article, we built Windows and WPF applications using F#. Now I am taking one step forward, towards a Silverlight application using F#. If you have Visual Studio 2010, you can make an F# Silverlight application in one step.
If you are using an older version of Visual Studio, like 2005 or 2008, then you also need to add one XAML file, an HTML file, and a zip file with the .xap extension containing SilverlightFSharp.dll, System.Windows.Controls.dll, FSharp.Core.dll and AppManifest.xaml, to make a Silverlight application in F#.
Here I am using Visual Studio 2010 and taking a simple example of a button click event. When you make a project using the F# Silverlight Application template in Visual Studio 2010, you will get two files in Solution Explorer: one is MainPage.fs and the second is App.fs. You will open MainPage.fs and write your code in this file. The code block below is used to add a Button control to the application.
Note The complete code you can see in step 3.
Getting Started With Silverlight Application using F#
If you do not have the F# Silverlight Application template in your Visual Studio 2010, you can download it from the link below.
Step 1: Firstly Open a new Project in F# using Visual Studio 2010. Select F# Silverlight Application template and give name to the Project like below image.
Step 2: Now your solution explorer will contain two files MainPage.fs and App.fs as you can see in below image.
Step 3: Then Click on MainPage.fs file and write below code, your MainPage.fs window will look like below image.
Step 3: Click on the MainPage.fs file and write the code below.

namespace FSharpSilverlightApp

open System
open System.Windows
open System.Windows.Controls
open System.Windows.Media
open System.Windows.Media.Imaging

type FirstApp = class
    inherit UserControl
    new () as this = {}
    then
        () // nothing to initialize here yet
end

type SilverApp = class
    inherit Application
    new () as this = {}
    then
        this.Startup.Add(fun _ -> this.RootVisual <- new FirstApp())
        //base.Exit.Add( fun _ -> ()) //this.Application_Exit
        //this.InitializeComponent()
end

Step 4: Now press F5 to execute the code, and your first Silverlight application is ready.
Output
Summary
In this article I have discussed about how you can make a Silverlight application using F#.
©2015
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/UploadFile/f5b919/a-simple-application-in-silverlight-using-fsharp/ | CC-MAIN-2015-48 | refinedweb | 377 | 60.51 |
Fixtures for testing Google Appengine (GAE) apps
Convenience plugin on top of the testbed the Google Appengine (GAE) SDK already provides.
Install
pip install pytest-beds
After that the plugin is enabled by default. You can use specific fixtures (see below) to activate the Testbed and stub specific services.
Options
- --no-gae
- Disable the plugin, esp. do not change the python paths and try to import dev_appserver.
- --sdk-path PATH
- The plugin assumes it can just import dev_appserver. If that fails it looks up the SDK path in the environment variable GAE. Otherwise, you can specify the path to the SDK by using the --sdk-path PATH option.
- --project-root PATH
- Secondly, the plugin assume that your current path is the projects root folder, t.i. the dirctory which holds the app.yaml. You can specify a different path using --project-root PATH.
- --noisy-tasklets
- By default the plugin shortens the tracebacks when using ndb tasklets, so they don’t include the eventloop’s internal noise. Use this switch to make ndb noisy again.
Fixtures
The plugin provides fixtures to stub the different services. Usage is therefore simple and straightforward:
# Say, if you create a Foo you hit the database and put some work on a queue
def test_foo(ndb, taskqueue):
    foo = Foo.create()
    assert Foo.query().fetch() == [foo]
List of builtin fixtures:
bed mailer channel urlfetch memcache taskqueue blobstore ndb users
Users
There are two fixtures anonymous and login to handle the users-stub.
- anonymous
- Prepares the user stub so that users.get_current_user() will return None
Prepares the user stub and returns a function to log in actual users:
def test_login(login):
    # at this point users.get_current_user() will return None
    login(id=1, email='foo@gmail.com')
    # now users.get_current_user() will return a user
    login.logout()
    # now users.get_current_user() will return None again
Deferreds
The deferreds fixture inits the taskqueue stub, but returns a useful object, so you can actually run the deferred functions:
def test_work(deferreds):
    deferred.defer(work, 'to be done')
    deferreds.consume()
    assert 'work has been done'
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pytest-beds/ | CC-MAIN-2018-05 | refinedweb | 360 | 67.65 |
Numpy module (arrays)
In this section we describe features for plotting graphs
with tick marks and labels. In the image above the user is dragging the mouse to see the coordinates at the intersection of the crosshairs. This image comes from the VPython demo program graphtest.py. Graphs can also be log-log or semilog (see below).
Here is a simple example of how to plot a graph (arange creates a numeric array running from 0 to 8, stopping short of 8.1):
from visual import * # must import visual or vis first
from visual.graph import * # import graphing features
f1 = gcurve(color=color.cyan) # a graphics curve
for x in arange(0, 8.05, 0.1): # x goes from 0 to 8

    f1.plot( pos=(x, 5*cos(2*x)*exp(-0.2*x)) ) # plot

When you create a gcurve, gdots, gvbars, or ghbars, you can optionally give a list of data points, such as pos=[(10,20), (50,30), (-5,-3)], or specify
a color, such as color=color.cyan. After creating one of these graphing objects, you can add a single point or a list of additional points:
f1.plot(pos=(100,-30)) # add a single point
f1.plot(pos=[(100,-30),(20,50),(0,-10)]) # add a list
When you add points, you can optionally specify a color for these points that is different from the default color of the object:
f1.plot(pos=(100,-30), color=color.red)
You can plot more than one thing on the same graph:
f1 = gcurve(color=color.cyan)
f2 = gvbars(delta=0.05, color=color.blue)
for x in arange(0., 8.05, 0.1):
    f1.plot(pos=(x,5*cos(2*x)*exp(-0.2*x)))  # curve
    f2.plot(pos=(x,4*cos(0.5*x)*exp(-0.1*x))) # vbars
Special options for gcurve, gdots, gvbars, and ghbars
For gcurve if you
specify dot=True the current plotting
point is highlighted with a dot, which is particularly useful if
a graph retraces its path. You can specify a shape attribute "round" or "square" (default
is shape="round") and a size attribute,
which specifies the width of the dot in pixels (default is size=8).
By default the dot has the same color as the gcurve, but you can
specify a different color, as in dot_color=color.green.
For gdots you can specify a shape attribute "round" or "square" (default is shape="round") and a size attribute, which
specifies the width of the dot in pixels (default is size=5).
For gvbars and ghbars you can specify a delta attribute, which
specifies the width of the bar (the default is delta=1).
If you say g = gcurve(), g.gcurve is a curve object used to represent the gcurve.
If you say g = gdots(), g.dots is a points object used to represent the gdots.
If you say g = gvbars(), g.vbars is a faces object used to represent the vbars.
If you say g = ghbars(), g.hbars is a faces object used to represent the hbars.
To set the size, position, titles, axis limits, and colors of the graphing window itself, create a gdisplay with options like these:
gd = gdisplay(x=0, y=0, width=600, height=150,
      title='N vs. t', xtitle='t', ytitle='N',
      foreground=color.black, background=color.white,
      xmax=50, xmin=-20, ymax=5E3, ymin=-2E3)
In this example, the graph window will be located at (0,0),
with a size of 600 by 150 pixels, and the title bar will say 'N vs. t'.
The graph will have a title 't' on the horizontal axis and 'N' on the vertical
axis. The foreground color (white
by default) is black, and the background color (black by default) is white.
Instead of autoscaling the graph to display all the data, the graph in this example
will have fixed limits. The horizontal axis will extend from -20 to +50,
and the vertical axis will extend from -2000 to +5000. If you specify xmax but not xmin, it is as though you had also specified xmin to be 0; similarly, if you specify xmin but not xmax, xmax will be 0. The same rule holds for ymax and ymin.
Offsets: If you specify xmin or ymin to be greater than zero, or xmax or ymax to be less than zero, the crossing point (origin) of the x and y axes will no longer be at (0,0), and the graphing will be offset. If you offset the origin of the graph, you must specify xmax to be greater than xmin, and/or ymax to be greater than ymin.
If you simply say gdisplay(), the defaults
are x=0, y=0, width=800, height=400,
no titles, fully autoscaled.
Every gdisplay has the attribute display,
so you can place additional labels or manipulate the graphing window.
The only objects that you can place in the graphing window are labels,
curves, faces, and points.
graph1 = gdisplay()
label(display=graph1.display, pos=(3,2), text="P")
graph1.display.visible = False # make the display invisible
You can have more than one graph window: just create another gdisplay. By default, any graphing objects
created following a gdisplay belong to that
window, or you can specify which window a new object belongs to:
energy = gdots(gdisplay=graph2, color=color.blue)
Log-log and semilog plots
When creating a gdisplay, you can specify logx=True and/or logy=True to obtain a log-log or semilog plot; all values plotted along a logarithmic axis must be positive.
Histograms
A ghistogram sorts data into bins and displays the distribution as a bar graph:
from visual import * # must import visual or vis first
from visual.graph import *
.....
agelist1 = [5, 37, 12, 21, 8, 63, 52, 75, 7]
ages = ghistogram(bins=arange(0,80,20), color=color.red)
ages.plot(data=agelist1) # plot the age distribution
.....
ages.plot(data=agelist2) # plot a different distribution
You specify a list (bins) into which data will be sorted.
In the example given here, bins goes from 0 to 80 by 20's. By default, if
you later say
ages.plot(data=agelist2)
the new distribution replaces the old one. If on the other hand you say
ages.plot(data=agelist2, accumulate=True)
the new data are added to the old data.
If you say the following,
ghistogram(bins=arange(0,50,0.1), accumulate=True, average=True)
each plot operation will accumulate the data and average the
accumulated data. The default is no accumulation and no averaging.
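The binning that ghistogram performs can be sketched in plain Python, independent of VPython (the function name bin_counts is illustrative):

```python
# Count how many data values fall into each bin [b, b + width),
# mirroring what ghistogram does with bins=arange(0, 80, 20).
def bin_counts(data, bins, width):
    counts = [0] * len(bins)
    for value in data:
        for i, b in enumerate(bins):
            if b <= value < b + width:
                counts[i] += 1
                break
    return counts

agelist1 = [5, 37, 12, 21, 8, 63, 52, 75, 7]
# bins from arange(0, 80, 20) are [0, 20, 40, 60]
print(bin_counts(agelist1, [0, 20, 40, 60], 20))  # [4, 2, 1, 2]
```

Plotting these counts against the bin edges gives the bar graph that ghistogram draws.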
gdisplay vs. display
A gdisplay window is closely related to a display window. The main difference
is that a gdisplay is essentially two-dimensional and has nonuniform x and
y scale factors. When you create a gdisplay (either explicitly, or implicitly
with the first gcurve or other graphing object), the current display is saved
and restored, so that later creation of ordinary VPython objects such as sphere
or box will correctly be associated with a previous display, not the more
recent gdisplay.
Hi,
I am sorry if this has been asked before. I couldn't find it anyway.
I am wondering when Code->Implement/Override Methods will be available for Swift. The options are there in the menu but they are greyed out despite my code clearly having the need to implement a method from an interface (protocol).
Example:
import UIKit
class ViewController: UIViewController, UITableViewDataSource, UITableViewDelegate {
}
Error:(13, 1) type 'ViewController' does not conform to protocol 'UITableViewDataSource'
[Option Implement Methods still greyed out]
I am completely new to iOS/OSX development so I might have misunderstood something fundamental.
Thanks in advance.
/Pär Eklund
Think it's there to tease you... no seriously, JetBrains are just catching up and they will get around to implementing this feature... but I would imagine they are working full pelt on higher-priority items, such as completion for Cocoa methods and many others.
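In the meantime you can type the required methods in by hand. For UITableViewDataSource, the two required methods look roughly like this (Swift 1.x-era signatures; the "Cell" reuse identifier is an illustrative placeholder for one you register with the table view):

```swift
func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return 0 // return the number of rows in this section
}

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    // "Cell" is a placeholder reuse identifier registered with the table view
    let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) as UITableViewCell
    // configure the cell here
    return cell
}
```

Once those are present, the conformance error goes away.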
Thank you for the report. I've filed a feature request; you can follow its status there. Also feel free to submit reports directly to our bug tracker.
Thank you, Maria.
Just to clarify the reason for my question:
Since the option is there but greyed out and that the release of AppCode 3.1 (with Swift support) seems to be imminent, I was under the impression that Implement/Override Methods were scheduled for an AppCode 3.1 release. As I understand your reply, this is not the case. Is that correct and if so, when can this feature be expected?
I am definitely not complaining. I think JetBrains is doing a fantastic job. It's just that having to manually add all non-implemented methods for any interface I implement feels like a big step back for a spoiled Java programmer / IntelliJ user. ;)
Thanks
/Pär
Yes, you're right. And unfortunately for now there're no estimations for this feature since we have a lot of priority tasks. You can vote for the issue to prioritize it. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206584975-Implement-Methods-for-Swift | CC-MAIN-2020-29 | refinedweb | 321 | 57.98 |
Regards,
Hiram Chirino wrote:
> Anyone out there?
>
> On Mon, Nov 3, 2008 at 9:23 AM, Hiram Chirino <[EMAIL PROTECTED]> wrote:
>> Congrats on the release. Now that has been completed, I'd like to see if
>> you guys are willing to revisit the issue of a maven based build. If yes,
>> I'd be happy to assist making that happen.
>>
>> Regards,
>> Hiram
>>
>> On Mon, Oct 27, 2008 at 10:35 PM, Patrick Hunt <[EMAIL PROTECTED]> wrote:
>>> Our first official Apache release has shipped and I'm already looking
>>> forward to 3.1.0. ;-) In particular I believe we should look at the
>>> following for 3.1.0:
>>> 1) There are a number of issues that were targeted to 3.1.0 during the
>>> 3.0.0 cycle. We need to review and address these.
>>> 2) System test. During 3.0.0 we made significant improvements to our
>>> test environment. However we still lack a large(r) scale system test
>>> environment. It would be great if we could simulate large scale use
>>> over 10s or 100s of machines (ensemble + clients). We need some sort of
>>> framework for this, and of course tests.
>>> 3) Operations documentation. In general docs were greatly improved in
>>> 3.x over 2.x. One area we are still lacking is operations docs for
>>> design/management of a ZK cluster. see
>>> 4) JMX. Documentation needs to be written & the code reviewed/improved.
>>> Moving to Java 6 should (afaik) allow us to take advantage of improved
>>> JMX spec features not available in 5. We should also consider making
>>> JMX the default rather than optional (ie you get JMX by default when
>>> the ZK server is started). We need to ensure that ops can monitor/admin
>>> ZK using JMX.
>>> 5) (Begin) multi-tenancy support. A number of users have expressed
>>> interest in being able to deploy ZK as a service in a cloud.
>>> Multi-tenancy support would be a huge benefit (quota, qos, namespace
>>> partitioning of nodes, billing, etc...)
>>> Of course ZooKeeper is open to submissions that aren't on this list.
>>> If you have any suggestions please feel free to enter a JIRA or submit
>>> a patch. Additionally I'd like to see us move to an 8 week release
>>> cycle.
>
> --
> Regards,
> Hiram
> Blog: Open Source SOA

I've updated the JIRA version list to reflect this. Due to the holiday season approaching I've listed 3.1.0 with a ship date of Jan 19th (see the roadmap on the JIRA). If you have any questions/comments please reply to this email.

Patrick
Choosing a Windows Embedded API: Win32 vs. the .NET Compact Framework
Written by:
Paul Yao, Windows Embedded MVP
The Paul Yao Company
September 2002
Applies to:
Microsoft® Win32®
Microsoft .NET Compact Framework
Microsoft Windows® CE 3.0
Microsoft Windows CE .NET
Microsoft Embedded Visual Tools 3.0 and 4.0
Microsoft Embedded Visual C++® 3.0 and 4.0
Microsoft Embedded Visual Basic®
Microsoft Visual Studio® .NET
Microsoft ASP.NET Mobile Controls
Microsoft ADO.NET
Contents
Development Tools
Win32 - The Assembly Language of Windows
.NET Compact Framework - "Rapid Application Development"
Connecting Win32 and .NET Compact Framework Code
Conclusion
Summary: This article continues the analysis from another article, Application Development Landscape for Windows CE .NET, where three Windows CE APIs were compared: Win32, MFC, and the .NET Compact Framework. This article focuses on two of these APIs—Win32 and the .NET Compact Framework —to provide details on selecting an API for specific programming tasks. The choice of API ultimately dictates the choice of development tool: Embedded Visual C++ 3.0/4.0 or Visual Studio .NET. (14 printed pages)
An important piece of planning any development project is deciding which application program interface (API) to use. Picking an API is a very important task. You are making a decision that will affect every other aspect of your project. In developing embedded applications, the issue is whether to use the Microsoft® Windows® 32-Bit API (Win32®) or the Microsoft .NET Compact Framework.
The choice is not that hard to make. Most software will either use Win32 alone, or use a blend of Win32 for low-level code and the .NET Compact Framework for high-level code. For a detailed description of the features of both APIs, see Application Development Landscape for Windows CE .NET.
Win32 is the core API of Microsoft Windows CE. If the operating system supports a feature, it must by definition be supported in Win32. For headless devices, your only choice is Win32. For certain kinds of operating system extensions, again, your only choice is Win32. If you want something that will be portable to a wide range of Windows CE platforms, such as a device driver, using Win32 makes a lot of sense.
Win32 and the .NET Compact Framework are each portable in their own way. The .NET Compact Framework has binary portability, so that a single executable file can run on different CPUs, such as StrongARM, XScale, MIPS, SH3, SH4, and so on. But the .NET Compact Framework is not supported on some Windows CE devices, notably those that have no display screen. Even some display-based devices do not support enough of the required Win32 API to support the .NET Compact Framework.
The Win32 API, on the other hand, does not have binary portability. For example, if you are using the Win32 API to build an application to run on two platforms, MIPS and SH3, you must ship two executable files. Instead, Win32 has source-code portability, so you use the same source code to build those two executable files. Win32 will be present on a Windows CE system even if the .NET Compact Framework cannot be supported. This makes Win32 the best choice for low-level components that must run on many different configurations of Windows CE.
The .NET Compact Framework provides the ability to build a dialog-style user interface by using one of two languages: C# (in the C family of languages) or Microsoft Visual Basic® .NET (in the Visual Basic family of languages). You can also build traditional graphical user inferface (GUI) applications that are not dialog-box style by processing input and drawing onto the application's main window (called a "form" in .NET). There is great support for database management in the .NET Compact Framework, especially for Microsoft SQL Server™ CE. There is also great support for creating in-memory databases, otherwise known as DataSet and DataTable, by using Microsoft ADO.NET. And if you want to manipulate eXtensible Markup Language (XML) data, or create a Web Service client, you will find a lot to help you out in the .NET Compact Framework.
The rest of this article digs deeper into the subject of when to use Win32 and when to use the .NET Compact Framework.
Development Tools
When you pick an API, you automatically pick the development tool to access the API. Four application development tools are available for Microsoft Windows CE-based development; the platforms and APIs each one supports are described below. (For some target platforms, a separate SDK download is required.)
To build Win32 applications or dynamic link libraries (DLLs), you can use Microsoft Embedded Visual C++® (eVC++) version 3.0, for Windows CE 3.0-based platforms, or version 4.0, for Windows CE .NET-based platforms. If you want to build for both versions of Windows CE, use Embedded Visual C++ version 3.0. You won't lose many features, except support for C++ exceptions, which are only supported in eVC++ 4.0.
To build a .NET Compact Framework application, you will need Microsoft Visual Studio® .NET 2003. A useful feature of this environment is that you can drag-and-drop controls from a toolbox onto a form, then click on elements in the form to add code behind the controls. Built-in IntelliSense shows you the properties, methods, and events that are supported for each control. Programmers who have worked with Visual Basic, either on the desktop or as Embedded Visual Basic, will find much that is familiar in the development environment.
Microsoft announced in the fall of 2001 that the available features of Embedded Visual Basic (eVB) would be frozen with the set available in version 3.0. Also, eVB will not be ported to new CPUs, such as XScale, as those become available. Programmers who want to continue working with the Basic language should plan to work with the .NET Compact Framework, which supports the Visual Basic .NET programming language. It also provides a drag-and-drop development environment that will be very familiar to eVB developers.
A .NET Compact Framework application can call Win32 dynamic link libraries, including the system's core library, COREDLL.DLL, by using a feature called "Platform Invoke." This is just one of several mechanisms that can be used to interoperate between the two APIs.
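As a sketch of what Platform Invoke looks like from C# (the wrapper class name CoreApi is illustrative; GetTickCount is a real COREDLL.DLL export):

```csharp
using System.Runtime.InteropServices;

public class CoreApi   // illustrative wrapper class
{
    // GetTickCount is exported by COREDLL.DLL, the Windows CE core library
    [DllImport("coredll.dll")]
    public static extern uint GetTickCount();
}

// Elsewhere in managed code:
//   uint msSinceBoot = CoreApi.GetTickCount();
```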
Win32—The Assembly Language of Windows
The Win32 API is the core programming interface for Windows CE. In the early 1990s, Microsoft announced that its two core strategic technologies were Win32 and the Component Object Model (COM). The Win32 API is supported on all Windows operating systems, including 16-bit systems (Microsoft Windows 95, Windows 98, and Windows Millennium Edition) and 32-bit systems (Microsoft Windows NT®, Windows 2000, and Windows XP).
Windows CE was the first Microsoft operating system to use the Win32 API for both device drivers and applications. Sixteen-bit Windows systems use virtual device drivers (VxDs) at their lowest layer, and 32-bit Windows systems have a proprietary kernel-mode API that is decidedly not Win32. Therefore, Windows CE is more deeply connected to Win32 than any other Microsoft operating system.
Win32 has been referred to as the "assembly language of Windows," because it is a very low-level API with very primitive functions. Just as multiple machine language instructions are needed to accomplish even the simplest task, multiple Win32 function calls are often needed to do real work. This is, however, the API that all other APIs and development tools ultimately rely on to get things done.
In the context of the .NET Framework, Win32 code is referred to as "unmanaged code." This is because the Common Language Runtime does not manage the memory, or guarantee the security and type-safety, of Win32 code. .NET code, by contrast, is called "managed code" because all these features and more are provided by the .NET runtime. Understanding the distinction between managed and unmanaged code is key to understanding the difference between these two APIs.
When to Use Win32
The Win32 API provides the ability to create low-level components, like device drivers and DLLs that extend the operating system. Win32 can also be used for applications that must run on headless (HLBASE-derived) Windows CE .NET platforms. Here are some basic features of Win32 that make it the best choice, or the only available choice, for certain applications:
- Fastest executables
- Best real-time support
- Source code (inter-platform) portability
- Ability to wrap COM for access by .NET Compact Framework applications
- Ability to create device drivers
- Ability to create control panel applets
- Support for custom user-interface skins
- Support for security extensions
- Ability to build Simple Object Access Protocol (SOAP) Web Servers
- Support for Pocket PC shell extensions
- Ability to use existing Win32 code
Fastest executables
Win32 provides the fastest executables. Part of the reason is that Win32 executables ship as native machine instructions. .NET Compact Framework executables, by contrast, ship as Microsoft Intermediate Language/Common Intermediate Language (MSIL/CIL), which must be converted to native code. This conversion takes time, and you cannot anticipate when it might occur.
The IL-to-native conversion takes place when a page of code is decompressed from the object store and/or read-only memory (ROM) and moved to program memory. This conversion is only needed when the code is brought into program memory, so after that the code can be executed with no further conversion required, as long as it is not deleted by the system memory manager.
Another aspect of .NET Compact Framework code that can cause delays is the Garbage Collector. The Garbage Collector moves objects on the heap as part of its operation. Any managed thread that is running in a process might need to access one or more object on the heap, so to prevent this, all threads are halted. There is no question that the Garbage Collector provides a valuable service, but there is no way to schedule or control when it will run. This means that the execution of managed code might be inconsistent.
Best real-time support
The recommendation to use Win32 for real-time support is related to its support of the fastest executables. At its core, real-time processing demands both a correct algorithm and a timely algorithm. Real-time handling is used for data collection, as well as control for devices as varied as robots in a manufacturing arena or mice and keyboards used for input.
Real-time support means more than just doing things as fast as possible. Windows CE real-time support provides a guarantee of consistency for the highest-priority thread in the system and for the highest-priority interrupt handler. Windows CE supports 256 thread priorities through the CeSetThreadPriority function. It also provides the ability to manipulate the scheduling quantum of individual threads with the CeSetThreadQuantum function.
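As a sketch, a native Win32 thread might adjust its own scheduling with these functions (the priority and quantum values here are illustrative only):

```c
#include <windows.h>

// Sketch: run the calling thread at a high Windows CE priority
// with a 50 ms scheduling quantum (both values are illustrative).
void RunNearRealTime(void)
{
    CeSetThreadPriority(GetCurrentThread(), 1); // 0 is the highest of the 256 priorities
    CeSetThreadQuantum(GetCurrentThread(), 50); // quantum in milliseconds
}
```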
As described earlier, the Garbage Collector requires all managed threads to halt. But threads running in native code can continue running even when the Garbage Collector is running. If such threads return to managed code while the Garbage Collector is running, they are blocked until the Garbage Collector is finished. This means that you can feel confident that a Win32 thread running native code can co-exist peacefully with managed code.
Source code (inter-platform) portability
Both Win32 and .NET Compact Framework applications provide a degree of portability: Win32 provides source-code portability, and the .NET Compact Framework provides binary portability. Win32 portability serves you when you want some source code to run on a wide range of Windows CE platforms, even on platforms without the .NET Compact Framework runtime. The limitations on Win32 executables is that they rely on having the necessary Win32 functions and are CPU-dependent.
Windows CE is a highly configurable operating system. This means that the set of Win32 functions supported on one platform might not match the set of Win32 functions on a second platform. Even when using the Win32 API, getting the portability to a broad range of platforms requires some diligence.
A set of functions corresponding to the "Tiny Kernel" configuration in Platform Builder are guaranteed to be on every Windows CE platform. (This was called MINKERN in earlier versions of the Platform Builder.) Among the features supported are the following kernel services:
- Module functions (LoadLibrary, FreeLibrary, GetProcAddress)
- Thread functions (CreateThread, Sleep, and so on)
- Synchronization functions (critical sections, mutexes, semaphores)
- File I/O functions
- Registry functions
- Memory allocation functions (VirtualAlloc, HeapCreate, LocalAlloc, and so on)
- C-runtime string functions (wcscpy, wcslen, and so on)
- Point-to-point queue functions (for example, CreateMsgQueue)
- Serial communications support (SetCommMask, GetCommState, and so on)
The Windows CE .NET Platform Builder defines a Standard SDK component, which, when added to a platform, includes a baseline set of operating system features. Intended only for display-based systems—those built on the Internet Appliance Base Files (IABASE) core—this baseline defines a core set of features to make it easier to write components that can run on a wide range of Windows CE .NET-based platforms. For details, see the Standard SDK for Windows CE .NET.
Ability to wrap COM for access by .NET Compact Framework applications
While the desktop .NET Framework has support for interoperability with the Component Object Model (COM), the .NET Compact Framework has no such support. You must build a Win32 DLL with a set of wrapper functions around any Win32 ActiveX®/COM library that you want to use.
The success of this approach depends on the kind of component you are trying to call. For components with little or no user interface, this support should work well. Here are some of the Windows CE system services that come packaged with one or more COM components:
- Pocket Outlook® Object Model (POOM)
- DirectX® Multimedia API, including Direct3D® (3-dimensional drawing), ActiveMovie®, DirectMusic®, and DirectPlay®
- Mail API (MAPI)
- Object Exchange (OBEX)
- OLE Database API (OLEDB)
- Simple Object Access Protocol (SOAP)
- Pocket Internet Explorer Web viewer window
- Bluetooth API
- Internet Explorer 5.5 add-ins
- Universal Plug and Play (UPnP)
- Access structured-storage files
- Access COM automation servers
Ability to create device drivers
All device drivers should be written using Win32. Some of the reasons have already been mentioned: size, speed, real-time support, and portability to the broadest range of platforms.
The other, architectural, reason is that device drivers are always dynamic link libraries built with C-callable exported functions. The .NET Compact Framework does not support the creation of this kind of DLL, although you can build .NET-compatible DLLs.
This is the first item in the list that falls into the category of "operating system extensions." All such operating system extensions are dynamic link libraries, and as such you will implement all of them by using Win32.
Ability to create control panel applets
You have the ability to add new icons to the Control Panel in Windows CE, just as you do on the desktop. This provides a centralized place for users to find and change system settings, and is particularly important for otherwise invisible services and device drivers that are installed on a system.
A control panel applet is a Win32 DLL that exports a function named
CplApplet. These are loaded, and the associated dialog boxes are displayed, by the Control Panel as needed. Platform Builder provides source code for the Control Panel, along with the source code for several example control panel applets, at \WINCE400\public\WCESHELLFE\OAK\CTLPNL\CPLMAIN.
Custom user-interface skin
Windows CE .NET provides the ability to change the user-interface skin of the operating system. This is similar to the concept of owner-draw controls, by which a Win32 program can change the appearance of various controls like push buttons, status bars, header controls, ListView controls, and the Tab control. (The desktop supports other owner-drawn items, like the Listbox, that are not supported in Windows CE.) An owner-draw push button, for example, can display a bitmap of a fish, an animated sequence, or any other graphic image desired by the creator. Windows CE .NET provides, in short, full control over the appearance of a control.
The user-interface skin gives a platform developer the ability to make the same types of changes in the non-client areas of windows that owner-draw controls provide for the client areas. A skin lets you change system colors, the tiny widget bitmaps for scroll bar arrows, check boxes, radio buttons, and other small system images. You can also change the look of most of the system controls.
Windows CE .NET ships with two standard skins: a Windows 95 look and a Windows XP look. When building a custom platform, you can modify these to make your platform look quite distinct and different from other Windows CE-based platforms.
This feature is enabled by modifying a set of C++ source files that are built at platform creation time and merged into the Graphics, Windowing, and Events Subsystem (GWES). Only Win32, then, can be used to build user-interface skins.
If you want to build a custom shell, the approach discussed above is to add a new skin to an existing shell. Another alternative is to write a shell from scratch. The key benefit you gain here is more control over what the user can see and do. Platform Builder for Windows CE .NET contains a sample shell in the following location: .\Public\wceshellfe\Oak\Taskman.
Support for security extensions
Windows CE has had support for enhancing the security of network communications since version 2.0. This support includes Secure Socket Layer (SSL), the encryption API (CRYPT32.DLL), and X.509 digital certificates. Windows CE .NET adds support for the Security Support Provider Interface (SSPI), a feature first seen in Windows 2000. An SSPI is a Win32 DLL that exports a single function named InitSecurityInterfaceW.
Two security providers are available for Windows CE .NET: LAN Manager and Kerberos. By using SSPI, you could add your own proprietary, custom security mechanism to a Windows CE .NET-based platform.
On the subject of security, it is worth pointing out that .NET Compact Framework executables are arguably less secure than Win32 executables for proprietary algorithms. One reason is that tools like the IL Disassembler, ILDASM.EXE, allow anyone to dump the machine instructions to your modules. This can also be done for Win32 modules, but .NET binaries contain metadata which provides the names of properties, methods, and events, among others, to outside eyes. One way to combat unwanted disclosure of technical details is to use a tool, which some call an obfuscator, that changes the metadata to make it less intelligible. Developers who are concerned about hiding proprietary information will no doubt want to dig deeper into the issue of whether Win32 or the .NET Compact Framework provides the level of protection required.
Ability to build SOAP Web Servers
Web Services provide the ability to call a function across a network, whether the network is an intranet or the Internet. This represents the next step in the evolution of distributed computing. To support this, the .NET Compact Framework has the ability to create Web Service clients. In other words, a .NET Compact Framework application can declare and call functions that reside on a SOAP-compliant Web Server. This includes not just ASP .NET-based Web Servers, but any server that supports the industry-standard SOAP protocol.
While the .NET Compact Framework allows you to create Web Service clients, it does not provide support for building Web Service servers. You can, however, host a Web Server on Windows CE .NET. Doing so involves using the SOAP 2.0 toolkit that ships with Platform Builder, and building the necessary COM objects, which themselves are Win32 DLLs, to make your Web Service available.
Support for Pocket PC shell extensions
The Pocket PC shell allows for some refinements, all of which must be implemented with Win32. This includes the Today screen, custom Software Input Panel (SIP) modules, and others.
Ability to use existing Win32 code
If you have code that is working in Win32, you should keep that code in Win32. Unless you have some overriding need to port your code to the .NET Compact Framework, you are better off keeping that code as Win32 code. Someone once suggested 10x as a guideline—unless you get ten times the benefit from a new technology, you should continue using the older technology.
You might need to package existing Win32 code in a way that is accessible from the .NET Compact Framework. The end of this article summarizes the supported mechanisms for Win32-to-.NET Compact Framework interoperability.
.NET Compact Framework—"Rapid Application Development"
Whereas Win32 is good at creating low-level code for device and operating-system support, the .NET Compact Framework is good for interacting at a high level. It is best suited for building an application with a user interface that collects data, stores it in a local database, and, from time to time, forwards that data to a server-based database.
As mentioned earlier, Win32 code is referred to as unmanaged code, and .NET code is called managed code. The term "managed code" refers to the fact that the Common Language Runtime (CLR), which might also be called the "code manager," provides several assurances for such code:
- Managed code cannot have bad pointers.
- Managed code cannot create memory leaks.
- Managed code supports strong type-safety.
A key benefit of managed code is that certain common errors that plague Win32 programmers are handled for you by the managed development environment.
When to Use .NET Compact Framework
Here are some situations in which it makes sense to use the .NET Compact Framework. The following sections discuss these in detail.
- Using platforms that have the .NET Compact Framework
- Building user interfaces
- Building custom controls
- Achieving binary portability to multiple CPUs
- Building Web Service clients
- Building data-intensive and database-intensive applications
- Building XML-intensive applications
- Using existing .NET Framework code
Using platforms with the .NET Compact Framework
To be able to run .NET Compact Framework applications, platforms must support the .NET Compact Framework runtime. The .NET Compact Framework requires a minimum of Windows CE .NET (Windows CE version 4.1 for the released version of the .NET Compact Framework), with two exceptions: Microsoft Pocket PC and Microsoft Pocket PC 2002, both of which run Windows CE version 3.0, support the .NET Compact Framework. .NET Compact Framework support for Microsoft Smartphone is also expected in early 2003.
A second requirement for running .NET Compact Framework applications is that platforms must support a sufficient subset of the Win32 API, the foundation on which the .NET Compact Framework was built. The set of Win32 APIs that are required to support .NET Compact Framework must be present. .NET Compact Framework cannot run on headless Windows CE .NET-based platforms, which include Tiny Kernel, Media Appliance, and Residential Gateway. To run the .NET Compact Framework, a Windows CE .NET platform must have a display screen—that is, it must be an IABASE-based platform.
The easiest way for platform developers to ensure support is to include the .NET Compact Framework binaries. At present, the Compact Framework binaries are in beta, and as such cannot be incorporated into a Windows CE .NET-based platform. After the .NET Compact Framework is released, platform developers can include the .NET Compact Framework binaries with the Windows CE .NET operating system image itself.
Building user interfaces
The .NET Compact Framework does a very good job of building a user interface. The Visual Studio .NET Forms Designer allows you to drag and drop controls from the toolbox onto a form. The Properties window helps identify supported events and properties for the various controls, all from a straightforward user interface. Programmers who are used to building applications for the desktop .NET Framework will find many similarities.
But there are differences. Although 28 of the 35 desktop controls are supported in the .NET Compact Framework, each control has been refined to meet the size and performance requirements of Windows CE. For this reason, a subset of the desktop Properties, Methods, and Events (PMEs) are supported in the .NET Compact Framework controls. The specific set of PMEs depends on the specific class. For the base Control class, 27 of 76 properties, 35 of 182 methods, and 17 of 58 events are supported. The biggest impact will be on code that you write on the desktop and wish to port to the .NET Compact Framework.
Nonetheless, you will find a very capable set of controls, including such favorites as the DataGrid and the TreeView control, that GUI programmers are accustomed to using.
Building custom controls
Building custom controls is related to building a user interface. The .NET Compact Framework ships with 28 custom controls, including all of the standard ones that Windows users expect: text editing windows (TextBox), push buttons (Button), and various controls to display lists (ComboBox, ListBox, and ListView).
If, however, these built-in controls do not provide the support you need, you can build custom controls. one of the most common approaches involves inheriting from the base control class named Control, and adding custom support for your own properties, events, and methods.
Achieving binary portability to multiple CPUs
On a platform with .NET Compact Framework, a single .NET Compact Framework executable file will run on multiple CPUs. The key benefit here is that it simplifies the setup process when targeting platforms that support multiple CPUs. This will be particularly helpful for .NET Compact Framework libraries with custom controls and other useful, multi-platform widgets.
Building Web Service clients
One of the great strengths of the desktop .NET Framework as well as the .NET Compact Framework is the ease of creating Web Service clients. A Web Service client allows a network function call to be performed. It represents the next step forward in the long evolution of distributed computing technologies that include Remote Procedure Calls (RPCs) and the Distributed Component Object Model (DCOM).
A Web Service provides function-call semantics to information that resides on a Web Server. Web Servers have traditionally served up reams and reams of HTML, which is a human-readable tag language, but Web Services use the XML tag language, which is more of a machine-readable markup language. Web Services use the set of XML that is specified in the Simple Object Access Protocol (SOAP). The use of this industry-standard protocol means that .NET Compact Framework applications can access Web Services from a broad, heterogeneous set of Web Server vendors. Support for Web Services adds yet another option to the distributed, wireless, "always-connected" model of computing.
Building data-intensive and Database-intensive applications
Another reason to use the .NET Compact Framework is to take advantage of the great data handling and database support it provides. On the data handling front, the .NET Compact Framework has the Garbage Collector to handle cleanup. Memory leaks and the resulting heap overflows are much less likely with the .NET Compact Framework, owing to the Garbage Collector.
Like other class libraries, the .NET Compact Framework has container classes to help with your data-handling chores. You can choose between arrays, lists, hash tables, dictionaries, queues, and stacks. Each of these helps you organize and sort through a wide range of in-memory data objects. Win32, by contrast, has no built-in container classes.
The .NET Compact Framework supports binding data to controls. This is the ability to insure synchronization between a user-interface control, like a TextBox control, and the data source. Data binding lets you set once and forget, without having to continually worry about initializing controls when they first appear, and harvesting changes when a control is about to be destroyed.
And finally, there is ADO.NET. ADO.NET provides both local and remote data access. Local data access involves working with a database in a local file system and manipulating data sets in memory. Remote data access involves accessing remote databases, such as one hosted in SQL Server 2000, from a platform. Both are supported by the .NET Compact Framework. Among the data providers that are shipped with the .NET Compact Framework are ones that support access to SQL Server CE, and also to server-based SQL Server.
Building XML-intensive applications
The underlying storage format of ADO.NET is XML, so if you use the ADO.NET services you are using XML. But if you are doing other things that require XML support, such as exchanging XML-based text documents, parsing XML data that has been queried from a database, or packaging complex transactions to send over Web Services, the .NET Compact Framework has a rich set of XML services to help you.
Using existing .NET Framework code
Another reason to adopt the .NET Compact Framework is if you have desktop .NET Framework code. The .NET Compact Framework is a rich subset of the desktop .NET Framework, and the key elements are the same, including namespaces, classes, method names, property names, and data types.
What you will find, however, is that the .NET Compact Framework is a much smaller library than the desktop .NET Framework. Comparing just the binary files, the desktop .NET Framework is approximately 30 megabytes, while the .NET Compact Framework is 1.5 megabytes. It represents a subset of the desktop .NET Framework that is finely tuned for both size and performance.
Connecting Win32 and .NET Compact Framework Code
If you take the approach that I suggest—mixing .NET Compact Framework and Win32 code—you need to how to connect the two types of code together. This satisfies a broad design goal of .NET: interoperability. In the terminology of .NET, Win32 code is unmanaged code because the .NET Common Language Runtime (CLR) execution engine does not manage Win32 code. It does not collect garbage, handle exceptions, provide security, or have access to any metadata for Win32 code. By contrast, the CLR does all these things, and more, for managed code.
Calling Win32 Functions
The .NET Compact Framework supports a subset of the interoperability of the desktop .NET Framework. In particular, you can make function calls into Win32 DLLs, but you cannot call COM interfaces (a feature known as COM Interop).
The ability to call Win32 DLLs is known as "Platform Invoke." Platform Invoke involves adding a declaration to a function in a .NET Compact Framework class. For example, here is a declaration that lets you call the
MessageBox function in the system's COREDLL.DLL library. (The 'W' at the end of the function in this declaration indicates that strings in this function accept the "Wide"—meaning Unicode—character set.)
[C#] [System.Runtime.InteropServices.DllImport("coredll.dll")] public static extern int MessageBoxW(int hWnd, String text, String caption, uint type); [VB.NET] <System.Runtime.InteropServices.DllImport("coredll.dll", _ SetLastError:=False)> _ Public Shared Function MessageBoxW(ByVal hWnd As Integer, _ ByVal txt As String, ByVal caption As String, _ ByVal Typ As Integer) As Integer End Function
Now, you can call the function as if it were a regular .NET Compact Framework function. Here are examples of calling this function from both C# and VB.NET:
As this example shows, you do not need to build your own Win32 DLLs—the system provides many DLLs and many functions in each DLL that you can call. If you decide that you want to build your own DLLs, you should keep a few points in mind:
- Only EXPORTED functions can be accessed. To export a function you can either:
- Use an EXPORTS statement in a module definition (.DEF) file, or
- Use the __declspec(dllexport) compiler keyword.
- Be sure to disable C++ function name mangling; otherwise you will not be able to find your function. The easiest way to do this involves using the extern "C" keyword (see example, below). Use a tool like DEPENDS.EXE to check that your DLL exports all the functions that need exporting.
- Select the __stdcall calling convention, which is the default used by the .NET Compact Framework to call into Win32 functions (see example, below). This limits your functions to a fixed number of parameters, but in doing so it pushes the work of stack cleanup into the called function, resulting in smaller, faster code. (To get support for a variable number of parameters, use the __cdecl keyword, and then modify the .NET Compact Framework function declaration to match.)
Here is an example of these three points implemented in
MyMessageBox, my version of the Win32
MessageBox function:
This simple example shows how you can pass simple data types—integers and strings—from the .NET Compact Framework into Win32. You can also pass other data types, in particular structures, from .NET Compact Framework code into Win32.
.NET Compact Framework interoperability only supports calls into Win32 libraries. It does not support calls from Win32 into the .NET Compact Framework. But you can communicate between Win32 and .NET Compact Framework using a mechanism that is familiar to Win32 programmers: windowing messages.
MessageWindow
The .NET Compact Framework team created the
MessageWindow class, which resides in the
Microsoft.WindowsCE.Forms namespace, as a way to communicate between Win32 code and .NET Compact Framework code.
MessageWindow is a wrapper around a Win32 window. Any Win32 program can send a Win32 message to this object using regular Win32 message-sending functions—
PostMessage or
SendMessage.
A .NET Compact Framework program that uses
MessageWindow must create a window procedure. Unlike a regular Win32 window procedure, the one you create for
MessageWindow resides in the .NET Compact Framework application or library. After a window has been created, the .NET Compact Framework must send the window handle to the Win32 code. This can be accomplished either by using a Platform Invoke function call, or by sending a message to another Win32 window.
MessageWindow lets a Win32 application pass three 32-bit values to a .NET Compact Framework application: the message value, a wParam, and an lParam. For some purposes, this might be enough.
Conclusion
Windows CE .NET supports the Win32 API, and it will support the .NET Compact Framework as soon as it is released. In some situations—for low-level driver code, operating system extensions, legacy code, or headless services—using the Win32 API makes sense. In most other cases, a combination of .NET Compact Framework code and Win32 will serve you well. Stretch the .NET Compact Framework as far as it will take you, then use the Platform Invoke and
MessageWindow support when you need the help of the underlying Win32 libraries. | https://msdn.microsoft.com/en-us/library/ms836774.aspx | CC-MAIN-2017-09 | refinedweb | 5,742 | 57.16 |
On 02/26/2013 01:28 PM, LRN wrote: > I'm looking at gnunet-fs-gtk core at the moment, and i don't get it. > > It is my understanding that 'next_id' is simply an identifier, and its > presence in metadata indicates that GNUnet client should do a search > for that identifier, and any results should be considered 'updated' > versions of the original. Fairly straightforward. > > add_updateable_to_ts() inserts "last_id" value it gets from the > iterator as PSEUDONYM_MC_LAST_ID, while setting PSEUDONYM_MC_NEXT_ID > to an empty string. > > GNUNET_GTK_master_publish_dialog_execute_button_clicked_cb() gets > PSEUDONYM_MC_LAST_ID and PSEUDONYM_MC_NEXT_ID and uses them as > "identifier" and "update identifier" respectively. > > When populating the treeview, PSEUDONYM_MC_CURRENT_ID_EDITABLE is set > to FALSE for all updateable items, it seems, and TRUE for all "empty" > rows (for new publications that do not update anything). > PSEUDONYM_MC_NEXT_ID_EDITABLE is always TRUE, no matter what. > > But. GNUNET_FS_namespace_list_updateable () passes nsn->id as a second > callback argument (which becomes "last_id"), and nsn->update as a last > callback argument (which becomes "next_id"). > > So. fs-gtk pseudonym threeview seems to have four different item types: > 1) "Blanks" - items for new publications, where you can edit both 'id' > and 'next_id' Right. > 2) "Leaves" - items for updates, where 'id' is frozen with the value > of 'next_id' from a previous publication, and 'next_id' is editable. Right, so here you can put in an update (generation X+1) and specify the identifier for generation X+2. > 3) "Stems" - items that were inserted during the updateable items > graph walking. Their 'id' is frozen with the value of 'next_id' from a > previous publication, and 'next_id' is editable. They differ from the > leaves in that there already were updates to these items. 
Updating > them again will sprout more leaves (creating ambiguity, as one stem > will now have >1 possible updates instead of 0 or 1 updates). Right. > 4) "Root" - initial publication that started the update graph. Its > 'id' is frozen with the value of 'id' (!) from the initial > publication, and 'next_id' is editable. Making "updates" with this > item will, in fact, produce a new publication with the same identifier > (thus increasing ambiguity, as one identifier will now yield multiple > items), and not update anything. Not quite. The issue is that there kind-of is no root. Theoretically the update graph can have cycles (update for A is B and update for B is A). So the code goes through some pain to try to find a "nice" way to represent the (potentially) cyclic graph in a tree structure. That of course is a mess, but as we technically cannot prevent a cyclic construction (as we may only have a partial view of the namespace), I somehow had to deal with this issue. So the 'root' is not the 'initial' publication. You could, in fact, post some item with identifier B to be updated by C first, and then LATER post another item with identifier A to be updated by B. The tree structure is supposed to represent the resulting directed graph as best as possible. Additionally, even if the directed graph is acyclic, it can have _multiple_ "roots" in the tree (update for A is C, update for B is C). > IMO (4) should not be in the list at all, why does > add_updateable_to_ts() add it? And the need of (3) is debatable as > well, since they break linearity of the update graph. If (3) is to be > offered, it should at least be indicated appropriately (to think of > it, (4) may remain in the tree as well, it just has to be unusable for > publications). > Do we _strive_ for linear update graphs at all? 
The goal was to coax users into making an update TREE (not linear), but to make it "easy" to construct and keep a tree structure, while still being able to somehow capture the mess you get if the user somehow specified some arbitrary directed graph instead. Linear was never the intent here; however, as from our other discussions I might still be happy if a future version of the GUI focused on simple linear update graphs; the question then is how we deal with non-linear constructions --- do we simply "forbid" managing those within the GUI? Is the entire namespace ONE linear update list, or do we allow multiple lines? Or keep trees and force a forest? Again, libgnunetfs allows much, and we might want a more restricted version for the GUI. > It'd prefer to encourage users to maintain linear update graphs. > If someone wants branching graphs, we should offer a special widget > for that (you ever seen git commit trees in tutotirals? that's how > that special widget should look like, roughly). Yes, having a special widget to represent the directed graph would also work, but hacking that up is a major nightmare, and I don't see that happen anytime soon. Additionally, the question is if this wouldn't be another one of those cases where we write a ton of code that then is useful for 0.0001% of the users that understand AND need it. At this point, I'd rather like to see a simplistic GUI for namespaces that may not allow much but that users do understand and sometimes use, rather than heavy work on custom widgets for complex operations few will ever understand or need. So for me the question is more if we simplify to linear, to trees, keep the current pseudo-tree (tree representing a directed graph) and/or how we make whatever we choose to do _easy_ to understand. I'm very open to suggestions (or patches ;-)). Happy hacking! Christian | https://lists.gnu.org/archive/html/gnunet-developers/2013-03/msg00000.html | CC-MAIN-2020-50 | refinedweb | 923 | 59.13 |
C library function - time()
Advertisements
Description
The C library function time_t time(time_t *seconds) returns the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds. If seconds is not NULL, the return value is also stored in variable seconds.
Declaration
Following is the declaration for time() function.
time_t time(time_t *t)
Parameters
seconds -- This is the pointer to an object of type time_t, where the seconds value will be stored.
Return Value
The current calendar time as a time_t object.
Example
The following example shows the usage of time() function.
#include <stdio.h> #include <time.h> int main () { time_t seconds; seconds = time(NULL); printf("Hours since January 1, 1970 = %ld\n", seconds/3600); return(0); }
Let us compile and run the above program, this will produce the following result:
Hours since January 1, 1970 = 373711 | http://www.tutorialspoint.com/c_standard_library/c_function_time.htm | CC-MAIN-2014-41 | refinedweb | 140 | 55.13 |
Welcome to the Parallax Discussion Forums, sign-up to participate.
Andy Lindsay (Parallax) wrote: »
Have you connected the I/O pins to the motor controller input where the hobby RF receiver output would normally go? If not, a detailed description of your setup would help.
joecooldeejay wrote: ».
' {$STAMP BS2}
' {$PBASIC 2.5}
pulse VAR Word
maneuver VAR Word
FREQOUT 4, 2000, 3000
DEBUG "8 = forward", CR,
"2 = backward", CR,
"4 = rotate left", CR,
"6 = rotate right", CR,
"5 = stay still"
DO
SERIN 16, 84, 20, Timeout, [maneuver]
Timeout:
LOOKDOWN maneuver, ["5", "8", "2", "4", "6"], maneuver
ON maneuver GOSUB StayStill, Forward, Backward, Left, Right
LOOP
StayStill:
INPUT 15
INPUT 14
INPUT 13
INPUT 12
RETURN
Forward:
HIGH 15
INPUT 14
HIGH 13
INPUT 12
RETURN
Backward:
INPUT 15
HIGH 14
INPUT 13
HIGH 12
RETURN
Left:
INPUT 15
HIGH 14
HIGH 13
INPUT 12
RETURN
Right:
HIGH 15
INPUT 14
INPUT 13
HIGH 12
RETURN
landonmay13 wrote: »
Does this work with the ActivityBot? If not, can you make one for it?
Duane Degn wrote: »?
/*
Keyboard Controlled ActivityBot.c
*/
#include "simpletools.h" // Library includes
#include "abdrive.h"
terminal *term; // For full duplex serial terminal
char c = 0; // Stores character input
int main() // Main function
{
simpleterm_close(); // Close default same-cog terminal
term = fdserial_open(31, 30, 0, 115200); // Set up other cog for terminal
drive_speed(0, 0); // Start drive system at 0 speed
// Display user instructions and prompt.
dprint(term, "Check Echo On in SimpleIDE Terminal\n\n");
dprint(term, "f = Forward\nb = Backward\nl = Left\nr = Right\n\n>");
while(1) // Main loop
{
c = fdserial_rxTime(term, 50); // Get character from terminal
if(c == 'f') drive_speed(32, 32); // If 'f' then forward
if(c == 'b') drive_speed(-32, -32); // If 'b' then backward
if(c == 'l') drive_speed(-32, 32); // If 'l' then left
if(c == 'r') drive_speed(32, -32); // If 'r' then right
if(c == 's') drive_speed(0, 0); // If 's' then stop
}
}
Thanks a lot.
Product page:
Downloads & Resources
The simplest way to add keyboard control of the turret servo would be to duplicate elements of the wheel control servo code for a third turret servo. For example, you could start by adding a couple of variables for turret character and turret pulse control. Something like this:
turretChar VAR Byte
turretPulse VAR Word
Then, add a second SERIN command to get a second character for turret direction to the DO...LOOP:
SERIN 16, 84, 20, TimeoutTurret, [turretChar]
TimeoutTurret:
Inside the Go subroutine, you'd probably want to add something like this (assuming turret servo is connected to P14):
LOOKDOWN turretChar, ["4", "7", "8", "9", "6"], turretChar
LOOKUP turretChar, [1100, 925, 750, 550, 350], turretPulse
PULSOUT 14, turretPulse
For more information on standard servo control, download "What's a Microcontroller? v3.0 (.pdf)" from the resource page, and check out Chapter 4.
For more information on Boe-Bot continuous rotation servo control, download "Robotics with the Boe-Bot Text v3.0 (.pdf)" from the resource page and check Chapter 2, Activity #4, #6, and Chapter 4.
I'm wondering if a microphone on a PropBot could hear a note from the piano and react to the pitch. What a set of possibilities for integration of music and motion. I'll start digging through the Objects Exchange. Remember the video of the Hokey Pokey Toddler Dance Team from five years ago?
If you'd like to be able to do more, I'd recommend going to and downloading the What's a Microcontroller PDF. Each chapter has a few activities, and each activity takes about 15 minutes to work through. You can skip the 5th activity in chapters 2 and 3. By the time you are through with Chapter 4, you'll be able to do it on your own, and you'll know all about the "fancy new code stuff" and lots more..
I am connecting the basic stamp homework board to the motor inputs on the snap rover. This is the snap rover.. I'm just using the yellow base with the motors in it.
It looks to me like you'll need to control the U8 motor control block with I/O pins and series resistors. I'd recommend this circuit:
Homework.......Series................U8 Motor
Board..............Resistor.............Control Block
P15
R=1k
LF
P14
R=1k
LB
P13
R=1k
RF
P12
R=1k
RB
With that circuit, this code should work. You might need to reduce the series resistor values from R=1k to R=470, but try it with 1k first.?
Index of my projects and interesting forum posts
Check:
for the current ActivityBot-based projects. The projects are updated regularly, so check back often. Drop them a line if you see something for the BS2 BOE-Bot, and are interested in doing the same with the ActivityBot.
Duane yes, can you help me understand how to control ActivityBot on macbook connected via USB?
The trick to this code is disconnecting the default serial terminal connection for print and scan calls and replacing it with one that runs in another cog. Why? Because the fdserial library has an rxTime function that only waits for a certain amount of time, so the code won't get stuck waiting for characters. Although not critical here, there are lots of cases where the ActivityBot will want to also scan for sensors.
There are lots of applications which can benefit from monitoring the serial line without blocking the flow of the program.
Thank you for the example.
Index of my projects and interesting forum posts
Thank you again! | http://forums.parallax.com/discussion/134398 | CC-MAIN-2019-35 | refinedweb | 924 | 60.45 |
First, I really like Bob DuCharme's article on RDFLib.
Those Python code snippets make me want to weep, after all the time I've spent in
Mozilla's cumbersome JavaScript binding to the RDF service. Equally, I like the three
code snippets he shows at the end, proving why RDF is more than XML: three ways
to express the model, and all three can get merged into one graph.
XML.com had two articles on the subject of generating form UIs from schema
definitions. The first, written last year by Chimezie Ogbuji from FourThought and titled Editing XML Data
Using XUpdate and HTML Forms, shows how to generate HTML form
elements from schema definitions using XSLT. The article shows not just creation
of new documents, but editing existing ones. For the latter, Chime also hints at
how to use XUpdate to find the nodes you changed, and communicate only the
changes to the server, who applies the xupdate in diff/patch fashion.
The second article, "Web-based XML Editing with W3C XML Schema and XSLT",
is more specific. Actually it is two articles, part one and part two.
How does the form machinery, called XSLGUI, grab data from the
existing instance document? "...the XSLGUI sets the name of each form element that it makes for each element equal to the XPath position of that element in the XML document". Here's what it looks like:
<input name="/person/phone[2]" value="0630458920"/>
The page
shows the full example of the XSLT for the form. The first article also shows
what an XUpdate document looks like. Part 2 of the series goes into
MetaXSLGUI, which is kind of like Formulator. Instead of writing
the form manually, you generate it using some hints.
I like MVC, I like interop, and I like to leverage industry trends, as do others. Thus, I'm interested
in ways to have multiple "VC" approaches to go with Zope 3's "M" and "VC". Particularly "VC" approaches
that run outside of the Zope 3 architecture, to truly show that Zope 3 isn't a closed
black box. You can indeed use other technologies with Zope 3 by means of
standard protocols.
On my OS X box here I have a prototype of such a beastie. It uses DAV to make the
remote "M" in Zope 3 a local DOM in IE 6 and Mozilla. Basically I have a local "M"
to go with the remote "M". As you navigate via DAV, you grab more data from
the remote "M" and shove it into the local "M", then use XSLT and JS to redraw the
screen. Which, since it happens in around 12 milliseconds, isn't noticeable to
the user. And since the much-maligned XSLT has kung foo powers that border
on the insane, you have a rich palette of options.
You already some huge performance wins. The screen isn't repainting all the time,
so the user has an immediate since of responsiveness. If they are returning to a
folder they've already visited, they don't have to go to the server. In fact, when
running as a Moz chrome app, you can have a tree with multiple servers.
But I'm really interested in the next steps. DAV gives me a rich metamodel based on
namespaces. When the author makes a change in the "VC", we don't do some weird
out-of-band form submission. We also don't do a remote procedure call, which
I've came out the closet this week and said I don't like. Instead, we just change
the local "M" and let it handle updating the remote "M" (using PROPPATCH, or MOVE,
or DELETE.)
We can also look at some of Jon Udell's ideas on tapping into XPath for navigation
purposes. I'm pretty convinced that trees are inadequate as the only navigation
path. Jon's articles provoke a number of ways to build new ways to move around
inside a pile of content.
And finally, these form ideas hint at solving one of the last problems: what if
the thing you want to edit isn't a document?
10:15:04 AM comment [
I wonder: would this make a Jython version of ZODB more likely? Don't know if
Jython is in sync with the CPython 2.3 yet.
9:31:50 AM comment [ | http://radio.weblogs.com/0116506/2003/07/18.html | crawl-002 | refinedweb | 730 | 71.24 |
Created on 2010-10-07 08:26 by francescor, last changed 2014-03-10 22:11 by python-dev. This issue is now closed.
Tested with version 3.2a2. Not tested on version 2.7.
The current implementation of functools.total_ordering generates a stack overflow because it implements the new comparison functions with inline operators, which the Python interpreter may reflect if "other" does not implement them. Reflecting the comparison makes the interpreter call the generated lambda again, which recurses until the stack overflows.
Run the attached test file for an example of this behavior.
Attached is a solution to the problem, which implements each comparison only in terms of the class's own __xx__ and __eq__ operators.
Also in the file there is a complete test suite for it.
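For readers without the attachments, the failure mode can be reproduced with a self-contained sketch. The decorator below is an illustrative reconstruction of the inline-operator approach, not the attached test file or the stdlib source:

```python
# Illustrative reconstruction of the buggy inline-operator approach
# (names here are hypothetical, not the attached files).

def buggy_total_ordering(cls):
    # The old decorator derived __gt__ from __lt__ with an inline operator:
    cls.__gt__ = lambda self, other: other < self
    return cls

@buggy_total_ordering
class A:
    def __lt__(self, other):
        if not isinstance(other, A):
            return NotImplemented
        return True

@buggy_total_ordering
class B:
    def __lt__(self, other):
        if not isinstance(other, B):
            return NotImplemented
        return True

try:
    A() > B()   # A.__gt__ -> B() < A() -> B.__lt__ returns NotImplemented,
                # so the interpreter retries the reflected A.__gt__ -> recursion
except RecursionError:
    print("stack overflow, as reported")
```

When B.__lt__ answers NotImplemented, the interpreter falls back to the reflected operation, which is the very lambda that asked the question in the first place.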
Thanks, this is a good idea.
Thanks for the report and patch.
Fixed. See r87853.
This also affects Python 2.7, where it hasn't been fixed. Maybe reopen it?
FWIW, I just tested svnmerging the revision, the patch applied with minor merge conflicts and the test suite passes.
Éric, would you like to apply this to 2.7?
New changeset 94c158199277 by Éric Araujo in branch '2.7':
Fix the total_ordering decorator to handle cross-type comparisons
This is not fixed. The accepted fix doesn't take NotImplemented into account, with the result that comparing two mutually-incomparable objects whose ordering operations were generated with total_ordering causes a stack overflow instead of the expected "TypeError: unorderable types: Foo() op Bar()".
I've attached a fix for this. It properly takes NotImplemented into account. It also generates __eq__ from __ne__ and vice versa if only one of them exists.
I'm not sure that we really care about handling NotImplemented (practicality beats purity). At some point, if someone writing a class wants complete control over the rich comparison methods, then they're going to have to write those methods.
But it seems pointless to force someone to implement all of the rich comparison methods when they may want to do something as simple as this:
class Foo:
    ...
    def __lt__(self, other):
        if not isinstance(other, Foo):
            return NotImplemented
        return self.some_value < other.some_value
It may seem pointless, but it takes less than a minute to do it and it would be both faster and clearer to do it manually. There's a limit to how much implicit code generation can or should be done automatically.
Also, I'm not too keen on the feature creep, or having the tool grow in complexity (making it harder to understand what it actually does). I would also be concerned about subtly changing the semantics for code that may already be using total_ordering -- the proposed change is probably harmless in most cases with only a minor performance hit, but it might break some code that currently works.
BTW, in Py3.x you get __ne__ for free whenever __eq__ is supplied.
Ok. I did write that against Python 2, so I wasn't aware of __eq__ and __ne__. I'll keep that in mind.
I am curious, however, as to how this could break existing code. It seems like code that relies on a stack overflow is already broken as it is.
> I am curious, however, as to how this could break existing code.
> It seems like code that relies on a stack overflow is already
> broken as it is.
Probably so. I worry about changes in semantics but it might be harmless.
We .
I'm attaching a file with the example classes returning NotImplemented, and a different implementation of a total ordering, as an example of how returning NotImplemented from one class gives the other class a chance to handle the comparison. This is the ultimate cause of the bug, and new_total_ordering handles it properly.
I've attached a file demonstrating the stack overflow. It assumes total_ordering has been defined as per new_total_ordering.py.
Ah!.
That's my point. My version, sane_total_ordering.py, fixes this by using traditional functions and explicit NotImplemented checks.
Yeah, I can't say it's pretty though. :) Anyway this is an issue for 3.2 and 2.7 as well, then, so I add them back.
Ok. Yeah, I won't argue that it's pretty :-)
I think the whole issue is indeed how NotImplemented is treated. To me saying that 'not NotImplemented' is True is wrong. About the stack overflow I found there are various possible fixes, however none will nice.
By definition, NotImplemented is the way that a method or operation have to signal to the interpreter that it doesn't know how to handle given operand types. IMHO, it shouldn't be possible to receive NotImplemented as operand value, and it shouldn't have a boolean value. Indeed, t should be handled as a special case by the interpreter.
To go further, I am not really sure that NotImplemented should be a return value. Probably, an exception that is trapped by the interpreter when evaluating an expression would be easier to define and handle.
Of course, such a change should be deeply grokked before being put in place, also because of the high impact on code that already relies on NotImplemented having a value.
I was also surprised by the special return value, but it seems a bit overkill to change the implementation of rich comparison operators just because it's tricky to make a short and pretty class decorator that extends some operators to all operators. :)
And removing the support for returning NotImplemented is something that only can be done at the earliest in 3.4 anyway.
On the one hand, it's not just a matter of total_ordering and rich comparison operators, because all user defined operators may return NotImplemented when they get types that they don't know how to handle.
On the other hand, if such a decision is taken, a long path should be planned to move handling of unknown types from one way to the other.
NotImplemented is a speed and maintainability hack - the runtime cost and additional code complexity involved in doing the same operator signalling via exceptions would be prohibitive (check Objects/abstract.c in the CPython source if you want the gory details).
As far as an implementation of @total_ordering that correctly handles NotImplemented goes, yes, I absolutely agree we should do this correctly. The fact that it is *hard* is an argument in *favour* of us getting it right, as there is a decent chance that manually written comparison operations will also stuff it up.
That said, I don't think sane_total_ordering quite gets the semantics right, either.
Some helper functions in the closure would let the existing lambda functions be updated to do the right thing (and I believe the semantics I have used below are the correct ones for handling NotImplemented in @total_ordering). (I haven't actually run this code as yet, but it should give a clear idea of what I mean)
def not_op(op, other):
# "not a < b" handles "a >= b"
# "not a <= b" handles "a > b"
# "not a >= b" handles "a < b"
# "not a > b" handles "a <= b"
op_result = op(other)
if op_result is NotImplemented:
return op_result
return not op_result
def op_or_eq(op, self, other):
# "a < b or a == b" handles "a <= b"
# "a > b or a == b" handles "a >= b"
op_result = op(other)
if op_result:
# Short circuit OR, as op is True
# NotImplemented is also passed back here
return op_result:
# Short circuit AND, as not_op is False
# NotImplemented is also passed back here
if op_result is NotImplemented:
return op_result
return not op_result
return self.__ne__(other)
def not_op_or_eq(op, self, other):
# "not a <= b or a == b" handles "a >= b"
# "not a >= b or a == b" handles "a <= b"
op_result = op(other)
if op_result is NotImplemented:
return op_result
if op_result:
return self.__eq__(other)
# Short circuit OR, as not_op is True
return not op_result
def op_and_not_eq(op, self, other):
# "a <= b and not a == b" handles "a < b"
# "a >= b and not a == b" handles "a > b"
op_result = op(other)
if op_result is NotImplemented:
return op_result
if op_result:
return self.__ne__(other)
# Short circuit AND, as op is False
return op_result
The conversion table then looks like:
convert = {
'__lt__': [
('__gt__',
lambda self, other: not_op_and_not_eq(self.__lt__, self, other)),
('__le__',
lambda self, other: op_or_eq(self.__lt__, self, other)),
('__ge__',
lambda self, other: not_op(self.__lt__, other))
],
'__le__': [
('__ge__',
lambda self, other: not_op_or_eq(self.__le__, self, other)),
('__lt__',
lambda self, other: op_and_not_eq(self.__le__, self, other)),
('__gt__',
lambda self, other: not_op(self.__le__, other))
],
'__gt__': [
('__lt__',
lambda self, other: not_op_and_not_eq(self.__gt__, self, other)),
('__ge__',
lambda self, other: op_or_eq(self.__gt__, self, other)),
('__le__',
lambda self, other: not_op(self.__gt__, other))
],
'__ge__': [
('__le__',
lambda self, other: not_op_or_eq(self.__ge__, self, other)),
('__gt__',
lambda self, other: op_and_not_eq(self.__ge__, self, other)),
('__lt__',
lambda self, other: not_op(self.__ge__, other))
]
}
Also, a note regarding efficiency: as it calls the underlying methods directly and avoids recursing through the full operand coercion machinery, I would actually expect this approach to run faster than the current implementation.
Changed stage and resolution to reflect the fact that none of the existing patches adequately address the problem.
I like Nick Coghlan's suggestion in msg140493, but I think he was giving up too soon in the "or" cases, and I think the confusion could be slightly reduced by some re-spellings around return values and comments about short-circuiting.
def not_op(op, other):
# "not a < b" handles "a >= b"
# "not a <= b" handles "a > b"
# "not a >= b" handles "a < b"
# "not a > b" handles "a <= b"
op_result = op(other)
if op_result is NotImplemented:
return NotImplemented
return not op_result
def op_or_eq(op, self, other):
# "a < b or a == b" handles "a <= b"
# "a > b or a == b" handles "a >= b"
op_result = op(other)
if op_result is NotImplemented
return self.__eq__(other) or NotImplemented
if op_result:
return True is NotImplemented:
return NotImplemented
if op_result:
return False
return self.__ne__(other)
def not_op_or_eq(op, self, other):
# "not a <= b or a == b" handles "a >= b"
# "not a >= b or a == b" handles "a <= b"
op_result = op(other)
if op_result is NotImplemented:
return self.__eq__(other) or NotImplemented
if op_result:
return self.__eq__(other)
return True
def op_and_not_eq(op, self, other):
# "a <= b and not a == b" handles "a < b"
# "a >= b and not a == b" handles "a > b"
op_result = op(other)
if op_result is NotImplemented:
return NotImplemented
if op_result:
return self.__ne__(other)
return False
Raymond, one of the devs here at the PyCon AU sprints has been looking into providing an updated patch for this. Do you mind if I reassign the issue to myself to review their patch (once it is uploaded)?
Attaching.
As part of this, I finally reviewed Jim's proposed alternate implementations for the helper functions. Katie's patch used my version while I figured out the differences in behaviour :)
The key difference between them relates to the following different approaches to handling unknown types in __eq__:
@functools.total_ordering
class TotallyOrderedEqualsReturnsFalse:
def __init__(self, value):
self._value = value
def __eq__(self, other):
return isinstance(other, Weird) and self._value == other._value
def __lt__(self, other):
if not isinstance(other, Weird): return NotImplemented
return self._value < other._value
@functools.total_ordering
class TotallyOrderedEqualsReturnsNotImplemented:
def __init__(self, value):
self._value = value
def __eq__(self, other):
if not isinstance(other, Weird): return NotImplemented
return self._value == other._value
def __lt__.
In practice, lots of types are written that way, so we need to preserve the current behaviour of not checking the equality operations if the ordered comparison isn't implemented, or we will inadvertently end up making "<=" or ">=" return an answer instead of raising TypeError.
On Mon, Jul 8, 2013 at 3:30 AM, Nick Coghlan wrote:
> The key difference between them relates to the following different approaches to handling unknown types in __eq__:
> @functools.total_ordering
> class TotallyOrderedEqualsReturnsFalse:
...
> def __eq__(self, other):
> return isinstance(other, Weird) and self._value == other._value
> @functools.total_ordering
> class TotallyOrderedEqualsReturnsNotImplemented:
...
> def __eq__.
I had not considered this. I'm not sure exactly where to improve the
docs, but I think it would be helpful to use a docstring (or at least
comments) on the test cases, so that at least someone looking at the
exact test cases will understand the subtlety.
> In practice, lots of types are written that way, so we need to preserve the current behaviour of not checking the equality operations if the ordered comparison isn't implemented, or we will inadvertently end up making "<=" or ">=" return an answer instead of raising TypeError.
I had viewed that as a feature; for types where only some values will
have a useful answer, I had thought it better to still return that
answer for the values that do have one. I freely acknowledge that
others may disagree, and if you say the issue was already settled,
then that also matters.
-jJ
I'm actually not sure which of us is correct - Katie and I will be looking into it further today to compare the existing implementation, my proposal and yours to see if there's a clear winner in terms of consistent.
It may be that we end up choosing the version that pushes towards more correct behaviour, since types incorrectly returning True or False from comparisons (instead of NotImplemented) is actually a pretty common bug preventing the creation of unrelated types that interoperate cleanly with an existing type.
OK,.
Attached.
Nick, let me know when you think it is ready and I'll review the patch.
I.
> Since this is such an incredibly niche edge case
> (the ordered comparison has to return NotImplemented
> while __eq__ returns True),
*and* the types are explicitly supposed to ordered,
based on what is being tested
> I remaining consistent with the existing behaviour
> is the most important consideration.
Agreed, once I consider that additional caveat.
After more thought, I'm changing this to Py3.4 only. For prior versions, I'm content to document that there is no support for NotImplemented, and if that is needed, then people should write-out all six rich comparisons without using the total ordering decorator.
I don't think it is a good idea to introduce the new behaviors into otherwise stable point releases. This patch is somewhat complex and has potential for bugs, unexpected behaviors, misunderstandings, and intra-version compatability issues (like the problems that occurred when True and False were added in a point release many years ago).
Agreed.
I had actually assumed this would be 3.4 only, otherwise I wouldn't have
suggested using the new subtest feature in the test case.
Nick.
Thanks Katie - Raymond, the patch is ready for review now
If you're happy with it, then the only other things it should need prior to commit are NEWS and ACKS entries (I think it's too esoteric a fix to mention in What's New).
Hello, I have run into this when I wanted to use OrderedEnum and the example in enum docs seemed too repetitive to me. It's nice to know that it's being worked on.
Raymond, do you still want to look at this one? Otherwise I'll finish it up
and commit it before the next alpha (I'll check the example in the enum
docs to see if it can be simplified, too).
Updated patch that includes the simplified OrderedEnum example in the enum docs and also updates the enum tests to check that type errors are correctly raised between different kinds of ordered enum.
Raymond, I'm happy to leave this until alpha 4, but I'm raising the priority a bit since I think the inclusion of Enum in the standard library increases the chances of people wanting to use functools.total_ordering to avoid writing out the comparison methods in situations where incompatible types may encounter each other.
Nick,.
One other thought: The OrderedEnum example should not use the total ordering decorator.
To the extent that the docs are trying to teach how to use Enum, they should focus on that task and not make a side-trip into the world of class decorators. And to the extent that the docs are trying to show an example of production code, it would be better for speed and ease of tracing through a debugger to just define all four ordering comparisons.
New changeset ad9f207645ab by Nick Coghlan in branch 'default':
Close #10042: functools.total_ordering now handles NotImplemented
The committed patched was based directly on Katie's last version, without my enum changes.
Raymond - feel free to tweak the wording on the docs notes or the explanatory comment if you see anything that could be improved.
Thanks Nick and Katie. This looks great. :-)
New changeset 1cc413874631 by R David Murray in branch 'default':
whatsnew: total_ordering supports NotImplemented (#10042) | https://bugs.python.org/issue10042 | CC-MAIN-2021-17 | refinedweb | 2,795 | 52.49 |
Introduction to React Hooks
React hooks are something that everyone uses at the moment. A nice feature that no one seems to understand what they are. Each time I ask someone what are hooks, usually response is they are useState and useEffect functions. Those are examples of hooks, but it doesn’t answer what they are. And in this post, I am trying to simplify what they are.
Background
When I started working with React, I loved writing class components. Even when functional components started becoming popular, I still kept writing classes. But they do suffer from some critical issues. Class components are harder to minimize. There is much boilerplate code, and don’t get me even start on the issue with this keyword. Functional components are much cleaner, but there was a problem of access to the state and lifestyle components. It is where React hooks come. They enable access to React features from function components. And the following are two examples of hooks you get with React and how they make your life easier.
State hook
For accessing to component state, there is the useState hook. With the class components, you would need to use a whole set of lifecycle methods. First, you would need a constructor to set up the initial state, then making functions to update the state. Each time you create those functions, there is an issue with this. I still haven’t met junior, who didn’t ask me what this is when they see the bind method. Well, many more senior developers often ask the same question. With useState, it is only one function. You pass its initial state. As a return, you get a variable containing a value and a function you can use to update it. Executing this function also triggers the re-render of the component—much cleaner and more straightforward code.
Class component with one state variable:
import React, {PureComponent} from 'react'; class Counter extends PureComponent { constructor(props) { super(props); this.state = { counter: 0 } this.increment = this.increment.bind(this) } increment() { this.setState({counter: this.state.counter + 1}) } render() { return (<div> <div>{this.state.counter}</div> <div> <button onClick={this.increment}>Increment</button> </div> </div>) } }
Function component with hooks:
import React, {useState} from 'react'; function Counter() { const [counter, setCounter] = useState(0); const incrementCounter = () => setCounter(counter + 1) return ( <div> <div>{counter}</div> <div> <button onClick={incrementCounter}>Increment</button> </div> </div> ) }
Side effects hook
Saying side effects hook doesn’t mean much. But when you have a task you want to execute after the component mounted, this is the hook you want to use. Maybe you want to register some event listeners, subscribe to API, unsubscribe from API, or any other action that should not be inside the main component body. This hook is the place to do it. And suppose you are coming from a class component background. In that case, if you had some actions in componentDidMount, componentWillUnmount, and similar function, there is a high chance it is a place for useEffect hook. This hook needs at least one argument a function to execute. And if you have some cleanup action to do, this function can also return a function that would do that.
import React, {useEffect} from 'react'; function Chat() { useEffect(() => { ChatAPI.subscribe(); return () => { ChatAPI.unsubscribe(); } }); return <div>Chat</div> }
There are more things about both useState and useEffect hook to explain. One of them is executing useEffect conditionally. But that is not the topic of this post. The goal is to show how they enable access to React features and in a much cleaner way.
Pros and cons, but mainly pros
The only real limitation of hooks is that they need to be unconditional. That means you can’t put them inside of an if statement, and you can’t create a state at a later point in time. Also, many developers place way too much logic inside of it. That is not needed as you can have multiple of the same hook. With the class component, you would need to place registering all APIs into componentDidMount or similar. With hooks, you can have multiple useEffect hooks in the same component—one for each task you want to perform. And last, since hooks are just functions, they are much easier to exclude and reuse. Export it and import where it is needed. That means cleaner, more reusable, and more straightforward to test code. No one can object to that.
Wrap up
Hooks don’t introduce some new functionality you couldn’t do with class components. They give you a new and clean way to do it. Above, I used two hooks to illustrate that, but there are other hooks, and you can create your own. You can find a full list of built-in hooks on the React documentation page. There are many other resources on hooks. An excellent source for understanding is the React Conf talk by Sophie Alpert, Dan Abramov, and Ryan Florence. But I also suggest the Codú Community channel where Niall has a whole series of videos on hooks. You can watch the introduction one here.
For more, you can follow me on Twitter, LinkedIn, GitHub, or Instagram. | https://kristijan-pajtasev.hashnode.dev/introduction-to-react-hooks | CC-MAIN-2021-43 | refinedweb | 865 | 66.44 |
When.
In my case this truly horrendous piece:
Thanks to a wide array of open source tools and libraries available its possible to create something less fantastical but somewhat functional within an afternoon.
The rough plan
First things first — find some Facial-Recognition-as-a-Service. The brilliant Kairos offers a free and disconcerting API which provides a barrage of information once fed a url of an image:
Returned JSON data (some image info removed for length)
{"images":
[{"faces":
[{"attributes":
{"age":40,
"asian":0.00539,
"black":0.00032,
"gender":{
"femaleConfidence":0.00002
"maleConfidence":0.99998,
"type":"M"},
"glasses":"None",
"hispanic":0.04204,
"lips":"Apart",
"other":0.01249,
"white":0.93976}
}]
}]
}
Kairos also allows labeled images to be enrolled into a database, and when unlabelled images are passed to the API it will return the label of most similar image in the database.
While this is designed to identify people who have previously been enrolled, it can perform the art recognition task pretty respectfully (when its similarity threshold is set low enough).
To find suitable images to enrol I mined the very service I am trying to imitate, scraping the urls of artworks from the Google Arts and Culture website. If I were to invest more than four hours into this I may take the time to label each artwork with a unique id tied to a database of images, artwork names and artists to provide a comprehensive and effortless user experience.
But I’m not.
So the labels for each artwork is simply its url, so the user can be redirected to their matched artwork.
from flask import Flask, redirect
#Some function here, get a suitable image url as label
return redirect(label)
Server and hosting
Flask offers a simple option to serve a webpage to a user while using python to handle the image IO and processing. Not wanting to spend too long wrestling with hosting, I simply ran the site locally.
Kairos requires a public image url as its input, so to expose my site beyond the local network Ngrok provides a secure tunnel to my localhost.
The next challenge is to process the user input and API response while continuing to serve the site to the user. To handle the asynchronous events the threading library can be used to seperate the two processes.
Finally it’s time to feed the website with selfies from friends and family, as well as some more generic individuals, the results of which are shown below!
| https://hackernoon.com/building-googles-art-and-culture-portrait-matcher-8abc040e9a10 | CC-MAIN-2020-10 | refinedweb | 414 | 58.01 |
This HTML version of Think Stats is provided for convenience, but it
is not the best format for the book. In particular, some of the
symbols are not rendered correctly.
You might prefer to read
the PDF version, or
you can buy a hardcopy from
Amazon.
The thesis of this book is that data combined with practical
methods can answer questions and guide decisions under uncertainty.
As an example, I present a case study based on data from the National Survey of Family Growth (NSFG). In this book we use data from Cycle 6 of the survey, which was conducted from
January 2002 to March 2003.
The goal of the survey is to draw conclusions about a population; the target population of the NSFG is people in the
United States aged 15-44. Ideally surveys would collect data from
every member of the population, but that’s seldom possible. Instead
we collect data from a subset of the population called a sample.
The people who participate in a survey are called respondents.
In general,
cross-sectional studies are meant to be representative, which
means that every member of the target population has an equal chance
of participating.
When working with this kind of data, it is important to be familiar
with the codebook, which documents the design of the study, the
survey questions, and the encoding of the responses. The codebook and
user’s guide for the NSFG data are available from the CDC’s National Center for Health Statistics.
The code and data used in this book are available from the ThinkStats2 repository on GitHub. For information
about downloading and working with this code,
see Section 0.2.
Once you download the code, you should have a file called ThinkStats2/code/nsfg.py. If you run it, it should read a data
file, run some tests, and print a message like, “All tests passed.”
Let’s see what it does. Pregnancy data from Cycle 6 of the NSFG is in
a file called 2002FemPreg.dat.gz; it
is a gzip-compressed data file in plain text (ASCII), with fixed width
columns. Each line in the file is a record that
contains data about one pregnancy.
The format of the file is documented in 2002FemPreg.dct, which
is a Stata dictionary file. Stata is a statistical software system;
a “dictionary” in this context is a list of variable names, types,
and indices that identify where in each line to find each variable.
For example, here are a few lines from 2002FemPreg.dct:
infile dictionary {
_column(1) str12 caseid %12s "RESPONDENT ID NUMBER"
_column(13) byte pregordr %2f "PREGNANCY ORDER (NUMBER)"
}
This dictionary describes two variables: caseid is a 12-character
string that represents the respondent ID; pregorder is a
one-byte integer that indicates which pregnancy this record
describes for this respondent.
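To make the fixed-width layout concrete, here is a minimal sketch of how these two fields could be read with pandas.read_fwf. This is not the book's ReadFixedWidth implementation, and the two sample records are fabricated; only the column positions come from the dictionary above.

```python
import pandas as pd
from io import StringIO

# Two fabricated records laid out per the dictionary:
# columns 1-12 hold caseid, columns 13-14 hold pregordr.
raw = StringIO(
    "       10229 1\n"
    "       10229 2\n"
)

# colspecs are zero-based half-open intervals: [0, 12) and [12, 14).
df = pd.read_fwf(raw, colspecs=[(0, 12), (12, 14)],
                 names=['caseid', 'pregordr'])
print(df)
```

Passing names tells read_fwf there is no header row, so both lines are parsed as data.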
The code you downloaded includes thinkstats2.py, which is a Python
module
that contains many classes and functions used in this book,
including functions that read the Stata dictionary and
the NSFG data file. Here’s how they are used in nsfg.py:
def ReadFemPreg(dct_file='2002FemPreg.dct',
dat_file='2002FemPreg.dat.gz'):
dct = thinkstats2.ReadStataDct(dct_file)
df = dct.ReadFixedWidth(dat_file, compression='gzip')
CleanFemPreg(df)
return df
ReadStataDct takes the name of the dictionary file
and returns dct, a FixedWidthVariables object that contains the
information from the dictionary file. dct provides ReadFixedWidth, which reads the data file.
The result of ReadFixedWidth is a DataFrame, which is the
fundamental data structure provided by pandas, which is a Python
data and statistics package we’ll use throughout this book.
A DataFrame contains a
row for each record, in this case one row per pregnancy, and a column
for each variable.
In addition to the data, a DataFrame also contains the variable
names and their types, and it provides methods for accessing and modifying
the data.
If you print df you get a truncated view of the rows and
columns, and the shape of the DataFrame, which is 13593
rows/records and 244 columns/variables.
>>> import nsfg
>>> df = nsfg.ReadFemPreg()
>>> df
...
[13593 rows x 244 columns]
The DataFrame is too big to display, so the output is truncated. The
last line reports the number of rows and columns.
The attribute columns returns a sequence of column
names as Unicode strings:
>>> df.columns
Index([u'caseid', u'pregordr', u'howpreg_n', u'howpreg_p', ... ])
The result is an Index, which is another pandas data structure.
We’ll learn more about Index later, but for
now we’ll treat it like a list:
>>> df.columns[1]
'pregordr'
To access a column from a DataFrame, you can use the column
name as a key:
>>> pregordr = df['pregordr']
>>> type(pregordr)
<class 'pandas.core.series.Series'>
The result is a Series, yet another pandas data structure.
A Series is like a Python list with some additional features.
When you print a Series, you get the indices and the
corresponding values:
>>> pregordr
0 1
1 2
2 1
3 2
...
13590 3
13591 4
13592 5
Name: pregordr, Length: 13593, dtype: int64
In this example the indices are integers from 0 to 13592, but in
general they can be any sortable type. The elements
are also integers, but they can be any type.
The last line includes the variable name, Series length, and data type;
int64 is one of the types provided by NumPy. If you run
this example on a 32-bit machine you might see int32.
You can access the elements of a Series using integer indices
and slices:
>>> pregordr[0]
1
>>> pregordr[2:5]
2 1
3 2
4 3
Name: pregordr, dtype: int64
The result of the index operator is an int64; the
result of the slice is another Series.
You can also access the columns of a DataFrame using dot notation:
>>> pregordr = df.pregordr
This notation only works if the column name is a valid Python
identifier, so it has to begin with a letter, can’t contain spaces, etc.
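A quick illustration with a toy DataFrame: a column whose name contains a space can only be reached with bracket syntax.

```python
import pandas as pd

df = pd.DataFrame({'pregordr': [1, 2], 'birth weight': [7.5, 8.0]})

print(df.pregordr)         # dot notation works: the name is a valid identifier
print(df['birth weight'])  # bracket syntax required: the name has a space
```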
We have already seen two variables in the NSFG dataset, caseid
and pregordr, and we have seen that there are 244 variables in
total. For the explorations in this book, I use the following
variables:
caseid, pregordr, outcome, prglngth, agepreg, birthwgt_lb, birthwgt_oz, and totalwgt_lb, all of which appear in the examples that follow.
If you read the codebook carefully, you will see that many of the
variables are recodes, which means that they are not part of the
raw data collected by the survey; they are calculated using
the raw data.
For example, prglngth is a recode computed from the raw gestation variables. In general it is a good idea to use recodes
when they are available, unless there is a compelling reason to
process the raw data yourself.
When you import data like this, you often have to check for errors,
deal with special values, convert data into different formats, and
perform calculations. These operations are called data cleaning.
nsfg.py includes CleanFemPreg, a function that cleans
the variables I am planning to use.
def CleanFemPreg(df):
    # mother's age is encoded in centiyears; convert to years
    df.agepreg /= 100.0

    # replace "not ascertained", "refused", "don't know" codes with NaN
    na_vals = [97, 98, 99]
    df.birthwgt_lb.replace(na_vals, np.nan, inplace=True)
    df.birthwgt_oz.replace(na_vals, np.nan, inplace=True)

    # combine pounds and ounces into a single weight in pounds
    df['totalwgt_lb'] = df.birthwgt_lb + df.birthwgt_oz / 16.0
agepreg contains the mother’s age at the end of the
pregnancy. In the data file, agepreg is encoded as an integer
number of centiyears. So the first line divides each element
of agepreg by 100, yielding a floating-point value in
years.
birthwgt_lb and birthwgt_oz contain the weight of the
baby, in pounds and ounces, for pregnancies that end in live birth.
In addition, these variables use several special codes:
97 NOT ASCERTAINED
98 REFUSED
99 DON'T KNOW
Special values encoded as numbers are dangerous because if they
are not handled properly, they can generate bogus results, like
a 99-pound baby. The replace method replaces these values with
np.nan, a special floating-point value that represents “not a
number.” The inplace flag tells replace to modify the
existing Series rather than create a new one.
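A toy sketch of what this replacement does (fabricated values; the real code uses inplace=True to modify the Series directly):

```python
import numpy as np
import pandas as pd

# Toy weights with the NSFG sentinel codes mixed in.
weights = pd.Series([7.0, 99.0, 8.0, 97.0])

# 97 = not ascertained, 98 = refused, 99 = don't know
cleaned = weights.replace([97.0, 98.0, 99.0], np.nan)

print(cleaned)               # the sentinel codes are now NaN
print(cleaned.isna().sum())  # -> 2
```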
As part of the IEEE floating-point standard, all mathematical
operations return nan if either argument is nan:
>>> import numpy as np
>>> np.nan / 100.0
nan
So computations with nan tend to do the right thing, and most
pandas functions handle nan appropriately. But dealing with
missing data will be a recurring issue.
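For example, most pandas reductions skip nan by default, which is one reason this encoding is safer than leaving a sentinel like 99 in the data (a toy illustration):

```python
import numpy as np
import pandas as pd

s = pd.Series([7.0, np.nan, 8.0])

print(s.mean())              # 7.5 -- nan is ignored by default
print(s.mean(skipna=False))  # nan -- propagate missing values instead
```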
The last line of CleanFemPreg creates a new
column totalwgt_lb that combines pounds and ounces into
a single quantity, in pounds.
One important note: when you add a new column to a DataFrame, you
must use dictionary syntax, like this:
# CORRECT
df['totalwgt_lb'] = df.birthwgt_lb + df.birthwgt_oz / 16.0
Not dot notation, like this:
# WRONG!
df.totalwgt_lb = df.birthwgt_lb + df.birthwgt_oz / 16.0
The version with dot notation adds an attribute to the DataFrame
object, but that attribute is not treated as a new column.
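A minimal demonstration of the pitfall, using a toy DataFrame (pandas also emits a warning when you do this):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2]})

df.b = df.a * 2     # WRONG: attaches a plain attribute to the object
df['c'] = df.a * 2  # CORRECT: creates a real column

print('b' in df.columns)  # False
print('c' in df.columns)  # True
```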
When data is exported from one software environment and imported into
another, errors might be introduced. And when you are
getting familiar with a new dataset, you might interpret data
incorrectly or introduce other misunderstandings. If you take
time to validate the data, you can save time later and avoid errors.
One way to validate data is to compute basic statistics and compare
them with published results. For example, the NSFG codebook includes
tables that summarize each variable. Here is the table for
outcome, which encodes the outcome of each pregnancy:
value label Total
1 LIVE BIRTH 9148
2 INDUCED ABORTION 1862
3 STILLBIRTH 120
4 MISCARRIAGE 1921
5 ECTOPIC PREGNANCY 190
6 CURRENT PREGNANCY 352
The Series class provides a method, value_counts, that
counts the number of times each value appears. If we select the outcome Series from the DataFrame, we can use value_counts
to compare with the published data:
>>> df.outcome.value_counts(sort=False)
1 9148
2 1862
3 120
4 1921
5 190
6 352
The result of value_counts is a Series;
sort=False doesn’t sort the Series by values, so them
appear in order.
Comparing the results with the published table, it looks like the
values in outcome are correct. Similarly, here is the published
table for birthwgt_lb:
value label Total
. INAPPLICABLE 4449
0-5 UNDER 6 POUNDS 1125
6 6 POUNDS 2223
7 7 POUNDS 3049
8 8 POUNDS 1889
9-95 9 POUNDS OR MORE 799
And here are the value counts:
>>> df.birthwgt_lb.value_counts(sort=False)
0 8
1 40
2 53
3 98
4 229
5 697
6 2223
7 3049
8 1889
9 623
10 132
11 26
12 10
13 3
14 3
15 1
51 1
The counts for 6, 7, and 8 pounds check out, and if you add
up the counts for 0-5 and 9-95, they check out, too. But
if you look more closely, you will notice one value that has to be
an error, a 51 pound baby!
To deal with this error, I added a line to CleanFemPreg:
df.loc[df.birthwgt_lb > 20, 'birthwgt_lb'] = np.nan
This statement replaces invalid values with np.nan.
The attribute loc provides several ways to select
rows and columns from a DataFrame. In this example, the
first expression in brackets is the row indexer; the second
expression selects the column.
The expression df.birthwgt_lb > 20 yields a Series of type
bool, where True indicates that the condition is true. When a
boolean Series is used as an index, it selects only the elements that
satisfy the condition.
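Here is the same pattern on a toy DataFrame, showing the boolean mask and the loc assignment together:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'birthwgt_lb': [7.0, 51.0, 8.0]})

mask = df.birthwgt_lb > 20           # boolean Series: [False, True, False]
df.loc[mask, 'birthwgt_lb'] = np.nan # replace only the selected rows

print(df.birthwgt_lb.tolist())       # [7.0, nan, 8.0]
```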
To work with data effectively, you have to think on two levels at the
same time: the level of statistics and the level of context.
As an example, let’s look at the sequence of outcomes for a few
respondents. Because of the way the data files are organized, we have
to do some processing to collect the pregnancy data for each respondent.
Here’s a function that does that:
def MakePregMap(df):
    d = defaultdict(list)  # requires: from collections import defaultdict
    for index, caseid in df.caseid.iteritems():  # .items() in newer pandas
        d[caseid].append(index)
    return d
df is the DataFrame with pregnancy data. The iteritems
method enumerates the index (row number)
and caseid for each pregnancy.
d is a dictionary that maps from each case ID to a list of
indices. If you are not familiar with defaultdict, it is in
the Python collections module.
Using d, we can look up a respondent and get the
indices of that respondent’s pregnancies.
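As an aside, in recent versions of pandas the same mapping can be built without an explicit loop by using groupby; this sketch uses toy data rather than the NSFG file:

```python
import pandas as pd

# Toy stand-in: caseid 1 has three pregnancies, caseid 2 has two
df = pd.DataFrame({'caseid': [1, 1, 2, 1, 2]})

# groups maps each caseid to the index labels of that respondent's rows
preg_map = {caseid: list(rows)
            for caseid, rows in df.groupby('caseid').groups.items()}
# preg_map[1] == [0, 1, 3]
```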
This example looks up one respondent and prints a list of outcomes
for her pregnancies:
>>> caseid = 10229
>>> preg_map = nsfg.MakePregMap(df)
>>> indices = preg_map[caseid]
>>> df.outcome[indices].values
[4 4 4 4 4 4 1]
indices is the list of indices for pregnancies corresponding
to respondent 10229.
Using this list as an index into df.outcome selects the
indicated rows and yields a Series. Instead of printing the
whole Series, I selected the values attribute, which is
a NumPy array.
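On a toy Series, the difference is easy to see; values drops the index and leaves just the underlying NumPy array:

```python
import pandas as pd

s = pd.Series([4, 4, 1], index=[10, 11, 12])
print(s.values)  # [4 4 1] -- printed without the index, NumPy-style
```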
The outcome code 1 indicates a live birth. Code 4 indicates
a miscarriage; that is, a pregnancy that ended spontaneously, usually
with no known medical cause.
Statistically this respondent is not unusual. Miscarriages are common
and there are other respondents who reported as many or more.
But remembering the context, this data tells the story of a woman who
was pregnant six times, each time ending in miscarriage. Her seventh
and most recent pregnancy ended in a live birth. If we consider this
data with empathy, it is natural to be moved by the story it tells.
Each record in the NSFG dataset represents a person who provided
honest answers to many personal and difficult questions. We can use
this data to answer statistical questions about family life,
reproduction, and health. At the same time, we have an obligation
to consider the people represented by the data, and to afford them
respect and gratitude.
The notebook for this exercise is chap01ex.ipynb. To launch the
IPython notebook server, run:

$ ipython notebook &

If IPython is installed, it should launch a server that runs in the
background and open a browser to view the notebook. If you are not
familiar with IPython, I suggest you start with its documentation.

It should open a new browser window, but if not, the startup
message provides a URL you can load in a browser. The new window
should list the notebooks in the repository.
Open chap01ex.ipynb. Some cells are already filled in, and
you should execute them. Other cells give you instructions for
exercises you should try.
A solution to this exercise is in chap01soln.ipynb.

The next exercise uses chap01ex.py as a starting place.
The variable pregnum is a recode that indicates how many
times each respondent has been pregnant. Print the value counts
for this variable and compare them to the published results in
the NSFG codebook.
You can also cross-validate the respondent and pregnancy files by
comparing pregnum for each respondent with the number of
records in the pregnancy file.
You can use nsfg.MakePregMap to make a dictionary that maps
from each caseid to a list of indices into the pregnancy
DataFrame.
A solution to this exercise is in chap01soln.py.

Good places to start include government data portals and, in the
United Kingdom, the national data portal. Two of my favorite data
sets are the General Social Survey and the European Social Survey.
If it seems like someone has already | http://greenteapress.com/thinkstats2/html/thinkstats2002.html | CC-MAIN-2017-47 | refinedweb | 2,486 | 63.19 |
SharePoint Client Object Model: Step One
I almost didn't make it out alive. I followed the instructions in every piece of sample code and every forum post by someone who had no idea why their client OM code wasn't working, and my code still wouldn't get past the page load. I kept getting "'Type' is undefined" errors when sp.core.js tried to register the SP namespace.
As it turns out, you need the help of the default master page (or one like it) to get the object model loaded. Once I told my sample page to use the default master and modified everything accordingly, it hooked up and ran just fine.
Now I can finally get some work done. | http://weblogs.asp.net/peterbrunone/sharepoint-client-object-model-step-one | CC-MAIN-2015-27 | refinedweb | 123 | 71.24 |
Java Notes: Style and Correctness Checkers
PMD
Description: PMD checks...
FindBugs
Looks for bugs in Java code. Open-source, free....
URL: findbugs.sourceforge.net
Java Coding Standard Checker
From Cafe
I agree, this wrapper is a good step forward. It's totally fine to continue
on that path because it is obviously better and makes it easy to switch to
autodetection anytime later by simply adding the annotation. Sorry if I got
a bit passionate about that, but as you mention I also get tired of adding
things in multiple places, and the annotations have worked well in the API
and provide a good model to emulate for consistency.
I can't share code, because these extensions to LibvirtComputingResource
that I've provided for other companies have not been open sourced. I can
speak more generically though about methods.
To answer question "a", reflection allows you to do something like:
Reflections reflections = new
Reflections("com.cloud.hypervisor.kvm.resource.wrapper");
Set<Class<? extends CommandWrapper>> wrappers =
reflections.getSubTypesOf(CommandWrapper.class);
So here in "new Reflections" we are automatically filtering for just the
wrappers that would apply to the KVM plugin.
Then to finish it off, you iterate through the wrappers and do:
ResourceWrapper annotation = wrapper.getAnnotation(ResourceWrapper.class);
citrixCommands.put(annotation.handles(), wrapper.newInstance());
Sorry, I guess that's four lines, plus the relevant for loop. And probably
a null check or something for the annotation. You also have to add the
annotation class itself, and add a line for the annotation in each wrapper,
but in the end when we add new Commands, we won't have to touch anything
but the new class that handles the command.
public @interface ResourceWrapper {
Class<? extends Command> handles();
}
There's an example of something similar to this in
KVMStoragePoolManager.java (annotation is StoragePoolInfo.java). This
example has actually been adapted from that. Also to a lesser extent in the
API server, but it is spread across a bunch of classes.
On Thu, Apr 30, 2015 at 10:41 PM, Wilder Rodrigues <
WRodrigues@schubergphilis.com> wrote:
> Hi Marcus,
>
> Thanks for the email… I’m always in for improvements. But why can’t you
> share the code?
>
> Few points below:
>
> 1. I added an subclassing example of LibvirtComputingResource because
> you mentioned it in a previous email:
>
> On 23 Apr 2015, at 17:26, Marcus <shadowsor@gmail.com> wrote:
>
>
> I mentioned the reflection model because that's how I tend to handle
> the commands when subclassing LibvirtComputingResource.
>
>
> 2. Current situation with LibvirtComputingResource on Master is:
>
> a. 67 IFs
> b. 67 private/protected methods that are used only there
> c. If a new Command is added it means we will have a new IF and a new
> private method
> e. Maintenance is hell, test is close to zero and code quality is below
> expectations
>
> That being said, the main idea with the refactor is to change structure
> only, not behaviour. So what I’m doing is to simply move the code out the
> LibvirtCompRes and write tests for it, keeping the behaviour the same - to
> be done in a next phase.
> If you look at the changes you will see that some wrappers are already
> 100% covered. However, some others have 4% or 8% (not that much though). I
> would like to refactor that as well, but that could change behaviour
> (mentioned above) which I don’t want to touch now.
>
> 3. With the new situation:
>
> a. No IFs
> b. All methods wrapped by other classes (command wrappers) - loosely
> coupled, easier to test and maintain
> c. If a new Command is added we would have to add a command wrapper and
> 1 line in the request wrapper implementation ( I know, it hurts you a bit)
> - but please bear with me for the good news.
>
> 4. the warnings are due to that:
> Hashtable<Class<? extends Command>, CommandWrapper>()
>
> No big deal.
>
> As I understood from your first paragraph we would have to annotated
> the commands classes, right? I mean, all of them.
>
> That’s something I wouldn’t do in this phase, to be honest. It might
> seem harmless to do, but I like to break things down a bit and have more
> isolation in my changes.
>
> What’s next: I will finish the refactor with the request wrapper as it
> is. For me it is no problem do add the lines now and remove them in 1 week.
> Most of the work is concentrated in the tests, which I’m trying as hard as
> I can to get them in the best way possible. Once it’s done and pushed to
> master, I will analyse what we would need to apply the annotation.
>
> But before I go to bring the kids to school, just one question:
>
> a. The “handle” value, in the annotation, would have the wrapper class
> that would be used for that command, right? Now let’s get 1 command as
> example: CheckHealthCommand. Its wrapper implementation differs per
> hypervisor (just like all the other wrapper commands do). I’m not taking
> the time to really think about it now, but how would we annotated the
> different wrappers per command?
>
> Thanks again for your time.
>
> Cheers,
> Wilder
>
>
> On 30 Apr 2015, at 22:52, Marcus <shadowsor@gmail.com> wrote:
>
> Ok. I wish I could share some code, because it isn't really as big of
> a deal as it sounds from your reasoning. It is literally just 3 lines
> on startup that fetch anything with the '@AgentExecutor' annotation
> and stores it in a hash whose key is the value from @AgentExecutor's
> 'handles' property. Then when a *Command comes it it is passed to the
> appropriate Executor class.
>
> Looking at CitrixRequestWrapper, the 3 lines I mention are almost
> identical in function to your init method, just that it uses the
> annotation to find all of the commands, rather than hardcoding them.
> We use the same annotation design for the api side of the code on the
> management server, which allows the api commands to be easier to write
> and self-contained (you don't have to update other code to add a new
> api call). It makes things easier for novice developers.
>
> This implementation is no less typesafe than the previous design (the
> one with all of the instanceof). It didn't require any casting or
> warning suppression, either, as the wrapper does.
>
> Extending LibvirtComputingResource is not ideal, and doesn't work if
> multiple third parties are involved. Granted, there hasn't been a lot
> of demand for this, nevertheless it's particularly important for KVM,
> where the Command classes are executed on the hypervisor it's not
> really feasible to just dump the code in your management server-side
> plugin like some plugins do.
>
> In reviewing the code, the two implementations are really very close.
> If you just updated init to fetch the wrappers based on either an
> annotation or the class they extend, or something along those lines so
> this method doesn't have to be edited every time a command is added,
> that would be more or less the same thing. The the KVM agent would be
> pluggable like the management server side is.
>
> On Thu, Apr 30, 2015 at 12:55 PM, Wilder Rodrigues
> <WRodrigues@schubergphilis.com> wrote:
>
> Hi Marcus,
>
> Apologies for taking so much time to reply to your email, but was, and
> still
> am, quite busy. :)
>
> I would only use reflection if that was the only way to do it. The use of
> reflection usually makes the code more complex, which is not good when we
> have java developers in all different levels (from jr. do sr) working with
> cloudstack. It also makes us lose the type safety, which might also harm
> the
> exception handling if not done well. In addition, if we need to refactor
> something, the IDE is no longer going to do few things because the
> refection
> code cannot be found.
>
> If someone will need to extend the LibvirtComputingResource that would be
> no
> problem with the approach I’m using. The CitrixResourceBase also has quite
> few sub-classes and it works just fine.
>
> I will document on the wiki page how it should be done when sub-classing
> the
> LibvirtComputingResource class.
>
> In a quick note/snippet, one would do:
>
> public class EkhoComputingResource extends LibvirtComputingResource {
>
> @Override
> public Answer executeRequest(final Command cmd) {
>
> final LibvirtRequestWrapper wrapper =
> LibvirtRequestWrapper.getInstance();
> try {
> return wrapper.execute(cmd, this);
> } catch (final Exception e) {
> return Answer.createUnsupportedCommandAnswer(cmd);
> }
> }
> }
>
>
> In the flyweight where I keep the wrapper we could have ():
>
> final Hashtable<Class<? extends Command>, CommandWrapper>
> linbvirtCommands = new Hashtable<Class<? extends Command>,
> CommandWrapper>();
> linbvirtCommands.put(StopCommand.class, new
> LibvirtStopCommandWrapper());
>
> final Hashtable<Class<? extends Command>, CommandWrapper>
> ekhoCommands = new Hashtable<Class<? extends Command>, CommandWrapper>();
> linbvirtCommands.put(StopCommand.class, new
> EkhoStopCommandWrapper());
>
> resources.put(LibvirtComputingResource.class, linbvirtCommands);
> resources.put(EkhoComputingResource.class, ekhoCommands);
>
> But that is needed only if the StopCommand has a different behaviour for
> the
> EkhoComputingResource.
>
> Once a better version of the documentation is on the wiki, I will let you
> know.
>
> On other matters, I’m also adding unit tests for all the changes. We
> already
> went from 4% to 13.6% coverage in the KVM hypervisor plugin. The code I
> already refactored has 56% of coverage.
>
> You can see all the commits here:
>
>
> Cheers,
> Wilder
>
> On 23 Apr 2015, at 17:26, Marcus <shadowsor@gmail.com> wrote:
>
> Great to see someone working on it. What sorts of roadblocks came out
> of reflection? How does the wrapper design solve the pluggability
> issue? This is pretty important to me, since I've worked with several
> companies now that end up subclassing LibvirtComputingResource in
> order to handle their own Commands on the hypervisor from their
> server-side plugins, and changing their 'resource' to that in
> agent.properties. Since the main agent class needs to be set at agent
> join, this is harder to manage than it should be.
>
> I mentioned the reflection model because that's how I tend to handle
> the commands when subclassing LibvirtComputingResource. I haven't had
> any problems with it, but then again I haven't tried to refactor 5500
> lines into that model, either.
>
> On Thu, Apr 23, 2015 at 1:17 AM, Wilder Rodrigues
> <WRodrigues@schubergphilis.com> wrote:
>
> Hi Marcus,
>
> I like the annotation idea, but reflection is trick because it hides some
> information about the code.
>
> Please, have a look at the CitrixResourceBase after the refactor I did. It
> became quite smaller and test coverage was improved.
>
> URL:
>
>
>
> The same pattern is being applied to the Libvirt stuff. The coverage on the KVM
> hypervisor plugin already went from 4 to 10.5% after refactoring 6 commands.
>
> Cheers,
> Wilder
>
> On 22 Apr 2015, at 23:06, Marcus <shadowsor@gmail.com> wrote:
>
> Kind of a tangent, but I'd actually like to see some work done to
> clean up LibvirtComputing resource. One model I've prototyped that
> seems to work is to create an annotation, such as
> 'KVMCommandExecutor', with a 'handles' property. With this annotation,
> you implement a class that handles, e.g. StartCommand, etc. Then in
> LibvirtComputingResource, the 'configure' method fetches all of these
> executors via reflection and stores them in an object. Then, instead
> of having all of the 'instanceof' lines in LibvirtComputingResource,
> the executeRequest method fetches the executor that handles the
> incoming command and runs it.
>
> I think this would break up LibvirtComputingResource into smaller,
> more testable and manageable chunks, and force things like config and
> utility methods to move to a more sane location, as well. As a bonus,
> this model makes things pluggable. Someone could ship KVM plugin code
> containing standalone command executors that are discovered at runtime
> for things they need to run at the hypervisor level.
>
> On Tue, Apr 21, 2015 at 6:27 AM, Wilder Rodrigues
> <WRodrigues@schubergphilis.com> wrote:
>
> Hi all,
>
> Yesterday I started working on the LibvirtComputingResource class in order
> to apply the same patterns I used in the CitrixResourceBase + add more unit
> tests to it After 10 hours of work I got a bit stuck with the 1st test,
> which would cover the refactored LibvirtStopCommandWrapper. Why did I get
> stuck? The class used a few static methods that call native libraries,
> which
> I would like to mock. However, when writing the tests I faced problems with
> the current Mockito/PowerMock we are using: they are simply not enough for
> the task.
>
> What did I do then? I added a dependency to EasyMock and PowerMock-EasyMock
> API. It worked almost fine, but I had to add a “-noverify” to both my
> Eclipse Runtime configuration and also to the
> cloud-plugin-hypervisor-kvm/pom.xml file. I agree that’s not nice, but was
> my first attempt of getting it to work. After trying to first full build I
> faced more problems related to ClassDefNotFoundExpcetion which were
> complaining about Mockito classes. I then found out that adding the
> PowerMockRunner to all the tests classes was going to be a heavy burden and
> would also mess up future changes (e.g. the -noverify flag was removed from
> Java 8, thus adding it now would be a problem soon).
>
> Now that the first 2 paragraphs explain a bit about the problem, let’s get
> to the solution: Java 8
>
> The VerifyError that I was getting was due to the use of the latest
> EasyMock
> release (3.3.1). I tried to downgrade it to 3.1/3.2 but it also did not
> work. My decision: do not refactor if the proper tests cannot be added.
> This
> left me with one action: migrate to Java 8.
>
> There were mentions about Java 8 in february[1] and now I will put some
> energy in making it happen.
>
> What is your opinion on it?
>
> Thanks in advance.
>
> Cheers,
> Wilder
>
>
>
>
>
>
>
> | http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201505.mbox/%3CCALFpzo7OdiP3GJFcH2uJra-sq=Yd+5cqAAJNSi-5etn3dq05vg@mail.gmail.com%3E | CC-MAIN-2019-35 | refinedweb | 2,251 | 64.3 |
Homomorphic encryption is a pretty interesting thing. It allows you to do calculations on encrypted data such that when you decrypt the results, it’s as if you did the calculations on the unencrypted data. This allows computation to happen without the person doing the computation knowing what the data actually is!
Brief History
For a long time, cryptographers wondered if fully homomorphic encryption was even possible. There were various encryption algorithms that could perform SOME operations homomorphically (RSA can do multiplication for instance!), but there weren’t any that could do ALL operations. In other words, you couldn’t execute arbitrary computations.
Those types of algorithms are called “Partially Homomorphic Encryption” or PHE.
Another problem standing in the way of fully homomorphic encryption was that many algorithms would only have a limited count of operations they could perform before error would accumulate and they would start giving incorrect answers. In essence they were limited to evaluating low degree polynomials.
Those types of algorithms are called “Somewhat Homomorphic Encryption” or SWHE.
In contrast, Fully Homomorphic Encryption (FHE) can perform an unlimited number of homomorphic operations, and it can perform any operation homomorphically. It is unbounded in both ways.
Amazingly, in 2009 Craig Gentry figured out the first fully homomorphic encryption scheme! With his setup, you can calculate both XOR and AND on encrypted bits, which makes it Turing complete. It is also able to keep errors from becoming too large by using an ingenious bootstrapping technique to decrease accumulated error. Here’s a link to his PhD thesis: A Fully Homomorphic Encryption Scheme.
Unfortunately, the current implementations of secure FHE take too much computational power to be practical in most situations – like 30 minutes to calculate an AND between 2 bits!
In this post I’m going to show you a super simple HE implementation that will be very easy to understand. It won’t be fully homomorphic, but it will be “leveled” (or, somewhat homomorphic), meaning it is Turing complete, but the number of calculations you can perform is limited due to error creeping in. It also won’t be secure – due to making it easy to understand – but it will be lightning fast.
This will be a symmetric key algorithm, but as we’ll explore in future posts, it can also be used for public key algorithms.
Why Is HE Useful?
One thing you could do with HE is store your financial transactions encrypted on a server. The server could run queries and calculations on your financial data and send back the results. You could then unencrypt the result and see what the values are, even though the server itself – which generated the values – has no idea what the numbers actually are.
Another use could be in games. Whether you are playing a first person shooter, or a real time strategy game, many different types of games send information about each player to every other player in the game. Hashes of game state can be used to make sure that everyone is in agreement about calculations to prevent a player from cheating by WRITING to a value they shouldn’t be writing to (or, at least you can detect when they do, and use majority rule to boot them out of the game), but how do you stop a player from READING a value they shouldn’t be reading?
Using HE, you could encrypt the data you need to send to players that they shouldn’t be able to read. With this, they could still do game play logic calculations on the data, and calculate hashes of the encrypted results to ensure that all players were in agreement, but with HE, they wouldn’t gain knowledge of the data they were working with.
In other words, player A could verify that player B’s state is correct and they haven’t cheated, without player A getting details about player B’s state.
In theory this could eliminate or at least help combat things like wall hacks and other “data read” based cheats. In practice there would be some complications to work out, even if it wasn’t crazy slow to calculate, but the fact that there is a path to addressing these issues is pretty exciting! People are working on improving speed, and games don’t need the same level of security that other usage cases do.
How To Do It
Here are the details of this super simple leveled homomorphic symmetric key algorithm.
By the way, all the percent signs below mean “modulus” which is just the remainder of a division. 25 % 4 = 1 for instance, because 25/4 = 6 with a remainder of 1. That remainder of 1 is what we get when we take the modulus. A term you’ll see more often if reading through this stuff on your own will be “residue”. Don’t let that word scare you, it is just another name for the remainder.
Making A Secret Key
To make a key, generate an odd random number between 2^(N-1) and 2^N. In other words, it will be N random bits, except the highest and lowest bit will be set to 1. N is the size of your secret key. Larger keys are more secure, and allow more computations to be done in a row, but they also take more storage space. If you are using a fixed size int – like say a uint32 – a larger key will make you run out of those 32 bits sooner.
key = RandomNumber(0, (1 << N) - 1) | 1 | (1 << (N - 1));
Encrypt
To encrypt a bit, the encrypted value is just the key plus the value of the unencrypted bit (0 or 1).
encryptedBit = key + (value ? 1 : 0);
Decrypt
To decrypt a bit, you take the encrypted bit modulo the key, and then modulo 2.
decryptedBit = (encryptedBit % key) % 2;
XOR
To do an XOR of two encrypted bits, you just add the two values together.
xorResult = encryptedBit1 + encryptedBit2;
AND
To do an AND of two encrypted bits, you just multiply the two values together.
andResult = encryptedBit1 * encryptedBit2;
Example
Let’s run through an example to see this in action.
We’ll use a 4 bit key, and say that the key is 13 (1101 in binary).
Let’s encrypt some bits:
Let’s do some logical operations:
Notice how AND is a multiplication while XOR is an addition, and that the result of an AND operation is a larger number than that of an XOR operation. This means that if you are working with a specific sized number (again, such as a uint32), you can do fewer ANDs than XORs before you run out of bits. When you run out of bits and your number has integer overflow, you have hit the ceiling of this leveled HE scheme. That means that ANDs are more expensive than XORs when considering the number of computations you can do.
Ok, time to decrypt our XOR values!
XOR is looking correct, how about AND?
AND is looking good as well. Lastly let’s decrypt the compound operation:
Lookin good!
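To make the walkthrough concrete, here is the same example as a small Python sketch (Python is used here just for brevity; the scheme itself is language agnostic). It uses the same 4 bit key, 13, from the text:

```python
# A tiny sketch of this post's leveled HE scheme, with the example key 13.

def encrypt(key, bit):
    # ciphertext = key + plaintext bit (0 or 1)
    return key + (1 if bit else 0)

def decrypt(key, value):
    # plaintext bit = (ciphertext mod key) mod 2
    return (value % key) % 2

def he_xor(a, b):
    return a + b  # homomorphic XOR is addition

def he_and(a, b):
    return a * b  # homomorphic AND is multiplication

key = 13
zero = encrypt(key, 0)  # 13
one = encrypt(key, 1)   # 14

print(decrypt(key, he_xor(zero, one)))              # 1  (0 xor 1)
print(decrypt(key, he_and(one, one)))               # 1  (1 and 1)
print(decrypt(key, he_and(he_xor(zero, one), one))) # 1  ((0 xor 1) and 1)
```

Running it decrypts all three results correctly, matching the truth tables above.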
Intuition
Let’s get some intuition for why this works…
Key Generation
First up, why is it that the key needs to have its high bit set? Well, on one hand, larger keys are more secure, and allow more room for error accumulation, so allow more operations to be done. On the other hand, this is kind of misleading to say. If you generate ANY random odd integer, there will be a highest bit set to 1 SOMEWHERE. You technically don't need to store the zeros above that. So I guess you could look at it like you are just generating ANY random odd integer, and you could figure out N FROM that value (the position of the highest bit). Thinking about it the way we do though, it lets us specify how many bits we actually want to commit to for the key, which gives us more consistent behavior, upper bound storage space, etc.
Secondly, why does the key need to be odd?
Let’s say that you have two numbers A and B where A represents an encrypted bit and B represents the encryption key. If B is even, then A % B will always have the same parity (whether it’s even or odd) as A. Since we are trying to hide whether our encrypted bit is 0 or 1 (even or odd), that makes it very bad encryption since you can recover the plain text bit by doing encryptedValue % 2. If on the other hand, B is odd, A % B will have the same parity as A only if A / B is even.
This doesn’t really make much of a difference in the scheme in this post, because A / B will always be 1 (since the encrypted bit is the key plus the plain text bit), but in the next scheme it is more important because A / B will be a random number, which means that it will be random with a 50/50 chance whether or not the parity of the encrypted bit matches the parity of the plain text bit. Since it’s an even chance whether it matches or not, that means that an attacker can’t use that information to their advantage.
While it’s true that when generating a random key, there is a 50/50 chance of whether you will get an even or odd key, you can see how we’d be in a situation where 75% of the time the parity of the ciphertext would match the parity of the plaintext if we allowed both even and odd keys.
That would mean that while an attacker couldn’t know for CERTAIN whether an encrypted bit is 1 or 0 based on the cipher text, they can guess with 75% confidence that the unencrypted bit will just be the cipher text % 2, which is no good! So, we are better off sticking with an odd numbered key in this scheme. But again, that won’t really matter until the next post!
XOR as Addition
I know that I’m going to butcher this explanation a bit in the eyes of someone who knows this math stuff better than me. If you are reading this and see that I have indeed done that, please drop me a line or leave a comment and let me know what I’ve missed or could explain better. I suspect there’s something about rings going on here (;
Believe it or not, when you add two numbers together and then take the modulus, you get the same answer as if you did the modulus on the two numbers, added them together, and then took the modulus again.
In other words, adding two numbers can be seen as adding their residue (remainder).
Let me show you an example.
Let’s try another one. I’m picking these numbers “randomly” out of my head 😛
OK makes sense, but who cares about that?
Well, believe it or not, 1 bit addition is the same as XOR! This means that when you add two encrypted numbers together, you also add their residues modulo the key, which in turn adds their values mod 2, preserving the encrypted parity (odd or even-ness).
Check out this 2 bit binary math. Keep in mind that with 1 bit results, you would only keep the right most binary digit. I’m showing two digits to show you that it is in fact binary addition, and that the right most bit is in fact the same as XOR.
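As a quick sanity check, here is a sketch that verifies exhaustively that 1 bit addition (keeping only the low bit) really does match XOR:

```python
# The low bit of a + b matches a XOR b for all four input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == a ^ b
print("1 bit addition matches XOR")
```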
One thing to note before we move on is that since we are doing a modulus against the key, when the remainder gets to be too large it rolls over. When it rolls over, we start getting the wrong answers and have hit our ceiling of how many operations we can do. So, our encrypted value modulo the key divided by the key can be seen as where we are at by percentage towards our error ceiling.
To avoid hitting the problem of error getting too high too quickly and limiting your calculation count too much you can increase the key size. When you do that you’ll then run out of bits in your fixed size integer storage faster. To avoid THAT problem you can use “multi precision math libraries” to allow your integers to use an arbitrary number of bytes. This is what many real crypto algorithms use when they need to deal with very large numbers.
AND as Multiplication
Similar to the above, when you multiply two numbers and take a modulus of the result, it’s the same as if you took the modulus of the two numbers, multiplied that, and then took the modulus of the result.
In other words, when you multiply two numbers, you can think of it as also multiplying their residue (remainder).
Using the first example numbers from above:
And the second:
A bit of a coincidence that they both worked out to 4 this time 😛
Similar to XOR being the same as 1 bit addition, 1 bit multiplication is actually the same as AND, check it out:
Since AND multiplies residue, and XOR adds residue, and residue is what limits our homomorphic instruction count, you can see that AND is a more expensive operation compared to XOR, since it eats into our instruction budget a lot faster.
Error In Action
To see why rolling over is a problem, let’s say that our key is 9 and we want to XOR two encrypted bits 8 and 1, which represent 0 and 1 respectively.
To do an XOR, we add them together: 8 + 1 = 9.
Now, when we decrypt it we do this: (9 % 9) % 2 = 0
That result tells us that 0 XOR 1 is 0, which is incorrect! Our residue got too large and we hit the ceiling of our homomorphic instruction budget.
If the first bit was 6 instead of 8, the result of the XOR would have been 7, and (7 % 9) % 2 comes out to 1. That re-affirms to us that if we are under the error budget, we are good to go, but if our residue gets too large, we will have problems!
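Both cases are easy to check in a short sketch, using the key 9 from this section:

```python
key = 9
bit0 = 8  # represents 0, but its residue (8) is nearly as big as the key
bit1 = 1  # represents 1, residue 1

xor_result = bit0 + bit1          # 9
print((xor_result % key) % 2)     # 0 -- wrong! 0 xor 1 should be 1

# with a smaller residue in the first bit, the same XOR decrypts correctly:
print(((6 + bit1) % key) % 2)     # 1 -- correct
```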
Sample Code
// Note that this encryption scheme is insecure so please don't actually use it
// in production! A false bit with a given key is the same value every time, and
// so is a true bit. Also, the encrypted true bit value will always be the
// encrypted false bit plus 1. Even worse, an encrypted false bit is the key itself!
// This is just for demonstration purposes to see how the basics of homomorphic
// encryption work. The next blog post will increase security.

const size_t c_numKeyBits = 6;

//=================================================================================
uint64 GenerateKey ()
{
    // Generate a random odd integer with the highest of c_numKeyBits bits set.
    return RandomUint64(0, (1 << c_numKeyBits) - 1) | 1 | (1 << (c_numKeyBits - 1));
}

//=================================================================================
bool Decrypt (uint64 key, uint64 value)
{
    return ((value % key) % 2) == 1;
}

//=================================================================================
uint64 Encrypt (uint64 key, bool value)
{
    uint64 ret = key + (value ? 1 : 0);
    Assert(Decrypt(key, ret) == value);
    return ret;
}

//=================================================================================
uint64 XOR (uint64 A, uint64 B)
{
    return A + B;
}

//=================================================================================
uint64 AND (uint64 A, uint64 B)
{
    return A * B;
}

//=================================================================================
int GetErrorPercent (uint64 key, uint64 value)
{
    // Returns what % of maximum error this value has in it. When error >= 100%
    // then we have hit our limit and start getting wrong answers.
    return int(100.0f * float(value % key) / float(key));
}

//=================================================================================
int main (int argc, char **argv)
{
    for (int index = 0; index < 10000; ++index)
    {
        // make a key and encrypt a false and a true bit
        uint64 key = GenerateKey();
        uint64 falseBit = Encrypt(key, false);
        uint64 trueBit = Encrypt(key, true);

        // Verify truth tables for XOR and AND
        Assert(Decrypt(key, XOR(falseBit, falseBit)) == false);
        Assert(Decrypt(key, XOR(falseBit, trueBit )) == true );
        Assert(Decrypt(key, XOR(trueBit , falseBit)) == true );
        Assert(Decrypt(key, XOR(trueBit , trueBit )) == false);

        Assert(Decrypt(key, AND(falseBit, falseBit)) == false);
        Assert(Decrypt(key, AND(falseBit, trueBit )) == false);
        Assert(Decrypt(key, AND(trueBit , falseBit)) == false);
        Assert(Decrypt(key, AND(trueBit , trueBit )) == true );

        // report the key, encrypted bits, and the decrypted value and error
        // percent of each truth table entry, for the first iteration of the loop
        if (index == 0)
        {
            printf("Key 0x%" PRIx64 ", false 0x%" PRIx64 ", true 0x%" PRIx64 "\n", key, falseBit, trueBit);
            printf("  [0 xor 1] = %i (err=%i%%)\n",
                Decrypt(key, XOR(falseBit, trueBit)) ? 1 : 0, GetErrorPercent(key, XOR(falseBit, trueBit)));
            printf("  [1 and 1] = %i (err=%i%%)\n",
                Decrypt(key, AND(trueBit, trueBit)) ? 1 : 0, GetErrorPercent(key, AND(trueBit, trueBit)));
            // (the other truth table rows are reported the same way)
        }

        // Do multi bit addition as an example of using compound circuits to
        // do meaningful work: add two encrypted 5 bit numbers with a ripple
        // carry adder built out of the XOR and AND above.
        const size_t c_numBitsAdded = 5;
        uint64 numberA = RandomUint64(0, (1 << c_numBitsAdded) - 1);
        uint64 numberB = RandomUint64(0, (1 << c_numBitsAdded) - 1);
        std::array<uint64, c_numBitsAdded> numberAEncrypted;
        std::array<uint64, c_numBitsAdded> numberBEncrypted;
        std::array<uint64, c_numBitsAdded> resultEncrypted;
        for (size_t bitIndex = 0; bitIndex < c_numBitsAdded; ++bitIndex)
        {
            numberAEncrypted[bitIndex] = Encrypt(key, (numberA & (uint64(1) << bitIndex)) != 0);
            numberBEncrypted[bitIndex] = Encrypt(key, (numberB & (uint64(1) << bitIndex)) != 0);
        }
        // sum = a xor b xor carry.  carryOut = (a and b) xor (carry and (a xor b))
        uint64 carryBit = Encrypt(key, false);
        for (size_t bitIndex = 0; bitIndex < c_numBitsAdded; ++bitIndex)
        {
            uint64 a = numberAEncrypted[bitIndex];
            uint64 b = numberBEncrypted[bitIndex];
            resultEncrypted[bitIndex] = XOR(XOR(a, b), carryBit);
            carryBit = XOR(AND(a, b), AND(carryBit, XOR(a, b)));
        }
        // decrypt the result bits back into a plain number
        uint64 resultDecrypted = 0;
        for (size_t bitIndex = 0; bitIndex < c_numBitsAdded; ++bitIndex)
        {
            if (Decrypt(key, resultEncrypted[bitIndex]))
                resultDecrypted |= uint64(1) << bitIndex;
        }
        // make sure that the results match, keeping in mind that the 5 bit
        // encrypted addition may have rolled over
        Assert(resultDecrypted == ((numberA + numberB) % (1 << c_numBitsAdded)));
    }
    WaitForEnter();
    return 0;
}
Here is the output of a run of the program:
What if I Need Constants?!
If you are thinking how you might actually use this code in a real setting, you might be thinking to yourself “it’s great to be able to multiply two encrypted numbers together, but what if I just need to multiply them by a constant like 43?”
Well, interestingly, you can literally just use 0 and 1 in this scheme as constants to perform operations against the encrypted bits.
The reason that this works is that you can see 0 and 1 as just very poor encryptions 😛
(0 % KEY) % 2 = 0
(1 % KEY) % 2 = 1
As long as KEY is >= 2, the above is always true, no matter what the key actually is!
So there you go, add your own constants into the calculations all you want. They also happen to have very low residue/error (actually, they have the least amount possible!), so are much more friendly to use, versus having someone provide you with an encrypted table of constants to use in your calculations. It’s also more secure for the person doing the encrypting for them to provide you less encrypted data that you know the plain text for. It limits your (and anyone else’s) ability to do a known plain text attack.
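A sketch of plain 0 and 1 being used as constants against an encrypted bit (key 13 again):

```python
key = 13
enc_one = key + 1  # encrypted 1

# plain constants decrypt correctly under any key >= 2:
print((0 % key) % 2)  # 0
print((1 % key) % 2)  # 1

# AND an encrypted bit with the plain constant 1 (multiplication):
print(((enc_one * 1) % key) % 2)  # 1
# XOR an encrypted bit with the plain constant 1 (addition) flips it:
print(((enc_one + 1) % key) % 2)  # 0
```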
The Other Shoe Drops
You might notice that in our scheme, given the same key, every true bit will be the same value, and every false bit will be the same value. Unfortunately, the true bit is also always the false bit + 1. As an attacker, this means that once you have seen both a true bit and a false bit, you will then have broken the encryption.
Even worse, when you encrypt a false bit, it gives you back the key itself!
We’ll improve that in the next post by adding a few more simple operations to the encryption process.
This leveled HE encryption scheme comes directly from the paper below. If you want to give it a look, what we covered is only part of the first two pages!
Fully Homomorphic Encryption over the Integers
The links below are where I started reading up on HE. They go a different route with FHE that you might find interesting, and also have a lot more commentary about the usage cases of HE:
The Swiss Army Knife of Cryptography
Building the Swiss Army Knife
In the scheme in those links, I haven’t figured out how multiplication is supposed to work yet (or bootstrapping, but one thing at a time). If you figure it out, let me know! | https://blog.demofox.org/2015/09/05/super-simple-symmetric-leveled-homomorphic-encryption-implementation/ | CC-MAIN-2022-27 | refinedweb | 3,490 | 65.35 |
Subject: Re: [boost] expected/result/etc
From: Sam Kellett (samkellett_at_[hidden])
Date: 2016-02-04 09:45:32
On 4 February 2016 at 13:45, Michael Marcin <mike.marcin_at_[hidden]> wrote:
> On 2/4/2016 7:06 AM, Sam Kellett wrote:
>
>>
>>> Ordinarily yes. In this case however, the Boost-lite macros have the
>>> same effect as the Boost ones, so redefining them is mostly safe,
>>> albeit with annoying warnings.
>>>
>>>
>> that's obviously not true seeing as somebody hit this problem seemingly
>> almost immediately
>>
>>
> Eh? That's exactly what happened. Annoying warnings with no other issues.
sorry i don't think that came off how i meant it... what i mean is this is
kinda asking for trouble. what happens if the boost macro changes?
wouldn't something like this be better:
#ifdef BOOST_XXX
#define MY_XXX BOOST_XXX
#else
#define MY_XXX /* boost_lite thing here */
#endif
redefining a macro in somebody else's 'namespace' is akin to opening up the
std namespace to redefine vector.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2016/02/227504.php | CC-MAIN-2021-43 | refinedweb | 184 | 68.36 |
Hi! I have a TestCase where in one request I get data (some number), and I must insert this data into another SOAP request, but with plus one (+1). I write ${path to data...+1} but it doesn't work. How can I do that?
Hi,
Take the following example. I have a request where the response would contain a x = 1:
I am extracting this value into a test case property:
An then I am sending it as a parameter while also incrementing its value by 1:
Here's how the expression works:
// this is a way to tell ReadyApi that what will be inside { ... } will be a groovy script
${=...}
// reading the test case property named 'x'
context.expand('${#TestCase#x}')
// the properties are always strings in ReadyApi, so we need to parse the value to int
context.expand(...).toInteger()
// and finally increment by 1
...+ 1
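Putting those pieces together, and assuming the test case property is named x as in the example above, the whole parameter value would look something like:

```groovy
${= context.expand('${#TestCase#x}').toInteger() + 1}
```

ReadyApi evaluates the inline Groovy, so the request receives the extracted value incremented by 1.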
Is it necessary to use the HTTP Get Data step? Because when I use Get Data, I choose my previous request (${Show limit#Response#declare namespace out=''; //out:response[1]/serviceResponse[1]/card[1]/cardLimit[1]/cardPosLimit[1]}) and if I add +1 at the end, it isn't working.
Well... I don't know if you can do it directly. But extracting the data in a different step as shown in my previous reply should make the test cleaner so I would advise you to do so.
HI @PavloRaketa
Oh - is this related to your other post on the forum?
you just want to concatenate a '+' symbol before the parameterised value - correct?
Can you see my 'Authorization' query parameter in the screenshot above?
I would just add a '+' symbol in front of the parameter value.
So currently the value specified is as follows:
${#[Generate Access Token#Generate Access Token#Properties 1]#access_token}
I would just add a plus symbol in here so it reads
+${#[Generate Access Token#Generate Access Token#Properties 1]#access_token}
Does this help? If not - if you could provide a bit more info - that would help us
Cheers,
Richie
Python: Learn Python in 1 Hour
This is a Python tutorial. Spend 1 hour, and you will have a basic understanding of the language.
Examples on this page are based on Python 2.7.
For python 3, see: Python 3 Basics.
Printing
# -*- coding: utf-8 -*-
# python 2
print 3
print 3, 4
In Python 2, print is a statement, not a function. print 3, 4 prints the values separated by a space.
Strings
Strings are enclosed using 'single' quote or "double" quote.
# -*- coding: utf-8 -*-
# python 2
b = 'rabbit' # single quotes
a = "tiger" # double quotes
print a, b # prints 「tiger rabbit」
You can use \n for linebreak, and \t for tab. See: 2. Lexical analysis — Python v2.7.6 documentation, “String literals”.
Single quote and double quote syntax have the same meaning, except in char escape.
Quoting Raw String 「r"…"」
Add r in front of the string quote symbol for raw string. This way, backslash characters will NOT be interpreted as escapes. (“r” for “raw”)
c = r"this\n and that" print c # prints a single line
Triple Quotes for Multi-Line String
To quote a string of multiple lines, use triple quotes, like this: '''…''' or """…""".
d = """this will be printed in 3 lines""" print d
For detail, see: Python: Quote String
Unicode in String or Source File
If anywhere in your source code file contains Unicode characters, the first or second line should be:
# -*- coding: utf-8 -*-
Any string containing Unicode characters should have “u” prefix, ➢ for example:
u"i ♥ cats".
# -*- coding: utf-8 -*-
# python 2
a = u"I ♥ cats" # string with unicode heart ♥
For detail, see: Python: Unicode Tutorial 🐍.
Substring
string[begin_index:end_index] → returns a substring of string with index begin_index to end_index.
- Index starts at 0.
- The returned string does not include end_index.
- Index can be negative, which counts from the end.
# -*- coding: utf-8 -*-
# python 2
print "01234567"[1:4] # 123
# -*- coding: utf-8 -*-
# python 2
b = "01234567"
print b[1:4] # 123
print b[1:-1] # 123456
print b[-2:-1] # 6
String Length
len(str) → returns the number of chars in string str.
print len("abc") # 3
String Join
Join string:
string + string.
print "abc" + " xyz" # "abc xyz"
String Repeat
String can be repeated using *.
print "ab" * 3 # "ababab"
〔➤see Python: String Methods〕
Arithmetic
# -*- coding: utf-8 -*-
# python 2
print 3 + 4 # 7
print 3 - 4 # -1
print 3 + - 4 # -1
print 3 * 4 # 12
# -*- coding: utf-8 -*-
# python 2

# quotient
# dividing two integers is integer
print 11 / 5 # 2

# quotient with a float number
print 11 / 5. # 2.2

# integer part of quotient
print 11 // 5 # 2
print 11 // 5. # 2.0

# remainder, modulo
print 11 % 5 # 1
print 11 % 5. # 1.0

# quotient and remainder
print divmod(11, 5) # (2, 1)
print divmod(11, 5.) # (2.0, 1.0)
# -*- coding: utf-8 -*-
# python 2

# power, exponential
print 2 ** 3 # 8

# square root
print 3**(1/2.) # 1.73205080757
In Python, power is **. The ^ is used for bitwise xor. 〔➤see Python 3: Operators〕
Warning: in Python 2, 11/5 returns 2, not 2.2. Make one operand a float, like 11/5., to get a float result.
For sine, cosine, log, …, see: 9.2. math — Mathematical functions — Python v2.7.6 documentation
Convert to {int, float, string}
Python doesn't automatically convert between {int, float, string}.
- Convert to int: int(3.2).
- Convert to float: float(3).
- Convert to string, use repr(123) or the string.format(…) method. ➢ for example: "integer {:d}, float {:f}".format(3, 3.2). 〔➤see Python: Format String〕
- You can write a number with a dot after it as a float, like this: 3..
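Here is a quick sketch of these conversions, written with single-argument print(…) so it runs under both Python 2 and 3:

```python
print(int(3.2))    # 3
print(float(3))    # 3.0
s = repr(123)      # the string "123"
print(s + "!")     # 123!
print("integer {:d}, float {:f}".format(3, 3.2))  # integer 3, float 3.200000
print(3.)          # 3.0
```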
Assignment Operators
# -*- coding: utf-8 -*-
# python 2

# add and assign
c = 0
c += 1
print c # 1

# subtract and assign
c = 0
c -= 2
print c # -2

# multiply and assign
c = 2
c *= 3
print c # 6

# exponent and assign
c = 3
c **= 2
print c # 9

# divide and assign
c = 7
c /= 2
print c # 3 Note: not 3.5

# modulus (remainder) and assign
c = 13
c %= 5
print c # 3

# quotient and assign
c = 13
c //= 5
print c # 2
Note: Python doesn't support ++ or --.
Warning: ++i may not generate any error, but it doesn't do anything.
For bitwise and other operators, see: Python 3: Operators.
True & False
False like things, such as 0, empty string, empty array, …, all evaluate to False.
The following evaluate to False:
False. A builtin Boolean type.
None. A builtin type.
0. Zero.
0.0. Zero, float.
"". Empty string.
[]. Empty list.
(). Empty tuple.
{}. Empty dictionary.
set([]). Empty set.
frozenset([]). Empty frozen set.
# -*- coding: utf-8 -*-
# python 2
my_thing = []
if my_thing:
    print "yes"
else:
    print "no"
# prints no
Conditional: if then else
#-*- coding: utf-8 -*-
# python
x = -1
if x < 0:
    print 'neg'
#-*- coding: utf-8 -*-
# python
x = -1
if x < 0:
    print 'negative'
else:
    print '0 or positive'
#-*- coding: utf-8 -*-
# python
# Examples of if
x = -1
if x < 0:
    print 'neg'
elif x == 0:
    print 'zero'
elif x == 1:
    print 'one'
else:
    print 'other'
# the elif can be omitted.
Loop, Iteration
while loop.
#-*- coding: utf-8 -*-
# python
x = 1
while x < 9:
    print x
    x += 1
for loop.
# -*- coding: utf-8 -*-
# python 2
# creates a list from 1 to 3. (does NOT include 4)
a = range(1,4)
for x in a:
    print x
The range(m,n) function gives a list from m to n, not including n.
Python also supports break and continue inside loops.
break→ exit loop.
continue→ skip code and start the next iteration.
#-*- coding: utf-8 -*-
# python
for x in range(1,9):
    print 'yay:', x
    if x == 5:
        break
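For contrast, here is continue, which skips the rest of the body for one iteration instead of exiting the loop (single-argument print so it runs under both Python 2 and 3):

```python
for x in range(1, 6):
    if x == 3:
        continue  # skip 3, keep looping
    print(x)
# prints 1, 2, 4, 5, each on its own line
```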
List
Creating a list.
a = [0, 1, 2, "more", 4, 5, 6]
print a
Counting elements:
a = ["more", 4, 6] print len(a) # prints 3
Getting an element. Use the syntax list[index]. Index starts at 0. Negative index counts from the right. The last element has index -1.
a = ["more", 4, 6] print a[1] # prints 4
Extracting a sequence of elements (aka sublist, slice): list[start_index:end_index].
# -*- coding: utf-8 -*-
a = [ "b0", "b1", "b2", "b3", "b4", "b5", "b6"]
print a[2:4] # → ['b2', 'b3']
WARNING: The extraction does not include the element at the end index. For example, myList[2:4] returns only 2 elements, not 3.
Modify element:
list[index] = new_value
# -*- coding: utf-8 -*-
a = ["b0", "b1", "b2"]
a[2] = "two"
print a # → ['b0', 'b1', 'two']
A slice (continuous sequence) of elements can be changed by assigning to a list directly. The length of the slice need not match the length of new list.
# -*- coding: utf-8 -*-
# python 2
xx = [ "b0", "b1", "b2", "b3", "b4", "b5", "b6"]
xx[0:6] = ["two", "three"]
print xx # ['two', 'three', 'b6']
Nested Lists. Lists can be nested arbitrarily. Append extra bracket to get element of nested list.
# -*- coding: utf-8 -*-
bb = [3, 4, [7, 8]]
print bb # [3, 4, [7, 8]]
print bb[2][1] # 8
List Join. Lists can be joined with plus sign.
b = ["a", "b"] + [7, 6] print b # prints ['a', 'b', 7, 6]
Tuple
Python has a “tuple” type. It's like list, except that the elements cannot be changed, nor can new elements be added.
Syntax for tuple is using round brackets () instead of square brackets. The brackets are optional when not ambiguous, but it's best to always use them.
# -*- coding: utf-8 -*-
# python

# tuple
t1 = (3, 4, 5) # a tuple of 3 elements
print t1 # (3, 4, 5)
print t1[0] # 3

# nested tuple
t2 = ((3,8), (4,9), ("a", 5, 5))
print t2[0] # (3,8)
print t2[0][0] # 3

# a list of tuples
t3 = [(3,8), (4,9), (2,1)]
print t3[0] # (3,8)
print t3[0][0] # 3
〔➤see Python: What's the Difference Between Tuple & List?〕
Python Sequence Types
In Python, {string, list, tuple} are called “sequence types”. Here's example of operations that can be used on sequence type.
# length
ss = [0, 1, 2, 3, 4, 5, 6]
print len(ss) # 7

# ith item
ss = [0, 1, 2, 3, 4, 5, 6]
print ss[0] # 0

# slice of items
ss = [0, 1, 2, 3, 4, 5, 6]
print ss[0:3] # [0, 1, 2]

# slice of items with jump step
ss = [0, 1, 2, 3, 4, 5, 6]
print ss[0:10:2] # [0, 2, 4, 6]

# check if a element exist
ss = [0, 1, 2, 3, 4, 5, 6]
print 3 in ss # True. (or False)

# check if a element does NOT exist
ss = [0, 1, 2, 3, 4, 5, 6]
print 3 not in ss # False

# concatenation
ss = [0, 1]
print ss + ss # [0, 1, 0, 1]

# repeat
ss = [0, 1]
print ss * 2 # [0, 1, 0, 1]

# smallest item
ss = [0, 1, 2, 3, 4, 5, 6]
print min(ss) # 0

# largest item
ss = [0, 1, 2, 3, 4, 5, 6]
print max(ss) # 6

# index of the first occurence
ss = [0, 1, 2, 3, 4, 5, 6]
print ss.index(3) # 3

# total number of occurences
ss = [0, 1, 2, 3, 4, 5, 6]
print ss.count(3) # 1
Dictionary: Key/Value Pairs
A keyed list in Python is called “dictionary” (known as Hash Table or Associative List in other languages). It is an unordered list of pairs; each pair is a key and a value.
#-*- coding: utf-8 -*-
# python

# define a keyed list
aa = {"john":3, "mary":4, "jane":5, "vicky":7}
print "aa is:", aa

# getting value from a key
print "mary is:", aa["mary"]

# add a entry
aa["pretty"] = 99
print "added pretty:", aa

# delete a entry
del aa["vicky"]
print "deleted vicky", aa

# get just the keys
print "just keys", aa.keys()
# to get just values, use “.values()”

# check if a key exists
print "is mary there:", aa.has_key("mary")
Loop Thru List/Dictionary
Here is an example of going thru a list by element.
myList = ['one', 'two', 'three', 'infinity']
for x in myList:
    print x
You can loop thru a list and get both {index, value} of an element. Example:
myList = ['one', 'two', 'three', 'infinity']
for i, v in enumerate(myList):
    print i, v

# 0 one
# 1 two
# 2 three
# 3 infinity
The following construct loops thru a dictionary, each time assigning both keys and values to variables.
myDict = {'john':3, 'mary':4, 'jane':5, 'vicky':7}
for k, v in myDict.iteritems():
    print k, ' is ', v
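iteritems is also Python 2 only; the Python 3 version of the same loop uses items() (and print() as a function):

```python
# Python 3 version of the dictionary loop
myDict = {'john': 3, 'mary': 4, 'jane': 5, 'vicky': 7}

pairs = []
for k, v in myDict.items():
    print(k, 'is', v)
    pairs.append((k, v))
```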
〔➤see Python: Map Function to List〕
Module & Package
A library in Python is called a module. A collection of modules is called a package.
To load a module, call import module_name. Then, to use a function in the module, call module_name.function_name(…).
# -*- coding: utf-8 -*-
# python

# import the standard module named os
import os

# example of using a function
print 'current dir is:', os.getcwd()
# -*- coding: utf-8 -*-
# python
import os

# print all names exported by the module
print dir(os)
〔➤see Python: List Available Modules, Module Search Paths, Loaded Modules〕
Defining a Function
The following is an example of defining a function.
def myFun(x, y):
    """myFun returns x+y."""
    result = x + y
    return result

print myFun(3, 4)  # 7
The string immediately following the first line is the function's documentation.
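This documentation string can be read back at run time thru the function's __doc__ attribute (shown with Python 3's print() function):

```python
def myFun(x, y):
    """myFun returns x+y."""
    return x + y

# the docstring is stored on the function object itself
print(myFun.__doc__)   # myFun returns x+y.
```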
A function can have named optional parameters. If no argument is given, a default value is assumed. Example:
def myFun(x, y=1):
    """myFun returns x+y. Parameter y is optional and defaults to 1."""
    return x + y

print myFun(3, 7)  # 10
print myFun(3)     # 4
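Arguments can also be passed by name, in any order, which is especially handy with optional parameters (shown with Python 3's print() function):

```python
def myFun(x, y=1):
    """myFun returns x+y. Parameter y is optional and defaults to 1."""
    return x + y

# keyword arguments may be given in any order
print(myFun(y=7, x=3))   # 10
print(myFun(x=3))        # 4
```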
〔➤see Python: Function Optional Parameter〕
Classes and Objects
Example:
# -*- coding: utf-8 -*-
# python 2

# in the following, we define a set of data and functions as a class, and name it X1
class X1:  # by convention, class name starts with a capital letter
    """I'm a class extempore! =(^o^)=
    I do random things.
    """

    i = 1  # a piece of data

    def f1(self):  # no args
        return "f1 called"

    def f2(self, a):
        return a + 1

# create an object of the class X1. This is called “instantiating a class”.
x = X1()

# Data or functions defined in a class are called the class's attributes
# or methods. To use them, append a dot and their name after the object's name.
print "value of attribute i is:", x.i  # 1
print "f1 result is:", x.f1()  # "f1 called"
print "f2 result is:", x.f2(3)  # 4

# In the definition of a function inside a class, the first parameter
# “self” is necessary. It is just a side-effect of the language design.

# The first line in the class definition is the class's documentation.
# It can be accessed thru the __doc__ attribute.
print "X1's doc string is:", x.__doc__

# a var inside the class can be changed like this
x.i = 400

# new data can be added to the class
x.j = 4
print x.j  # 4

# A class's method can also be overridden
x.f2 = 333

# the following line will no longer work
# print x.f2(3)
9. Classes — Python v2.7.6 documentation
Writing a Module
Here's a basic example. Save the following in a file named mm.py.
def f3(n):
    return n + 1
To load the file, use import mm. To call the function, use mm.f3. Example:
import mm  # import the module

print mm.f3(5)     # calling its function. prints 6
print mm.__name__  # prints the module's name: “mm”
〔➤see Python: How to Write a Module〕 | http://xahlee.info/perl-python/python_basics.html | CC-MAIN-2016-40 | refinedweb | 2,200 | 72.56 |
One of the joys of developing with .NET is that a significant amount of the groundwork we previously had to code ourselves is now part of the framework. In this article, I show methods for performing HTTP GETs in C# using the WebClient and the StreamReader. I'll use these methods in future articles.
First, let's introduce Stream and StreamReader, which can both be found in the System.IO namespace. The StreamReader implements a TextReader that reads characters (UTF-8 by default) from a stream (the source), which makes it ideal for reading from a URI. Take note that StreamReader is different from Stream, which reads bytes.
For sending data to and from a URI, .NET provides the WebClient class which can be found in the System.Net namespace. Several methods are available to enable us to send and receive files and data, both synchronously and asynchronously. The method we are interested in here is OpenRead(URI), which returns data from the URI as a Stream.
The basic code to read from our URI can be achieved in three lines. We create our WebClient instance, create a Stream from the WebClient, and then read into a StreamReader until the end of the file, like so:
using System.IO;
using System.Net;
String URI = "";
WebClient webClient = new WebClient();
Stream stream = webClient.OpenRead(URI);
StreamReader reader = new StreamReader(stream);
String request = reader.ReadToEnd();
After we have run over this code, the contents of somepage.html will be in the request string variable. This is all great, but we are presuming that the request here is faultless, i.e., that no exceptions are thrown. With exception handling being so easy in .NET, there's no excuse not to take advantage of it... although from experience, it seems not everyone is of the same opinion...
Let's wrap our Stream requests in a try-catch block. We can catch a WebException to clearly identify what has gone wrong, and deal with it nicely.
String request = null;
try
{
    WebClient webClient = new WebClient();
    Stream stream = webClient.OpenRead(URI);
    StreamReader reader = new StreamReader(stream);
    request = reader.ReadToEnd();
}
catch (WebException ex)
{
if (ex.Response is HttpWebResponse)
{
switch (((HttpWebResponse)ex.Response).StatusCode)
{
case HttpStatusCode.NotFound:
request = null;
break;
default:
throw;
}
}
}
We can further optimize the code by wrapping using(...) blocks around the WebClient/Stream, but that's beyond the scope of this article.
If you have a URI which requires authentication, you can add a NetworkCredential to the WebClient reference before you call the OpenRead method, like so:
WebClient webClient = new WebClient();
webClient.Credentials = new NetworkCredential(username, password);
Stream stream = webClient.OpenRead(URI);
A real world example of using the above would be retrieving a list of your latest Tweets from Twitter. You need to pass your username and password to be able to get to the feed. The example download uses this as a demonstration, so you will need to add your own Twitter username and password.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
var uri = "";
var result = "";
try {
using (var webClient = new WebClient()) {
using (var stream = webClient.OpenRead(uri)) {
using (var streamReader = new StreamReader(stream)) {
result = streamReader.ReadToEnd();
}
}
}
} catch (Exception ex) {
var wtf = ex.Message;
}
Hello fellow training hackers.
I do not know if many of you are familiar with Ruby, but since it is a useful scripting language that hasn't been covered too much here on Null Byte, I thought: why not do some how-tos about it now and then.
The idea is to write simple scripts and then explain them step by step, each time learning new Ruby functionality. I will NOT go into the basics of programming in this guide, nor will I explain in detail some of the obvious code.
Intro
First of all, here is the pastebin for the full code before we break it down in detail:
Sorry for the formatting, which got messed up when copy-pasting the SSH connection part.
Disclaimer: The script is so slow at trying passwords that you DO NOT want to use this as a means to bruteforce SSH. This was made purely to show some Ruby code for those wishing to learn. Ruby is not the language of performance, and this script will never be as fast as many of the bruteforcers that are already out there.
Now that we passed the formalities, let's head right into it.
Prerequisites: install both the net-ssh and net-ping gems (the second one is not necessary for the script to work).
To install gems, you might use a command such as this:
gem install net-ssh
You may encounter some dependency problems with the gem install command, but Ruby is quite clear with its error messages and you should be able to figure out by yourselves how to install them (I believe in you :D)
Step 1: Now onto the Code (Warm-Up Part)
These 2 lines are pretty simple: we import the libraries (called gems in Ruby) that we just installed, because we are going to need some functions from them.
Here we can see a classic if statement. In this case, it checks if the number of arguments entered is 3, and if not, gives a nice, friendly message to the user and terminates the program.
As you can probably guess from this check, the command line arguments in Ruby are stored in an array named ARGV, of which we check the length.
Here we want precisely 3 arguments that are the following :
target, user, and wordlist. This is all we need. We have to give the arguments in this precise order because the array comes in the order the arguments are given on the command line. Example run:
ruby ssh-bruteforce.rb 10.55.33.22 admin rockyou.txt
There are other ways to treat arguments and options in Ruby, but those are reserved for later ;)
Step 2: Connectivity Testing
This one requires explaining. It's a very Ruby-ish structure that you won't see anywhere else (I think).
The first line uses the function Net::Ping::ICMP.new() to create a new ICMP object tied to the IP of the target. We will later use the .ping method on this object to ping the IP the ICMP object is associated with.
What the loop does is 5.times do {block}. As written, it executes the block 5 times, like a for loop would. The .count method puts into the variable network the number of times a pass over the block returned true.
We are basically telling Ruby to count how many times the block returns true when executed 5 times.
After this, if the connectivity was poor, we terminate the program like so:
Two interesting things here.
First, the unless condition, which is the counterpart of if: it executes the block unless the condition is true, i.e., only when it is false.
Second, the abort keyword, which is a combination of print and exit.
The abort word prints the message then exits the program.
Now that the boring work is done, we can go to the main part, the actual bruteforce! :D
Step 3: Main Code
First the structure :
random-array.each do |element-of-array|
{block}
end
is a classic Ruby structure that loops until the end of the array, with element-of-array being a local variable for the block.
File.foreach(file) enumerates the lines of the file, one element per line. Adding the index clause associates each line with the place it occupies in the sequence. So in this case the loop we explained above is going to run for as many lines as there are in the wordlist file, or until we tell it to stop inside the block.
Second, the .chomp method: this method returns the string it is applied to without any trailing newline or carriage return (\n and \r).
Third, the print statement: we saw puts earlier; puts always appends a newline at the end of the string, while print does not. We use the #{variable} syntax to print the variable's value, and a carriage return "\r" at the beginning of the line, so each subsequent line erases the one before it. This is meant to keep track of how many passwords were tested without flooding the screen.
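This carriage-return trick is not Ruby-specific; here is a quick sketch of the same idea in Python (the helper name and the count of 5 are just for illustration):

```python
import sys

def progress_line(tried):
    # "\r" returns the cursor to the start of the line, so each
    # new count overwrites the previous one instead of stacking up
    return "\rpasswords tried: %d" % tried

for tried in range(1, 6):
    sys.stdout.write(progress_line(tried))
    sys.stdout.flush()
sys.stdout.write("\n")
```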
Fourth, the begin/rescue structure:
begin
{code that may raise an error}
rescue error
{block1}
else
{block2}
end
In our case, this structure allows us to continue the program even if an error occurs (which we know is going to happen if we try to connect to SSH with a wrong password). rescue basically allows us to control the error handling for a certain error.
The begin and end statements tell Ruby in which part of the program to rescue the error in such a structure.
Here we continue as normal if the error is raised, and if it's not, then we have found the password and can stop the program with a good ol' abort statement (printing the password of course).
Net::SSH::AuthenticationFailed is the error we know we are going to get every time we try a wrong password.
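For readers coming from Python, Ruby's begin/rescue/else maps directly onto try/except/else. Here is a sketch of the same control flow, with attempt_login standing in (hypothetically) for the SSH connection attempt:

```python
def attempt_login(password):
    # hypothetical stand-in for the SSH call: raises on a wrong password
    if password != "letmein":
        raise ValueError("authentication failed")

found = None
for candidate in ["123456", "password", "letmein"]:
    try:
        attempt_login(candidate)   # like the connection attempt inside begin
    except ValueError:
        continue                   # wrong password: rescue branch, keep going
    else:
        found = candidate          # no exception: else branch, password found
        break

print(found)   # letmein
```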
The main part of the "work" itself is done by Net::SSH.start(), which tries to establish an SSH connection with the parameters we give to it.
target and user are the only two mandatory parameters this requires; the rest are given as options with the structure :option => value.
An important thing to note is the authentication method: password. Of course we don't want to use keys to establish the SSH session. Also, the number of password prompts is important, so that when the password fails, Ruby does not ask you to manually retry the password via a prompt.
If the connection is established and no error is rescued, we enter the else block and we abort the program, printing the password. Voila!
Conclusion
I think that is all. Let me know what you think about this kind of article. I might do more if some of you guys like it and I feel like it.
Also, please excuse me for the poor formatting of the post, I'm not used to writing things on Null Byte really.
If you have any questions/suggestions, let me know. Cheers :D
2 Comments
Great article! I really like how you explained this! Thank you!
Great article, we don't hear enough about Ruby. It's a great language.
Cheers,
Washu
I have some code that I would like to document using doxygen in the same style
as the VXL code (with the intention of maybe contributing some of it at some
point). The doxygen manual doesn't discuss the "\\:" format that seems to be
used. Could someone explain how I can use the same style, or maybe send me a
sample config file?
Thanks,
Nick Hurlburt
Are there example images along with the correspondence data
for this program?
Regards.
Ming.
Or you can say
> {
> bigmatrix A;
> ...little memory left...
A = matrix; // create zero-size temporary and assign to A.
> ...memory is back...
> }
> there seems to be a bug in vepl_gaussian_convolution. The execution of
> the following simple program leads to a segmentation fault.
I have occasionally seen a similar problem, and it occurs on the line
double* buf = new double[width*height];
inside the "apply_op" (core implementation) of vipl_gaussian_convolution.
At first sight I do not see an error in the program around that line.
Could somebody have a look at it? Maybe run purify on it?
Peter.
Hi,
there seems to be a bug in vepl_gaussian_convolution. The execution of
the following simple program leads to a segmentation fault.
#include <vepl/vepl_gaussian_convolution.h>
#include <vil/vil_load.h>
#include <vil/vil_save.h>
int main()
{
// The input image:
vil_image in = vil_load("alesi.pgm");
// The filter:
vil_image out = vepl_gaussian_convolution(in);
// Write output:
vil_save(out, "out.pnm", "pnm");
return 0;
}
Has anybody ever tested/used this function or knows what the problem
could be?
Thanks in advance for any help,
David
It's a general memory management issue. If you declare the matrix as a
local variable, the memory will be reclaimed when the variable goes
out of scope:
{
bigmatrix A;
...little memory left...
}
...memory is back...
Otherwise, you will need to manage your own memory:
bigmatrix* A = new bigmatrix;
...
delete A;
...
Calling the destructor directly is almost always the wrong thing to do.
Amitha.
Hi,
I tried to calculate the gradient in x-direction of a gray image with
vil_convolve_1d_x. The output that I get is all over the place equal to
zero
(i.e. black image). The kernel that I use is [1, 0, -1]. If I use
another
kernel, like [1, 2, 1] (smoothing) I get the desired result. Why does
it
not work properly with the gradient kernel? Is the -1 a problem? In the
file
vil_convolve.h at line 55 there is the following remark:
// *** may not work with kernels which take negative values
but it seems to refer only to the
vil_convolve_boundary_option=vil_convolve_trim, which I do not use.
Here is my code, that is based on vxl/vil/examples/vil_convolve_1d.cxx:
#include <vcl_iostream.h>
#include <vul/vul_sprintf.h>
#include <vil/vil_image.h>
#include <vil/vil_load.h>
#include <vil/vil_save.h>
#include <vil/vil_image_as.h>
#include <vil/vil_memory_image_of.h>
#include <vil/vil_convolve.h>
int main()
{
const int N = 1;
float kernel[2*N+1] = {1, 2, 1};
vil_image I = vil_load("test.pgm");
vil_memory_image_of<float> bytes( vil_image_as_float(I) );
int w = bytes.width();
int h = bytes.height();
vil_memory_image_of<float> tmp (w, h);
vil_convolve_signal_1d<float const> K(kernel, 0, N, 2*N+1);
vil_convolve_1d_x(K,
vil_convolve_signal_2d<const float>
( bytes.row_array(), 0, 0, w, 0, 0, h ),
(float*)0,
vil_convolve_signal_2d<float>
( tmp.row_array(), 0, 0, w, 0, 0, h ),
vil_convolve_no_extend,
vil_convolve_no_extend);
vil_save(vil_image_as_byte(tmp), "out.pnm", "pnm");
return 0;
}
Does anybody has experience with vil_convolve or does anybody ever has
computed an image gradient with vxl (i hope so) and can tell me how to
do it?
Regards,
David
Hi,.
thanks
Dominique
> > post_redraw()
>
> the display on my system right away. I have to move the window, then it
> refreshes.
no, it's post_redraw that actually does nothing. moving a window
refreshes even without calling it.
anyway i get what i want, it's only to report.
domi
> remove(vgui_soview*)
> clear()
> post_redraw()
Works great, thank you. Just to inform you: post_redraw() doesn't redraw
the display on my system right away. I have to move the window, then it
refreshes.
Domi
There is
remove(vgui_soview*)
and
clear()
which are inherited from vgui_displaybase. I think those will do what
you want.
You can refresh your screen by calling the post_redraw() method.
Amitha.
On Thu, Apr 25, 2002 at 01:42:14PM +0200, Dominique wrote:
>
>
>
> _______________________________________________
> Vxl-users mailing list
> Vxl-users@...
>
I committed some changes to the testing structure of vil, and forgot
to modify the io tests. I'm fixing it now. Thanks for the heads-up.
Amitha.
On Thu, Apr 25, 2002 at 10:42:35AM +0200, Manuel Oetiker wrote:
> Hi
>
> In the VXL version cvs Thu Apr 25 10:41:11 MEST 2002
>
> is the file vil_test.h missing.
>
> vxl/vil/io/tests/test_vil_io.cxx:4: vil/vil_test.h
>
> cheers Manuel
>
Hi
In the VXL version from CVS (Thu Apr 25 10:41:11 MEST 2002),
the file vil_test.h is missing.
vxl/vil/io/tests/test_vil_io.cxx:4: vil/vil_test.h
cheers Manuel
--
--
_______ __________
__ __ \______ /___(_) ker Manuel & SysMgr @ ISG.EE - D-ITET
_ / / / _ \ __/_ / ETH-Zurich tel: +41(0)1-6325302 fax:..1199
/ /_/ // __/ /_ _ / eMail: Manuel Oetiker <moetiker@...>
\____/ \___/\__/ /_/ www:
> ... there seems to exist no interface to the vil or the vgui library.
There is a first attempt to this kind of conversion in the "conversions"
package. See also the inline functions in file mul/mil/mil_convert_vil.h
> 2. I tried also to use the vipl/vepl libraries, but neither
> example_x_gradient.cxx form VEPL nor example_sobel.cxx form VIPL worked.
> As output I got either the input image or a completely black image. Any
> suggestions?
There was indeed a bug in vipl, which has been fixed just yesterday.
So please download the latest version of vipl and it should work.
> Additionally, there is the same interface problem as above, only
> vil_images are accepted as input.
Not really. "vepl" is specialised towards vil images, for ease of use,
but vipl is really generic, it works (in a templated way) on any type
of images. Examples are provided for vil_image (of course), vnl_matrix
and vbl_array_2d, but it is straightforward to e.g. add mil_image
specialisation.
Check the files in tbl/vipl/accessors and in tbl/vipl/*/accessors/
to see what is needed.
Peter.
> vgui_utils::dump_colour_buffer would suit you better. I believe it
YES YES YES that's what I need. And moreover, it worked just right
away...
thank you
Dominique
> But still I think some kind of save_bitmap would be very useful,
> because print_psfile saves my tableau without antialiasing. The image in
> tableau looks *far* better. So for the moment I am committed to some
> screen grabbers...
If you essentially want to do a screen grab, as opposed to actually
saving the lines and such in the easy2D, perhaps
vgui_utils::dump_colour_buffer would suit you better. I believe it
just "renders" the OpenGL window into a file (via a block of memory),
so you will get whatever is displayed on your screen, including zoom,
etc.
Amitha.
> the diff result is in attachment.
Thanks! I have just inserted the diffs into CVS.
I have actually merged your changes into the existing print_psfile(),
in a backward-compatible way.
Note that now, even if an image tableau is present, the user can
choose not to print the image by passing "0" as width or height.
Could you please check if it works? I have no vgui_easy2D at hand ;-)
Peter.
Hi,
I have two problems:
1. I'd like to use the mil_gaussian_pyramid_builder_2d_general. While I
was able to build the pyramid, my problem now is, that there seems to
exist no interface to the vil or the vgui library. If I want to display
a mil-image (which is what I get from the pyramid) with vgui, I have to
save it to a file and to reload it, which is quite ugly. Why is there
not only one image class which is used by all algorithms that work on
images?
2. I tried also to use the vipl/vepl libraries, but neither
example_x_gradient.cxx form VEPL nor example_sobel.cxx form VIPL worked.
As output I got either the input image or a completely black image. Any
suggestions?
Additionally, there is the same interface problem as above, only
vil_images are accepted as input.
Regards,
David
> You may send me your file, or better yet, the output of "diff -u"
> with the original, and I can CVS commit it for you.
thank you.
the diff result is in attachment.
ciao
Dominique
> Well, I hope you will CVS commit your fixes to the vxl repository?
>
>
> Peter.
eehemmm...
with pleasure but:
1) Due to my limited knowledge of c++ I only made a fast hack. I could
share it in the form of print_psfile_nobg(sizex, sizey, colormode) or
something in that manner which would equal to print_psfile with some
lines removed, other inserted.
2) if you want me to practice CVS with your sources (not too much
experience of CVS)
so?
Dominique
> well honestly in the meantime I copied/pasted and modified the source
> code for print_psfile in vgui_easy2D.cxx.
Well, I hope you will CVS commit your fixes to the vxl repository?
Peter.
> under your easy2D. I'll fix it, but in the mean time
> you have to:
well honestly in the meantime I copied/pasted and modified the source
code for print_psfile in vgui_easy2D.cxx. The image is needed only to
get the size and color type (gray or RGB). It was crashing only because
of this. It works fine after modification.
But still I think some kind of save_bitmap would be very useful,
because print_psfile saves my tableau without antialiasing. The image in
tableau looks *far* better. So for the moment I am committed to some
screen grabbers...
but thank you, I find print_psfile usefull anyway.
Dominique
Hi again,
I've just tried it out and you're right - print_psfile
crashes! But only when there is no image tableau
under your easy2D. I'll fix it, but in the mean time
you have to:
- Use a vgui_image_tableau in the constructor of the
vgui_easy2D.
eg. vgui_image_tableau_new img("d:/kym/image.tif");
vgui_easy2D_new easy(img);
- If you have made your own image tableau derived
from vgui_image_tableau (like in xcv) make sure you
cast it to an image_tableau when making an easy2D
my_image_tableau_new img("d:/kym/image.tif");
vgui_easy2D_new
easy((vgui_image_tableau_sptr)img);
Let me know if this doesn't work,
Karen McGaul
VGG, Oxford University
> my_tab->print_psfile(filename,
> reduction_factor, print_geom_objects);
thanks for a suggestion, but it crashes. For
cerr << output.c_str() << endl;
tab->print_psfile(output.c_str(), 1, true);
I get:
log.ps
Segmentation fault
I've been trying to debug it but with no success. is it my ignorance or
something in vxl?
dominique
Jun 20, 2007 02:52 AM | benny.evangelista
Hi everybody!
I'm working on a web application project for mobile devices; all the page classes are defined like this:
public class MyPage : System.Web.UI.MobileControls.MobilePage
In the aspx code the pages inherit from "myNamespace.MyPage" and they all have the "register" directive for the "mobile" TagPrefix.
Even if I can insert <mobile: > components in the code view, switching to design view I can't see any of the controls, because "This control only works in pages of type MobilePage"!!
I tried making the aspx page inherit directly from System.Web.UI.MobileControls.MobilePage; the controls work, but the page is no longer linked to the codebehind... pretty useless
. . .
The MS reference says that
"If a mobile Web Forms page is totally self-contained, it inherits directly from the MobilePage class. In code-behind scenarios, the page might inherit from a developer-supplied class"
but it doesn't seem to be real...
I'm searching forums all over the internet... no answers for a problem like mine. I'm afraid I'm wrong in something like project settings or similar, but it's 'just' a web application with mobile pages, nothing more... I've always coded web apps with 'simple' web pages and never had problems, but now....
byez
Jun 25, 2007 10:32 PM | Zhao Ji Ma - MSFT
Hi Benny,
What is your page directive? Please specify codefile and inherits class name.
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="DefaultMobile.aspx.cs" Inherits="MyPage" %>
<%@ Register TagPrefix="mobile" Namespace="System.Web.UI.MobileControls" Assembly="System.Web.Mobile" %>
Jun 26, 2007 03:03 AM | benny.evangelista
Hi Zhao, thanks for your reply.
Here's a sample of my code, I hope you can help me
codebehind (Login.aspx.cs):
namespace MCID
{
    public class FormLogin : System.Web.UI.MobileControls.MobilePage
    {
        ...
    }
}
aspx-side code:
<%@ Page language="c#" Codebehind="Login.aspx.cs" Inherits="MCID.FormLogin" AutoEventWireup="true" EnableSessionState="False" enableViewState="False"%>
<%@ Register TagPrefix="mobile" Namespace="System.Web.UI.MobileControls" Assembly="System.Web.Mobile"%>
The whole project is a Visual Studio 2003 web app imported and converted to VS2005. I thought this might be the problem, but I tried opening a brand new project in VS2005 and it was exactly the same...
It seems the only way to make the designer work properly is to make the page directive inherit directly from System.Web.UI.MobileControls.MobilePage, but in this case I have to put all the codebehind in the aspx page... not so pretty....
Any idea??
Jun 26, 2007 07:34 AM | Zhao Ji Ma - MSFT
Benny,
Still not sure if I fully follow you. Did you mean that if you double-click controls in Design View to add an event handler, Visual Studio adds it in the ASPX file instead of the code-behind?
Here is one of the differences between ASP.NET 1.1 and ASP.NET 2.0 that might be your issue. The class defined in the code-behind in ASP.NET 2.0 is a partial class.
public partial class DefaultMobile : System.Web.UI.MobileControls.MobilePage
You don't need to change the page directive. Hope it helps!
Jun 26, 2007 09:02 AM | benny.evangelista
Thanks Zhao, but I have another problem, and I'm sorry if I couldn't explain myself in the best way... anyway I'll try [:)]
I opened a VS2003 project in VS2005, followed the conversion wizard and started coding. My pages don't have the "partial" modifier because, you know, in ASP.NET 1.1 the control definitions are included in the codebehind file; there's no designer .cs file.
Now, when I'm working on the aspx files (all of them are defined as shown in my post above), in the design view the controls are not rendered; I only see the gray warning box indicating that "This control only works in pages of type MobilePage". But my aspx page inherits from a class inheriting from MobilePage, so I don't know where I'm wrong.... It seems the IDE doesn't 'recognize' the page as a MobilePage.
Setting the "Inherits" attribute in the aspx page in this way
Inherits = "System.Web.UI.MobileControls.MobilePage"
makes the design view work, but the page is no longer bound to its codebehind... obviously.
Jun 26, 2007 10:02 PM | Zhao Ji Ma - MSFT
Well, this is my converted version:
ASPX
<%@ Register TagPrefix="mobile" Namespace="System.Web.UI.MobileControls" Assembly="System.Web.Mobile" %>
<%@ Page language="c#" Inherits="Mobile2003.MobileWebForm1" CodeFile="MobileWebForm1.aspx.cs" CodeFileBaseClass="System.Web.UI.MobileControls.MobilePage" %>
code-behind:
namespace Mobile2003
{
public partial class MobileWebForm1 : System.Web.UI.MobileControls.MobilePage
{....}
}
The partial keyword has been added correctly by Visual Studio 2005. But there is one more attribute called CodeFileBaseClass in the page directive. The attribute specifies a path to a base class for a page and its associated code-behind class. This looks like the issue. BTW, I have SP1 applied to both Visual Studio 2003 and Visual Studio 2005. Hope it helps.
Jun 27, 2007 03:45 AM | benny.evangelista
Thanks a lot! Your suggestion fixed everything, but the key was in the CodeFile attribute: I was using the 'old' CodeBehind attribute, and changing it made the design view work. I added CodeFileBaseClass too, but no difference with or without it.
Thank you
bye
6 replies
Last post Jun 27, 2007 03:45 AM by benny.evangelista | https://forums.asp.net/t/1124090.aspx?MobilePage+not+recognized | CC-MAIN-2018-09 | refinedweb | 944 | 58.69 |
TableauSDK and Jupyter Python | Thomas Pologruto Sep 27, 2018 7:04 PM
I was attempting to create an extract from Jupyter (Ubuntu 16.04 / Python 2.7.13) and I see this error, which I don't see when I run from the command line. Any ideas what I need to set to make this work?
import tableausdk as tbsdk
import tableausdk.Extract as tde
import tableausdk.Server as tds
tde.ExtractAPI.initialize()
extract = tde.Extract("my_new_file_11.tde")
---------------------------------------------------------------------------
TableauException Traceback (most recent call last)
<ipython-input-107-64322f7fa40c> in <module>()
3 import tableausdk.Server as tds
4 tde.ExtractAPI.initialize()
----> 5 extract = tde.Extract("my_new_file_11.tde")
/opt/python/anaconda/lib/python2.7/site-packages/tableausdk/Extract.pyc in __init__(self, path)
592
593 if int(ret) != int(Types.Result.SUCCESS):
--> 594 raise Exceptions.TableauException(ret, Exceptions.GetLastErrorMessage())
595
596 def close(self):
TableauException: TableauException (40200): server did not call us back
I see the log writesthis:
pid=31820----------------------------------
Creating a connection to tde server
Starting new server instance
DataEngine log file will be written to "DataExtract.log"
Launching tdeserver at "tdeserver64"
server callback endpoint: tab.tcp://127.0.0.1:
server listen endpoint: tab.tcp://:auto
Closing server (no descriptor available).
Server was successfully closed
1. Re: TableauSDK and Jupyter PythonSuraj Kumar Sep 27, 2018 11:12 PM (in response to Thomas Pologruto)1 of 1 people found this helpful
Hi Thomas,
First of All, Please do refer to these links :
1. TableauException (40200): server did not call us back
( 2 is the mostly likely to happen)
2. python - Tableau SDK TableException (40200) - Stack Overflow
and check the documentation of tde.Extract function
>>> help( tde.Extract)
You might be already familiar with this.
Also Please Refer :
Troubleshooting with the Tableau SDK
Please do let me know if the problem persists.
Regards,
Suraj
2. Re: TableauSDK and Jupyter PythonThomas Pologruto Sep 28, 2018 5:23 AM (in response to Suraj Kumar)
Thanks for the reply!
It seems to be specifically an issue from within Jupyter. Is there a way to configure the server/host/port in the initialize?
I got it to run from command line fine | https://community.tableau.com/thread/283282 | CC-MAIN-2019-09 | refinedweb | 353 | 60.72 |
Hey guys and girls. I am working on a project that needs to use templates. My professor didnt go over teh use of them in class, nor is our book very descriptive of them. Im assuming they are way easier than i make them seem, but some help , tips, etc would be very helpful.
My assignment is to make a doubly linked list, using templates etc. I have a basic skeleton for the code which is all placed within a single .h file. (We have a driver program given to use for use with our code as the main.cpp file).
//BASIC HEADER FILES HERE template <class T> struct DNode { Deque(const T &); T * item, * prev, * next; }; template <class T> class Deque { private: T * qFront, * qRear, * data; public: Deque(); bool empty(); }; /**************************************************************** FUNCTION: Deque() ARGUMENTS: int RETURNS: None NOTES: this is the Deque constructor ****************************************************************/ template <class T> Deque::Deque() { qFront = qRear = NULL; } /**************************************************************** FUNCTION: Deque() ARGUMENTS: int RETURNS: None NOTES: this is the Deque constructor ****************************************************************/ template <class T> DNode::Deque(const T& item) { prev = next = NULL; data = item; } /**************************************************************** FUNCTION: empty() ARGUMENTS: None RETURNS: bool NOTES: finds out if the deque is empty or not ****************************************************************/ template <class T> bool Deque<T>::empty() { return (qFront == NULL && qRear == NULL); } #endif //END OF DEQUE.H
I have a problem trying to compile this using the driver program in a dev project.
Here is the start code:
int main() { Deque<int> deque1; cout << "deque 1 is "; if (deque1.empty()) cout << "empty" << endl; else cout << "not empty" << endl;
And i get the following error message when attempting to compile.
[Linker error] undefined reference to `Deque<int>::Deque()'
Which means that the Deque<int> deque1 defined in main.cpp cannot find the constructor correct? This being the case, if im correct in thinkin that way, the problem lies with the way the templates work/need to be set. Could you guys give me a hand in sorting out the basics of the templates/and if its not a template problem, point that out too!!!
Thank you so much in advance | https://www.daniweb.com/programming/software-development/threads/120822/templates-how-to | CC-MAIN-2018-30 | refinedweb | 341 | 67.79 |
I am a newbie here. Trying to do my assignment.. Here is my code.
The output I get have a problem.The output I get have a problem.Code:# include <iostream> # include <ctime> # include <string> using namespace std; int main () { int num_player; string name[100]; cout << endl << "\tWelcome to game centre. I am Joe, your game instructor.\n" << "\t\tSnake and Ladder" << endl << endl << " Please enter the number of player [maximum 4 players]:" ; cin >> num_player; cout<< endl; cout << " Enter the " << num_player << " player's name: " << endl; for ( int j= 1; j <= num_player ; j++ ) { cout << j << ". " ; getline(cin,name[j]); } cout << endl; system("PAUSE"); return 0; }
System don't allow me to input for name[1];
I don't understand why~
Is it the getline problem?
can someone fix it for me? T^T | https://cboard.cprogramming.com/cplusplus-programming/152554-i-used-getline-array-its-size-determine-using-pointer-cant~-y.html | CC-MAIN-2017-13 | refinedweb | 132 | 76.42 |
Our goal is guide an imaginary coach in his decision about which play to call.
In basketball, shots beyond a certain distance are worth three points while inside shots are worth only two points. The question we will ask is when should a team go for the three and when should it be happy with the two.
This depends on many factors. We will focus on three:
(i) the chance that the team that takes a given shot gets the basket;
(ii) the chance that the team that takes a shot gets to take the next shot (either by getting the rebound or by stealing from the other side); and
(iii) whether the game is played with "winner-takes-out" or "loser-takes-out"
Let's say that your team's probability of hitting an inside shot is p2 and the probability of hitting an outside shot is p3 where p2 > p3. Naively, you might think it is better to shoot a three point shot when 3*p3 > 2*p2, but that depends on the take-out rule.
In informal games, there are two ways to play:
(i) "winner-takes-out" -- the team that has just made the basket takes the ball out again; and
(ii) "loser-takes-out" -- the opponents of the team that made the basket take the ball out.
Warm-up 1: Consider a game up to three. Suppose that after a missing shot by one team, the other team always makes the next shot (i.e., no steals and no offensive rebounds).. End of Warm-Up 1..
Let's call the team with initial possession A and the other team B. Team A will get the next basket in the next shot with probability p3. But A can get the next basket also if it missess). See figure And so on. makes the programming easier when we deal with more general cases. End of warm-up 2.
How do we deal with games having more points? For this we use recursion. Suppose again that both teams shoot only three point shots under loser-takes-out. Let probAwin(Aposs, x, y) mean "the probability that A wins when A has possession, A has x points to win, and B has y points to win." Then we have:
probAwin(Aposs, x, y) = (p3 * probAwin(Bposs, x-3,y)) + (1-p3)*probAwin(Bposs, x, y).
The first term corresponds to the case in which team A hits the three, so has only x-3 points to go, but B has possession because it is loser-take-out. The second term is the case in which A misses the three (with probability 1 - p3), but now the question is what is the probability that A can win (and I mean A) if B has possession, A has x points to go, and B has y points to go.
Whereas this formulation is correct, it incorporates neither a point nor a probability cutoff, so it will never stop. The point cutoff is easy: when either x or y is zero (or negative), the game is over. For the probability cutoff, we fold the probability into probAwin. We might call this folded probability "f". That is, probAwin(f, Aposs, x, y) mean "the probability that A wins when A has possession, A has x points to win, B has y points to win, and the folded probability is) return (f * p3 * probAwin(1, Bposs, x-3,y)) + probAwin(f*(1-p3),Bposs, x, y) end
The last line requires some explanation. The first term (f * p3 * probAwin(1, Bposs, x-3,y)) again corresponds to the total contribution to A's winning probability given that f is the folded probability, A has possession, and p3 is the probability that a three point shot will go in. The "1" in the recursive call has to do with the fact that B will be taking it out when A has x-3 points to go and B has y points to go, so the folded probability is 1 for that call. The second term probAwin(f*(1-p3),Bposs, x, y) corresponds to the contribution to A's winning probability given that the case in which A misses in this situation has probability f*(1-p3). For completeness, we should also write the pseudo-code when B has possession (but we are still computing the probability that A will win). == Bposs) return (f * p3 * probAwin(1, Aposs, x,y-3)) + probAwin(f*(1-p3),Aposs, x, y) end
When p3 = 0.7, figure shows some of the possible calls for the loser-takes-out situation.
Warm-up 3: How would the pseudo-code for probAwin(f, Bposs, x, y) change under a winner-takes-out rule?
Solution: The only change is to the first term of the last return statement. Possession would not change, so it should read: return (f * p3 * probAwin(1, Bposs, x,y-3)) + probAwin(f*(1-p3),Aposs, x, y). End of Warm-Up 3.
How do we deal with the case when a team can shoot for two, three, or (if you ask the Harlem Globetrotters) four points? We use the same basic structure of probAwin, but this time we must consider the various shooting options. Because A wants to win, when A has possession, the procedure should consider each possible shooting option and choose the one that gives A the GREATEST probability of winning the whole game. When B has possession, the procedure should again consider each possible shooting option, but should choose the one that gives A the LEAST probability of winning the whole game. In the loser-takes-out case, we would) begin shoot2prob := (f * p2 * probAwin(1, Bposs, x-2,y)) + probAwin(f*(1-p2),Bposs, x, y) shoot3prob := (f * p3 * probAwin(1, Bposs, x-3,y)) + probAwin(f*(1-p3),Bposs, x, y) return max(shoot2prob, shoot3prob) end if(whohas == Bposs) begin shoot2prob := (f * p2 * probAwin(1, Aposs, x,y-2)) + probAwin(f*(1-p2),Aposs, x, y) shoot3prob := (f * p3 * probAwin(1, Aposs, x,y-3)) + probAwin(f*(1-p3),Aposs, x, y) return min(shoot2prob, shoot3prob) end end
Finally, we deal with the question of efficiency. Suppose we play a game to 15. There may be many ways to arrive at the state when player A has, say, 7 points to go, player B has 5 to go and A has possession. Instead of recomputing the probability of that situation each time, we keep a table that holds all these "end-game" answers. So, to compute the probability that team A wins a game up to some number x, the dynamic programming approach calculates the probability that A wins all shorter games first.
Your task is to design a coach. You will be playing against another coach. You will be given a probability of a two point shot and a probability of a three point shot. Each team will be given a different p3 and p2, but they will have the properties that p3 will always be less than p2 and the sum of the expected values for the team having initial possession will equal the sum for the other team. For example, the team having initial possession may have p2 = 0.8 and p3 = 0.6 so the sum of the expected values is 1.6 + 1.8 = 3.4. The non-initiating team in that case may have p2 = 0.7 and p3 = 0.6667 thus having sum of expected values of 1.4 + 2 = 3.4. You will also be given the number of points in the game (e.g. 11 or 15). Finally, you will be told whether the game is winner-takes-out or loser-takes-out.
Each time you have possession, you can decide whether your team will attempt a two or three point shot. The architect will use a random number generator (with a seed I provide) to determine which shots make it into the basket and which don't. You will play two games with each opponent. In the first game, one of you has initial possession. In the second game, the other does. (The probabilities depend on which team is initiating.) Your score will be the total number of points you win over the two games. The winner of each competition is the person with the highest score.
Receive shot choice from team in possession of the ball. Returns result of the shot based on probabilities and then asks the appropriate team (depending on whether we are playing winner-takes-out or loser-takes-out) for its shot choice. You will keep score and ensure that each player's program stays within the two minute limit. | http://cs.nyu.edu/courses/Fall12/CSCI-GA.2965-001/basketball.html | CC-MAIN-2014-42 | refinedweb | 1,453 | 70.02 |
[ Updated for UWP apps on Windows 10. For Windows 8.x articles, see the archive ]
This topic describes the lifecycle of a Universal Windows Platform (UWP) app from the time it is launched until it is closed.
A little history
Before Windows 8, apps had a simple lifecycle. Win32 and .NET apps are either running or not running. When a user minimizes them, or switches away from them, they continue to run. This was fine until portable devices and power management became increasingly important.
Windows 8 introduced a new application model with Windows Store apps. At a high level, a new suspended state was added. A Windows Store app is suspended shortly after the user minimizes it or switches to another app. This means that the app's threads are stopped and the app is left in memory unless the operating system needs to reclaim resources. When the user switches back to the app, it can be quickly restored to a running state.
There are various ways for apps that need to continue to run when they are in the background such as background tasks, extended execution, and activity sponsored execution (for example, the BackgroundMediaEnabled capability which allows an app to continue to play media in the background). Also, background transfer operations can continue even if your app is suspended or even terminated. For more info, see How to download a file.
By default, apps that are not in the foreground are suspended which results in power savings and more resources available for the app currently in the foreground.
The suspended state adds new requirements for you as a developer because the operating system may elect to terminate a suspended app in order to free up resources. The terminated app will still appear in the task bar. When the user click on it, the app must restore the state that it was in before it was terminated because the user will not be aware that the system closed the app. They will think that it has been waiting in the background while they were doing other things and will expect it to be in the same state it was in when they left it. In this topic we will look at how to accomplish that.
Windows 10, version 1607, introduces two more app model states: Running in foreground and Running in background. We will also look at these new states in the sections that follow.
App execution state
This illustration represents the possible app model states starting in Windows 10, version 1607. Let's walk through the typical lifecycle of a Windows Store app.
Apps enter the running in background state when they are launched or activated. These terms seem similar but they refer to different ways the operating system may start your app. Let's first look at launching an app.
App launch
The OnLaunched method is called when an app is launched. It is passed a LaunchActivatedEventArgs parameter which provides, among other things, the arguments passed to the app, the identifier of the tile that launched the app, and the previous state that the app was in.
Get the previous state of your app from LaunchActivatedEventArgs.PreviousExecutionState which returns an ApplicationExecutionState. Its values and the appropriate action to take due to that state are as follows:
Note Current user session is based on Windows logon. As long as the current user hasn't logged off, shut down, or restarted Windows, the current user session persists across events such as lock screen authentication, switch-user, and so on.
One important circumstance to be aware of is that if the device has sufficient resources, the operating system will prelaunch frequently used apps that have opted in for that behavior in order to optimize responsiveness. Apps that are prelaunched are launched in the background and then quickly suspended so that when the user switches to them, they can be resumed which is faster than launching the app.
Because of prelaunch, the app’s OnLaunched() method may be initiated by the system rather than by the user. Because the app is prelaunched in the background you may need to take different action in OnLaunched(). For example, if your app starts playing music when launched, they will not know where it is coming from because the app is prelaunched in the background. Once your app is prelaunched in the background, it is followed by a call to Application.Suspending. Then, when the user does launch the app, the resuming event is invoked as well as the OnLaunched() method. See Handle app prelaunch for additional information about how to handle the prelaunch scenario. Only apps that opt-in are prelaunched.
Windows displays a splash screen for the app when it is launched. To configure the splash screen, see Adding a splash screen.
While the splash screen is displayed, your app should register event handlers and set up any custom UI it needs for the initial page. See that these tasks running in the application’s constructor and OnLaunched() are completed within a few seconds or the system may think your app is unresponsive and terminate it. If an app needs to request data from the network or needs to retrieve large amounts of data from disk, these activities should be completed outside of launch. An app can use its own custom loading UI or an extended splash screen while it waits for long running operations to finish. See Display a splash screen for more time and the Splash screen sample for more info.
After the app completes launching, it enters the Running state and the splash screen disappears and all splash screen resources and objects are cleared.
App activation
In contrast to being launched by the user, an app can be activated by the system. An app may be activated by a contract such as the share contract. Or it may be activated to handle a custom URI protocol or a file with an extension that your app is registered to handle. For a list of ways your app can be activated, see ActivationKind.
The Windows.UI.Xaml.Application class defines methods you can override to handle the various ways your app may be activated. OnActivated can handle all possible activation types. However, it's more common to use specific methods to handle the most common activation types, and use OnActivated as the fallback method for the less common activation types. These are the additional methods for specific activations:
OnCachedFileUpdaterActivated
OnFileActivated
OnFileOpenPickerActivated OnFileSavePickerActivated
OnSearchActivated
OnShareTargetActivated
The event data for these methods includes the same PreviousExecutionState property that we saw above, which tells you which state your app was in before it was activated. Interpret the state and what you should do it the same way as described above in the App launch section.
Note If you log on using the computer's Administrator account, you can't activate UWP apps.
Running in the background
Starting with Windows 10, version 1607, apps can run background tasks within the same process as the app itself. Read more about it in Background activity with the Single Process Model. We won't go into in-process background processing in this article, but how this impacts the app lifecycle is that two new events have been added related to when your app is in the background. They are: EnteredBackground and LeavingBackground.
These events also reflect whether the user can see your app's UI.
Running in the background is the default state that an application is launched, activated, or resumed into. In this state your application UI is not visible yet.
Running in the foreground
Running in the foreground means that your app's UI is visible.
The LeavingBackground event is fired just before your application UI is visible and before entering the running in foreground state. It also fires when the user switches back to your app.
Previously, the best location to load UI assets was in the Activated or Resuming event handlers. Now LeavingBackground is the best place to verify that your UI is ready.
It is important to check that visual assets are ready by this time because this is the last opportunity to do work before your application is visible to the user. All UI work in this event handler should complete quickly, as it impacts the launch and resume time that the user experiences. LeavingBackground is the time to ensure the first frame of UI is ready. Then, long-running storage or network calls should be handled asynchronously so that the event handler may return.
When the user switches away from your application, your app reenters the running in background state.
Reentering the background state
The EnteredBackground event indicates that your app is no longer visible in the foreground. On the desktop EnteredBackground fires when your app is minimized; on phone, when switching to the home screen or another app.
Reduce your app's memory usage
Since your app is no longer visible to the user, this is the best place to stop UI rendering work and animations. You can use LeavingBackground to start that work again.
If you are going to do work in the background, this is the place to prepare for it. It is best to check MemoryManager.AppMemoryUsageLevel and, if needed, reduce the amount of memory being used by your app when it is running in the background so that your app doesn't risk being terminated by the system to free up resources.
See Reduce memory usage when your app moves to the background state for more details.
Save your state
The suspending event handler is the best place to save your app state. However, if you are doing work in the background (for example, audio playback, using an extended execution session or in-proc background task), it is also a good practice to save your data asynchronously from your EnteredBackground event handler. This is because it is possible for your app to be terminated while it is at a lower priority in the background. And because the app will not have gone through the suspended state in that case, your data will be lost.
Saving your data in your EnteredBackground event handler, before background activity begins, ensures a good user experience when the user brings your app back to the foreground. You can use the application data APIs to save data and settings. For more info, see Store and retrieve settings and other app data.
After you save your data, if you are over your memory usage limit, then you can release your data from memory since you can reload it later. That will free memory that can be used by the assets needed for background activity.
Be aware that if your app has background activity in progress that it can move from the running in the background state to the running in the foreground state without ever reaching the suspended state.
Asynchronous work and Deferrals.
If you need more time to save your state, investigate ways to save your state in stages before your app enters the background state so that there is less to save in your EnteredBackground event handler. Or you may request an ExtendedExecutionSession to get more time. There is no guarantee that the request will be granted, however, so it is best to find ways to minimize the amount of time you need to save your state.
App suspend
When the user minimizes an app Windows waits a few seconds to see whether the user will switch back to it. If they do not switch back within this time window, and no extended execution, background task, or activity sponsored execution is active, Windows suspends the app. An app is also suspended when the lock screen appears as long as no extended execution session, etc. is active in that app.
When an app is suspended, it invokes the Application.Suspending event. Visual Studio’s UWP project templates provide a handler for this event called OnSuspending in App.xaml.cs. Prior to Windows 10, version 1607, you would put the code to save your state here. Now the recommendation is to save your state when you enter the background state, as described above..
Be aware of the deadline
In order to ensure a fast and responsive device, there is a limit for the amount of time you have to run your code in your suspending event handler. It is different for each device, and you can find out what it is using a property of the SuspendingOperation object called the deadline.
As with the EnteredBackground event handler, if you make an asynchronous call from your handler, control returns immediately from that asynchronous call. That means that execution can then return from your event handler and your app will move to the suspend state even though the asynchronous call hasn't completed yet. Use the GetDeferral method on the SuspendingOperation object (available via the event args) to delay entering the suspended state until after you call the Complete method on the returned SuspendingDeferral object.
If you need more time, you may request an ExtendedExecutionSession. There is no guarantee that the request will be granted, however, so it is best to find ways to minimize the amount of time you need in your Suspended event handler.
App terminate
The system attempts to keep your app and its data in memory while it's suspended. However, if the system does not have the resources to keep your app in memory, it will terminate your app. Apps don't receive a notification that they are being terminated, so the only opportunity you have to save your app's data is in your OnSuspension event handler, or asynchronously from your EnteredBackground handler.
When your app determines that it has been activated after being terminated, it should load the application data that it saved so that the app is in the same state it was in before it was terminated. When the user switches back to a suspended app that has been terminated, the app should restore its application data in its OnLaunched method. The system doesn't notify an app when it is terminated, so your app must save its application data and release exclusive resources and file handles before it is suspended, and restore them when the app.
App resume
A suspended app is resumed when the user switches to it or when it is the active app when the device comes out of a low power state.
When an app is resumed from the Suspended state, it enters the Running in background state and the system restores the app where it left off so that it appears to the user as if it has been running all along. No app data stored in memory is lost. Therefore, most apps don't need to restore state when they are resumed though they should reacquire any file or device handles that they released when they were suspended, and restore any state that was explicitly released when the app was suspended.
You app may be suspended for hours or days. If your app has content or network connections that may have gone stale, these should be refreshed when the app resumes. If an app registered an event handler for the Application.Resuming event, it is called when the app is resumed from the Suspended state. You can refresh your app content and data in this event handler.
If a suspended app is activated to participate in an app contract or extension, it receives the Resuming event first, then the Activated event.
If the suspended app was terminated, there is no Resuming event and instead OnLaunched() is called with an ApplicationExecutionState of Terminated. Because you saved your state when the app was suspended, you can restore that state during OnLaunched() so that your app appears to the user as it was when they switched away from it.
While an app is suspended, it does not receive any network events that it registered to receive. These network events are not queued--they are simply missed. Therefore, your app should test the network status when it is resumed.
Note Because the Resuming event is not raised from the UI thread, a dispatcher must be used if the code in your resume handler communicates with your UI. See Update the UI thread from a background thread for a code example of how to do this.
For general guidelines, see Guidelines for app suspend and resume.
App close
Generally, users don't need to close apps, they can let Windows manage them. However, users can choose to close an app using the close gesture or by pressing Alt+F4 or by using the task switcher on Windows Phone.
There is not an event to indicate that the user closed the app. When an app is closed by the user, it is first suspended to give you an opportunity to save its state. In Windows 8.1 and later, after an app has been closed by the user, the app is removed from the screen and switch list but not explicitly terminated.
Closed-by-user behavior:.
We recommend that apps not close themselves programmatically unless absolutely necessary. For example, if an app detects a memory leak, it can close itself to ensure the security of the user's personal data.
App crash
The system crash experience is designed to get users back to what they were doing as quickly as possible. You shouldn't provide a warning dialog or other notification because that will delay the user.
If your app crashes, stops responding, or generates an exception, a problem report is sent to Microsoft per the user's feedback and diagnostics settings. the Documents or Pictures libraries.
App lifecycle and the Visual Studio project templates
The basic code that is relevant to the app lifecycle is provided in the Visual Studio project templates. The basic app handles launch activation, provides a place for you to restore your app data, and displays the primary UI even before you've added any of your own code. For more info, see C#, VB, and C++ project templates for apps.
Key application lifecycle APIs
- Windows.ApplicationModel namespace
- Windows.ApplicationModel.Activation namespace
- Windows.ApplicationModel.Core namespace
- Windows.UI.Xaml.Application class (XAML)
- Windows.UI.Xaml.Window class (XAML)
Note
This article is for Windows 10 developers writing Universal Windows Platform (UWP) apps. If you’re developing for Windows 8.x or Windows Phone 8.x, see the archived documentation. | https://docs.microsoft.com/en-us/windows/uwp/launch-resume/app-lifecycle | CC-MAIN-2017-34 | refinedweb | 3,077 | 61.16 |
Subgraph: This Security-Focused Distro Is Malware’s Worst Nightmare
See: Linux Malware on the Rise: A Look at Recent Threats
With the Linux desktop popularity on the rise, you can be sure desktop malware and ransomware attacks will also be on the increase. That means Linux users, who have for years ignored such threats, should begin considering that their platform of choice could get hit.
What do you do?
If you’re a Linux desktop user, you might think about adopting a distribution like Subgraph. Subgraph is a desktop computing and communication platform designed to be highly resistant to network-borne exploits and malware/ransomware attacks. But unlike other platforms that might attempt to achieve such lofty goals, Subgraph makes this all possible, while retaining a high-level of user-friendliness. Thanks to the GNOME desktop, Subgraph is incredibly easy to use.
What Subgraph does differently
It all begins at the core of the OS. Subgraph ships with a kernel built with grsecurity/PaX (a system-wide patch for exploit and privilege escalation mitigation), and RAP (designed to prevent code-reuse attacks on the kernel to mitigate against contemporary exploitation techniques). For more information about the Subgraph kernel, check out the Subgraph kernel configs on GitHub.
Subgraph also runs exposed and vulnerable applications within unique environments, known as Oz. Oz is designed to isolate applications from one another and only grant resources to applications that need them. The technologies that make up Oz include:
Linux namespaces
Restricted file system environments
Desktop isolation
Seccomp and Berkeley Packet Filter (bpf)
Other security features include:
Most of the custom Subgraph code is written in the memory-safe language, Golang.
AppArmor profiles that cover many system utilities and applications.
Security event monitor.
Desktop notifications (coming soon).
Roflcoptor tor control port filter service.
Installing Subgraph
It is important to remember that Subgraph is in alpha release, so you shouldn’t consider this platform as a daily driver. Because it’s in alpha, there are some interesting hiccups regarding the installation. The first oddity I experienced is that Subgraph cannot be installed as a VirtualBox virtual machine. No matter what you do, it will not work. This is a known bug and, hopefully, the developers will get it worked out.
The second issue is that installing Subgraph by way of a USB device is very tricky. You cannot use tools like Unetbootin or Multiboot USB to create a bootable flash drive. You can use GNOME Disks to create a USB drive, but your best bet is the dd command. Download the ISO image, insert your USB drive into the computer, open a terminal window, and locate the name of the newly inserted USB device (the command lsblk works fine for this. Finally, write the ISO image to the USB device with the command:
dd bs=4M if=subgraph-os-alpha_XXX.iso of=/dev/SDX status=progress && sync
where XXX is the Subgraph release number and SDX is the name of your USB device.
Once the above command completes, you can reboot your machine and install Subgraph. The installation process is fairly straightforward, with a few exceptions. The first is that the installation completely erases the entire drive, before it installs. This is a security measure and cannot be avoided. This process takes quite some time (Figure 1), so let it do its thing and go take care of another task.
subgraph1.jpg
Next, you must create a passphrase for the encryption of the drive (Figure 2).
subgraph2.jpg
This passphrase is used when booting your device. If you lose (or forget) the passphrase, you won’t be able to boot into Subgraph. This passphrase is also the first line of defence against anyone who might try to get to your data, should they steal your device… so choose wisely.
The last difference between Subgraph and most other distributions, is that you aren’t given the opportunity to create a username. You do create a user password, which is used for the default user… named user. You can always create a new user (once the OS is installed), either by way of the command line or the GNOME Settings tool.
Once installed, your Subgraph system will reboot and you’ll be prompted for the disk encryption passphrase. Upon successful authentication, Subgraph will boot and land on the GNOME login screen. Login with username user and the password you created during installation.
Usage
There are two important things to remember when using Subgraph. First, as I mentioned earlier, this distribution is in alpha development, so things will go wrong. Second, all applications are run within sandboxes and networking is handled through Tor, so you’re going to experience slower application launches and network connections than you might be used to.
I was surprised to find that Tor Browser (the default—and only installed—browser) wasn’t installed out of the box. Instead, there’s a launcher on the GNOME Dash that will, upon first launch, download the latest version. That’s all fine and good, but the download and install failed on me twice. Had I been working through a regular network connection, this wouldn’t have been such a headache. However, as Subgraph was working through Tor, my network connection was painfully slow, so the download, verification, and install of Tor Browser (a 26.8 MB package) took about 20 minutes. That, of course, isn’t the fault of Subgraph but of the Tor network to which I was connected. Until Tor Browser was up and running, Subgraph was quite limited in what I could actually do. Eventually, Tor Browser downloaded and all worked as expected.
Application sandboxes
Not every application has to go through the process of downloading a new version upon first launch. In fact, Tor Browser was the only application I encountered that did. When you do open up a new application, it will first start its own sandbox and then open the application in question. Once the application is up and running, you will see a drop-down in the top panel that lists each current application sandbox (Figure 3).
subgraph3.jpg
From each application sub-menu, you can add files to that particular sandbox or you can shutdown the sandbox. Shutting down the sandbox effectively closes the application. This is not how you should close the application itself. Instead, close the application as you normally would and then, if you’re done working with the application, you can then manually close the sandbox (through the drop-down). If you have, say, LibreOffice open and you close it by way of closing the sandbox, you run the risk of losing information.
Because each application starts up in its own sandbox, applications don’t open as quickly as they would otherwise. This is the tradeoff you make for using Subgraph and sandboxes. For those looking to get the most out of desktop security, this is a worthwhile exchange.
A very promising distribution
For anyone hoping to gain the most security they can on a desktop computer, Subgraph is one seriously promising distribution. Although it does suffer from many an alpha woe, Subgraph looks like it could make some serious waves on the desktop—especially considering how prevalent malware and ransomware has become. Even better, Subgraph could easily become a security-focused desktop distribution that anyone (regardless of competency) could make use of. Once Subgraph is out of alpha, I predict big things from this unique flavor of Linux.
Learn more about Linux through the free "Introduction to Linux" course from The Linux Foundation and edX.
-
- Log in or register to post comments
- Print This
- Like (2 likes) | https://www.linux.com/learn/intro-to-linux/2018/1/subgraph-security-focused-distro-malwares-worst-nightmare | CC-MAIN-2018-09 | refinedweb | 1,276 | 54.63 |
Infinispan query - null values in the query resultsJithendra reddy Feb 22, 2013 7:00 PM
Hi,
I have been trying to get the infinispan querying work for my POC. I am using the infinispan 5.1.8.Final for my implementation.
I have an object called QCInventory.java which is indexed and has @providedId annotation.
There are some fields in the pojo which are annotated with @Field and all of them are set to no anlyzing. I dont want to be analyzed as my requirement is to match the values as provided in the input.
I ran some queries by putting 10 QCInventory.java objects in cache with the key as the serviceId ( a field value in QCInventory pojo and is unique for each pojo) and value as the corresponding QCInventory object.
When the query is serviceId:123485 which should match exactly one pojo, the result size of CacheQuery is 1 but when i try to display that match using .list() method, it gives me a null value.
Similarly when the query is enterpriseId:24769 ( all the pojos in cache has the same enterpriseId value), the result size is 10, but when i list them using the .list() method, 7 of them are null values and 3 of them are the pojos.
Need to know why am i getting those null values as part of the .list() method. The QcInventory pojo looks like this.
@Indexed
@ProvidedId
public class Qcinventory implements Serializable{
/**
*
*/
private static final long serialVersionUID = 146546358216584L;
@Field(analyze = Analyze.NO) String serviceId;
@Field(analyze = Analyze.NO) String enterpriseId;
}
I am calling the query like this:
SearchManager sm = Search.getSearchManager(cache);
QueryBuilder qb = sm.buildQueryBuilderForClass(Qcinventory.class).get();
Query q = new QueryParser(Version.LUCENE_35, "serviceId", new StandardAnalyzer(Version.LUCENE_35)).parse("serviceId:123485");
CacheQuery cq = sm.getQuery(q, Qcinventory.class);
//then loop through the objects from cq.list()
Any kind of suggestion or help will be highly appreciated.
We are evaluating infinispan querying to be used for our inventory searching capabilities. A major decision is pending on the outcome of this POC. So, expecting help as soon as possible.
1. Re: Infinispan query - null values in the query resultsSanne Grinovero Feb 28, 2013 7:29 AM (in response to Jithendra reddy)
Hi,
since you need exact matches, you are correct you should not Analyze the field, but you need to not apply the analyzer to the Query too.
The QueryParser API expects an Analyzer, the easiest solution is to avoid the parser and use a TermQuery
TermQuery q = new TermQuery( new Term( "serviceId", "123485" ) );
CacheQuery cq = sm.getQuery(q, Qcinventory.class);
2. Re: Infinispan query - null values in the query resultsJithendra reddy Feb 28, 2013 7:23 PM (in response to Sanne Grinovero)
Thanks Sanne for your response. I resolved it. It was my bad.
The null values were coming becuase those cache entries were no longer in cache, but were available in the index. It was because of my max entries in the cache configuration being kept low. After i have increased them to a considerable amount, i no longer have the issue.
Jithendra | https://developer.jboss.org/thread/221655 | CC-MAIN-2018-39 | refinedweb | 514 | 57.57 |
I recently had an master / details scenario where the details popup was much larger than the master control. If I all the way down in the details popup and then returned to the master control, the browser would maintain the scrolling position and the master control would no longer be visible unless I scrolled up. Fortunately Silverlight provides a mechanism for calling JavaScript functions. ;)
Steps:
1. Add a top anchor tag to the top of the HTML page rendering your Silverlight object.
<a name=”top” />
2. Add a JavaScript function to the same page.
function ScrollToTop { location.href=”#top”; }
3. Use the following code to call from Silverlight:
using System.Windows.Browser;
…
HtmlPage.Window.Invoke(“ScrollToTop”);
Note: I tried using “windows.scroll(0, 0)” in my JavaScript function to no avail, but using an anchor tag worked just fine.
I tried windows.scroll(0,0) to, but it ignored me. When I try your solution, I get an error in the debugger. Any idea what causes it?
Navigation is only supported to relative URIs that are fragments, or begin with '/' or which contain ';component/'.
I haven't fully tried this, but you have a syntax error in the function declaration. You need () after the name, then the squiggly bracket.
function ScrollToTop ()
{
} | http://johnlivingstontech.blogspot.com/2010/04/forcing-browser-to-scroll-to-top-using.html | CC-MAIN-2018-30 | refinedweb | 211 | 68.16 |
SFTP connectivity using HCI/HCP Integration services
Dear SCN Friends,
I would like to share knowledge on SFTP connectivity using HCI/HCP Integration services.I’m assuming most of you might have already worked on it in PI/PO.So I’m going to focus only on key areas where we expect some guidance .
HCI acts as SFTP client in both pull and push .
SFTP adapter uses the SSH protocol to transfer files.So to mutually identify who is SFTP server and who is SFTP client ,need to exchange public keys/certificates between client and server.
Below are the steps for certificate exchange between HCI and SFTP server.
1)Request SAP to share SFTP client public key .
2)SAP will generate public and private key and store it in HCI key store and share public key to you.
3)Share above public key to team who is hosting SFTP server.
4)Request SFTP server public key,host name and public key algorithm from the team who is hosting SFTP server .
5)Share above public key to SAP ,they will store in known_host file .
Inbound:
Outbound:
If anybody wondering are these steps valid for sender or receiver ? Here is the answer .This setup is enough for both the both directions 🙂
Now we can jump into Iflow build.Now you might be wondering does HCI/SFTP adapter supports all options that we have in PI/PO ? No,but it supports all basic features that we expect in SFTP adapter.
For below requirements,you might feel bit difficult.
1)I want to write output file name same as input filename .Is it possible ? If yes ,How ?
Ans:Yes .It’s possible.Keep filename blank.
2)I want to read my input filename dynamically.Is it possible ? If yes ,How ?
3)I want to write output filename dynamically.Is it possible ? If yes ,How ?
Ans:Yes.It’s possible.Below is an example for your reference.
Use case:I need to set my output file as Inputfilename.xml by reading input file name
My input filename is CPSalesAll.csv , I need to set it to CPSalesAll.xml
I’m using below piece of code in a groovy script to read and set filename dynamically .
def OldFileName = message.getHeaders(); def value = OldFileName.get("CamelFileName").replace('csv','xml'); message.setHeader("CamelFileName",value);
4) Does HCI/SFTP sender polling flexible enough to read files ?
Yes.
5)What Authentication modes does HCI/SFTP adapter supports?
Basic Auth and Certificate based Auth
Rest of the SFTP adapter configurations are pretty straight forward .Just go through SAP HCI documentation if you stuck with any doubts.
Regards,
Venkat
Useful info.
Good blog.
I have one question. Where to place the above script on HCI Integration flow to change the file name extension dynamically?
How can I read file dynamically based on filename,for eg. i have multiple files for order at server and i need to add them back as an attachment to respective order,so how can i get the respective file. | https://blogs.sap.com/2016/07/10/sftp-connectivity-using-hcihcp-integration-services/ | CC-MAIN-2021-49 | refinedweb | 502 | 69.48 |
Cosine Similarity Explained using Python
Want to share your content on python-bloggers? click here.
In this article we will discuss cosine similarity with examples of its application to product matching in Python.
Table of Contents:
- Introduction
- Cosine Similarity (Overview)
- Product Similarity using Python (Example)
- Conclusion
Introduction
A lot of interesting cases and projects in the recommendation engines field heavily relies on correctly identifying similarity between pairs of items and/or users.
There are several approaches to quantifying similarity which have the same goal yet differ in the approach and mathematical formulation.
In this article we will explore one of these quantification methods which is cosine similarity. And we will extend the theory learnt by applying it to the sample data trying to solve for user similarity.
The concepts learnt in this article can then be applied to a variety of projects: documents matching, recommendation engines, and so on.
Cosine Similarity (Overview)
Cosine similarity is a measure of similarity between two non-zero vectors. It is calculated as the angle between these vectors (which is also the same as their inner product).
Well that sounded like a lot of technical information that may be new or difficult to the learner. We will break it down by part along with the detailed visualizations and examples here.
Let’s consider three vectors:
$$
\overrightarrow{A} = \begin{bmatrix} 1 \space \space \space 4\end{bmatrix}
$$
$$
\overrightarrow{B} = \begin{bmatrix} 2 \space \space \space 4\end{bmatrix}
$$
$$
\overrightarrow{C} = \begin{bmatrix} 3 \space \space \space 2\end{bmatrix}
$$
From the graph we can see that vector A is more similar to vector B than to vector C, for example.
But how were we able to tell? Well by just looking at it we see that they A and B are closer to each other than A to C. Mathematically speaking, the angle A0B is smaller than A0C.
Formula
Going back to mathematical formulation (let’s consider vector A and vector B), the cosine of two non-zero vectors can be derived from the Euclidean dot product:
$$ A \cdot B = \vert\vert A\vert\vert \times \vert\vert B \vert\vert \times \cos(\theta)$$
which solves for:
$$ Similarity(A, B) = \cos(\theta) = \frac{A \cdot B}{\vert\vert A\vert\vert \times \vert\vert B \vert\vert} $$
Solving for components
Let’s break down the above formula.
Step 1:
We will start from the nominator:
$$ A \cdot B = \sum_{i=1}^{n} A_i \times B_i = (A_1 \times B_1) + (A_2 \times B_2) + … + (A_n \times B_n) $$
where \( A_i \) and \( B_i \) are the \( i^{th} \) elements of vectors A and B.
For our case we have:
$$ A \cdot B = (1 \times 2) + (4 \times 4) = 2 + 16 = 18 $$
Perfect, we found the dot product of vectors A and B.
Step 2:
The next step is to work through the denominator:
$$ \vert\vert A\vert\vert \times \vert\vert B \vert\vert $$
What we are looking at is a product of vector lengths. In simple words: length of vector A multiplied by the length of vector B.
The length of a vector can be computed as:
$$ \vert\vert A\vert\vert = \sqrt{\sum_{i=1}^{n} A^2_i} = \sqrt{A^2_1 + A^2_2 + … + A^2_n} $$
where \( A_i \) is the \( i^{th} \) element of vector A.
For our case we have:
$$ \vert\vert A\vert\vert = \sqrt{1^2 + 4^2} = \sqrt{1 + 16} = \sqrt{17} \approx 4.12 $$
$$ \vert\vert B\vert\vert = \sqrt{2^2 + 4^2} = \sqrt{4 + 16} = \sqrt{20} \approx 4.47 $$
Step 3:
At this point we have all the components for the original formula. Let’s plug them in and see what we get:
$$. Note that this algorithm is symmetrical meaning similarity of A and B is the same as similarity of B and A.
Addition
Following the same steps, you can solve for cosine similarity between vectors A and C, which should yield 0.740.
This proves what we assumed when looking at the graph: vector A is more similar to vector B than to vector C. In the example we created in this tutorial, we are working with a very simple case of 2-dimensional space and you can easily see the differences on the graphs. However, in a real case scenario, things may not be as simple. In most cases you will be working with datasets that have more than 2 features creating an n-dimensional space, where visualizing it is very difficult without using some of the dimensionality reducing techniques (PCA, tSNE).
Product Similarity using Python (Example)
The vector space examples are necessary for us to understand the logic and procedure for computing cosine similarity. Now, how do we use this in the real world tasks?
Let’s put the above vector data into some real life example. Assume we are working with some clothing data and we would like to find products similar to each other. We have three types of apparel: a hoodie, a sweater, and a crop-top. The product data available is as follows:
$$
\begin{matrix}
\text{Product} & \text{Width} & \text{Length} \\
Hoodie & 1 & 4 \\
Sweater & 2 & 4 \\
Crop-top & 3 & 2 \\
\end{matrix}
$$
Note that we are using exactly the same data as in the theory section. But putting it into context makes things a lot easier to visualize. From above dataset, we associate hoodie to be more similar to a sweater than to a crop top. In fact, the data shows us the same thing.
To continue following this tutorial we will need the following Python libraries: pandas and sklearn.
If you don’t have it installed, please open “Command Prompt” (on Windows) and install it using the following code:
pip install pandas pip install sklearn
First step we will take is create the above dataset as a data frame in Python (only with columns containing numerical values that we will use):
import pandas as pd data = {'Sleeve': [1, 2, 3], 'Quality': [4, 4, 2]} df = pd.DataFrame (data, columns = ['Sleeve','Quality']) print(df)
We should get:
Sleeve Quality 0 1 4 1 2 4 2 3 2
Next, using the cosine_similarity() method from sklearn library we can compute the cosine similarity between each element in the above dataframe:
from sklearn.metrics.pairwise import cosine_similarity similarity = cosine_similarity(df) print(similarity)
The output is an array with similarities between each of the entries of the data frame:
[[1. 0.97618706 0.73994007] [0.97618706 1. 0.86824314] [0.73994007 0.86824314 1. ]]
For a better understanding, the above array can be displayed as:
$$
\begin{matrix}
& \text{A} & \text{B} & \text{C} \\
\text{A} & 1 & 0.98 & 0.74 \\
\text{B} & 0.98 & 1 & 0.87 \\
\text{C} & 0.74 & 0.87 & 1 \\
\end{matrix}
$$
Note that the result of the calculations is identical to the manual calculation in the theory section. Of course the data here simple and only two-dimensional, hence the high results. But the same methodology can be extended to much more complicated datasets.
Conclusion
In this article we discussed cosine similarity with examples of its application to product matching in Python.
A lot of the above materials is the foundation of complex recommendation engines and predictive algorithms.
I also encourage you to check out my other posts on Machine Learning.
Feel free to leave comments below if you have any questions or have suggestions for some edits.
The post Cosine Similarity Explained using Python appeared first on PyShark.
Want to share your content on python-bloggers? click here. | https://python-bloggers.com/2020/10/cosine-similarity-explained-using-python/ | CC-MAIN-2021-10 | refinedweb | 1,247 | 59.84 |
AS3935 (community library)
Summary
A library to communicate with and control an AS3935 lightning sensor over I2C.
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
Particle AS3935 (I2C)
A library for communicating with (and hopefully making use of) the AMS Franklin Lightning Sensor.
The library was written and tested on a Particle Photon, but should work on an Arduino with little to no modification.
Usage
Reading data off of the sensor is quite simple, and is easy to get started if you can already connect your AS3935 to your processor.
Connecting
I've used the AS3935 breakout board by Embedded Adventures, though others have had success with breakout boards by other suppliers (it's all the same sensor).
Everything required is available on the breakout board, so you only need a way to connect your processor to the breakout board.
On a Particle
Software
The software on a Particle Photon is quite simple, and short enough to include here:
#include <AS3935.h> // Create the AS3935 object globally AS3935::AS3935 sensor(0x00, D2); void setup() { Serial.begin(9600); Serial.println("Starting...."); // Enable I2C and interrupts sensor.begin(); // Calibrate the sensor, and set the value of the tuning capacitor sensor.calibrate(0x08); // Set a noise floor of 0 sensor.setNoiseFloor(0); } void loop() { // If an interrupt triggered and is waiting for us to do something if(sensor.waitingInterrupt()){ switch(sensor.getInterrupt()){ // If there was a lightning strike case AS3935::INT_STRIKE: Serial.println("Lightning"); break; // If the interrupt was triggered by a disturber, we should mask them case AS3935::INT_DISTURBER: Serial.println("Disturber - masking"); sensor.setMaskDisturbers(true); break; // If the interrupt was caused by noise, raise the noise floor case AS3935::INT_NOISE: Serial.println("Noise"); sensor.raiseNoiseFloor(); break; // This should never execute, but we'll put it here because best practices default: break; } } }
Reference
AS3935
AS3935::AS3935 sensor(0x00, D2);
Instantiate an instance of AS3935 to interact with a sensor.
Arguments are the I2C address and the pin where the interrupt pin is connected.
begin
sensor.begin();
Enables I2C and sets up the interrupt routine. This is normally called in your setup() routine.
calibrate
sensor.calibrate(0x08);
Pass one argument (an unsigned integer) representing the value to set the tuning capacitor to.
reset
sensor.reset();
Reset all of the sensor settings as though it was just powered up.
getInterrupt
reason = sensor.getInterrupt();
Returns an unsigned integer representing the reason an interrupt was triggered.
Calling this method resets
waitingInterrupt() to false. After calling, the
value returned is not available again until an interrupt is read.
Returned values can be compared to constants (reference below) to easily determine what caused the interrupt.
getDistance
distance = sensor.getDistance();
Returns an unsigned integer with the estimated distance to the lightning strike
getNoiseFloor
noisefloor = sensor.getNoiseFloor();
Returns an unsigned integer representing the current noise floor.
setNoiseFloor
sensor.setNoiseFloor(2);
Pass one unsigned integer (ranging 0-7) as an argument representing the noise floor to set.
Returns a boolean value indicating success or failure.
raiseNoiseFloor
sensor.raiseNoiseFloor();
Raise the noise floor by one increment. Returns the new noise floor as an unsigned integer ranging from 0-7. If the noise floor is 7 before calling this, nothing will happen and it will return 7.
lowerNoiseFloor
sensor.lowerNoiseFloor();
Lower the noise floor by one increment. Returns the new noise floor as an unsigned integer ranging from 0-7. If the noise floor is 0 before calling this, nothing will happen and it will return 0.
getMinStrikes
minStrikes = sensor.getMinStrikes();
Get the minimum number of strikes that must be sensed before an interrupt is raised. A value of 255 indicates an error.
setMinStrikes
sensor.setMinStrikes(5);
Set the minimum number of detected lightning strikes required to trigger an interrupt. Valid values are 1, 5, 9, or 16. Returns boolean true if the operation is successful.
getIndoors
indoors = sensor.getIndoors();
Determine if the sensor is configured as indoors or not. Returns boolean true if it's configured as indoors.
setIndoors
sensor.setIndoors(true);
Pass boolean true to set the sensor as being indoors, false for outdoors. Returns true if successful.
getMaskDisturbers
distrubersMasked = sensor.getMaskDisturbers();
Returns boolean true if disturbers are masked, false if they aren't.
setMaskDisturbers
sensor.setMaskDisturbers(true);
Pass boolean true to mask disturbers, false to unmask disturbers. Returns boolean true if successful.
getDispLco
dispLCO = sensor.getDispLco();
Returns boolean true if the local oscillator is exposed on the interrupt pin.
setDispLco
sensor.setDispLco(false);
Pass boolean true to expose the local oscillator on the interrupt pin. This should only be used for tuning and troubleshooting with some sort of instrumentation connected to the interrupt pin.
waitingInterrupt
interruptWaiting = sensor.waitingInterrupt();
Returns true if an interrupt is waiting to be read. It can be reset to false
only by calling
getInterrupt().
Constants
AS3935::INT_STRIKE: The value returned when the sensor detects a lightning strike.
AS3935::INT_DISTURBER: The value returned when the sensor detects a disturber.
AS3935::INT_NOISE: The value returned when the sensor detects noise.
Contributing
Feel free to send pull requests, or file bugs if you discover any. There isn't any automated testing yet, but there hopefully will be soon.
Browse Library Files | https://docs.particle.io/reference/device-os/libraries/a/AS3935/ | CC-MAIN-2022-27 | refinedweb | 884 | 50.94 |
- 24 Jan,.
- 18 Jan, 2014 1 commit
So far, only storage, initialization, repr() and buffer protocol is implemented - alredy suitable for passing binary data around.
- 17 Jan, 2014 1 commit
- 15 Jan, 2014 3 commits
- 14 Jan, 2014 2 commits
- 13 Jan, 2014 1 commit
- 09 Jan, 2014 1 commit
Creating of classes (types) and instances is much more like CPython now. You can use "type('name', (), {...})" to create classes.
- 08 Jan, 2014 3 commits
Use make V=1e make V=1 or set BUILD_VERBOSE in your environment to increase build verbosity. This should fix issue #117
These can be used for any object which implements stream protocol (mp_stream_p_t).
- 07 Jan, 2014 2 commits
- 04 Jan, 2014 4 commits
Now much more inline with how CPython does types.
With MICROPY_EMIT_X64 and MICROPY_EMIT_THUMB disabled, the respective emitters and assemblers will not be included in the code. This can significantly reduce binary size for unix version.
So far, only start and stop integer indexes are supported. Step is not supported, as well as objects of arbitrary types.
- 03 Jan, 2014 2 commits
mpconfig.h will automatically pull mpconfigport.h.
import works for simple cases. Still work to do on finding the right script, and setting globals/locals correctly when running an imported function.
- 02 Jan, 2014 2 commits
termcap is not needed on Linux. Need to work out how to automatically configure the Makefile...
- 01 Jan, 2014 2 commits
- Edd Barrett authored
E.g.: /usr/lib/libreadline.so.4.0: undefined reference to `tgetnum' /usr/lib/libreadline.so.4.0: undefined reference to `tgoto' /usr/lib/libreadline.so.4.0: undefined reference to `tgetflag' /usr/lib/libreadline.so.4.0: undefined reference to `tputs' /usr/lib/libreadline.so.4.0: undefined reference to `tgetent' /usr/lib/libreadline.so.4.0: undefined reference to `tgetstr' Tested on linux too, works.
Readline is GPL, so linking with it casts the binary GPL.
- 30 Dec, 2013 4 commits
- 29 Dec, 2013 2 commits
- 21 Dec, 2013 1 commit).
- 20 Dec, 2013 1 commit
- 17 Dec, 2013 1 commit
- 17 Nov, 2013 1 commit
- 02 Nov, 2013 1 commit
- 22 Oct, 2013 1 commit | https://gitrepos.estec.esa.int/taste/uPython-mirror/-/commits/fcd4ae827171717ea501bf833a6b6abd70edc5a3/py/py.mk | CC-MAIN-2022-27 | refinedweb | 358 | 50.94 |
Wikiversity:FAQ/Categorization
A category is a software feature of MediaWiki. Categories provide automatic indexes that are useful as tables of contents. Together with links and templates they help organize the many pages at Wikiversity. Note: many websites use "tags" to help participants categorize website pages. Wikiversity categories can be thought of as "tags". If you create a new page, "tag" it with a category.
Executive summary
To help structure the contents of Wikiversity there is a system for grouping pages into categories. For example, this page belongs to Category:Help. When a page belongs to one or more categories, this information appears at the bottom of the page (or in the upper-right corner, depending on the skin being used). Capitalisation of category names should follow naming conventions for word casing.
To add a page to a category:
- First check the full list of existing categories to find one that matches your requirements. You can click on a category name to see which pages are already in that category. You should be able to find an existing category to use; there are over 1000 (use the "next" link to page through them).
- Go to the page you want to add the category to and click the "edit this page" tab to get the edit page.
- Add [[Category:category name]] at the bottom of the page. Just substitute category name for the name of your chosen category.
- If there are no suitable categories you can create a new category by substituting category name with the name of your new category. This will add the page to this new category. It will also create a "home page" for this new category.
To be specific, in order to add an article called "Albert Einstein" to the category "People", you would click the "edit this page" tab on the article page "Albert Einstein" to enter the edit page and then add [[Category:People]]. Exactly where doesn't matter, but the Wikipedia policy, for example, is to put it after the article text, but before any interlanguage links.
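The manual edit described above amounts to appending one line of wikitext to the page. As a rough sketch (Python is used purely for illustration; the function name and the simplified duplicate check are assumptions, and real wikitext handling would also need to cope with sort keys and capitalization variants):

```python
def add_category(wikitext, category):
    """Append a category tag at the bottom of the page text,
    unless the page already carries that exact tag."""
    tag = "[[Category:" + category + "]]"
    if tag in wikitext:          # naive check: exact tag only
        return wikitext
    return wikitext.rstrip() + "\n\n" + tag + "\n"

print(add_category("Albert Einstein was a physicist.", "People"))
```

This mirrors the instructions above: the tag goes after the article text, and adding it a second time changes nothing.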
Category pages
Each category has a "home page" that contains editable text and an automatically generated, alphabetical list of links to all pages in that category (in fact ASCII order, see Help:Special page). These category pages are in the Category namespace. New categories can also be created and edited in the same way as any other regular page.
Uncategorized content[edit]
If you find a page without a category and can't think of a suitable category please use [[Category:Uncategorized]]. This will help other people to find the page and provide it with a suitable category.
Putting an item in a category[edit]).
Category page[edit].
- list of images with thumbnails (how many there are is not counted); the first 20 characters of the image name are shown, with an ellipsis).
To create a category page, you must add a colon in front of the Category tag when you set up the page-creation link, to prevent the software from thinking you merely want to add the page you are working from to the category:
[[:Category:Category name]]
Placing the above text on working page will create the link you can use to edit your category page.).
Sort order[edit]. The blank space within a page name is treated as an underscore, and therefore comes after the capitals, and before the lower case letters. However, a "blank space" after the name comes before any character. Thus we have the order PC, PCX, PC Bruno, PCjr.
Sort key[edit] (*).
Sort key of images[edit][edit][edit]).
Using templates to populate categories[edit]
If a template contains the code indicating that it is in a category, this does not only put[edit][edit]
To categorize templates themselves, without the pages that call them, one can use the <noinclude> tag, for example
<noinclude>[[Category:category name]]</noinclude>
Alternatively one can use e.g. {{#ifeq:{{FULLPAGENAME}}|Template:Editthispage|[[Category:category name|{{PAGENAME}}]]|}}
Excluding templates from categories[edit]
Use:
<includeonly>
to keep a template from showing up in a category. Text between
<includeonly>
and
</includeonly>
will be processed and displayed only when the page is being included. The obvious application is:
- Adding all pages containing a given template to a category
Note that the usual update problems apply -- if you change the categories inside a template, the categories of the referring pages won't be updated until those pages are edited. "Related Changes" to a category[edit]
For the "What links here" feature, only the links in the editable part of the page count, not the links to the pages in.
Dynamic page list[edit]
See: Help.
Linking to a category[edit]
If you want to link to a category without the current page being added to it, you should use the link form [[:Category:foobar]] (where foobar is the category name). Note the extra : before Category.
Alternatives for overviews[edit], then via {{category redirect}} template, which creates a soft redirect to the desired category. When this template is used, the Category redirects category will be added. Please note that after all pages were moved from "old" to the "new" category, this should be the only category, where the "old-one" is listed. | https://en.wikiversity.org/wiki/Help:Categories | CC-MAIN-2019-43 | refinedweb | 912 | 51.48 |
c/language/struct
From cppreference.com
< c | language
Revision as of 21:04, 26 February 2013 by Smilingrob (Talk | contribs)
Compound types are types that can hold multiple data members.
Syntax
Explanation
Keywords
Example
Run this code
#include <stdio.h> struct car { char *make; char *model; int year; }; int main() { /* external definition */ struct car c; c.make = "Nash"; c.model = "48 Sports Touring Car"; c.year = 1923; printf("%d %s %s\n", c.year, c.make, c.model); /* internal definition */ struct spaceship { char *make; char *model; char *year; } s; s.make = "Incom Corporation"; s.model = "T-65 X-wing starfighter"; s.year = "128 ABY"; printf("%s %s %s\n", s.year, s.make, s.model); return 0; }
Output:
1923 Nash 48 Sports Touring Car 128 ABY Incom Corporation T-65 X-wing starfighter | http://en.cppreference.com/mwiki/index.php?title=c/language/struct&oldid=46340 | CC-MAIN-2014-42 | refinedweb | 133 | 70.8 |
Create a custom dictionary (SharePoint Server 2010)
Applies to: SharePoint Server 2010
Topic Last Modified: 2015-07-06
Summary: Learn about word breakers, normalizations and thesaurus files, supported and unsupported entries, and supported languages.
A custom dictionary is a file that an administrator creates to specify tokens that the word breaker of a particular language should treat as indivisible at index time and at query time. Custom dictionary files are not provided with the product. You must create a separate custom dictionary for each language for which you want to modify the behavior of a word breaker.
In this article:
Reasons to use a custom dictionary
Rules for creating a custom dictionary
Create a custom dictionary
Copy the custom dictionary to each application server
Stop and restart the SharePoint Server Search 14 service
-
-
To know whether you must have a custom dictionary and what entries it should contain, you must understand the behavior of word breakers. The indexing system uses word breakers to break tokens when it indexes crawled content, and the query processor uses word breakers in queries. In each case, if a custom dictionary exists that supports the language and dialect of the word breaker that is being used, the search system checks for the word in the custom dictionary before it determines whether to use a word breaker for that word. If the word does not exist in the custom dictionary, the word breaker performs its usual actions, which might result in breaking a token into multiple tokens. If the token exists in the custom dictionary, the word breaker does not perform any actions on that token. The following two examples describe typical word breaker behavior and how an entry in the custom dictionary can affect that behavior.
A word breaker might break the token “IT&T” immediately before and after the ampersand (&), resulting in the three tokens “IT”, “&”, and “T”. However, if the token “IT&T” is in the custom dictionary of the same language as the word breaker that is being used, the word breaker does not break that token (at crawl time or query time). If “IT&T” is in the custom dictionary, and if a document does not contain "IT" or "T" but does contain "IT&T", a query that contains "IT" or "T" but not "IT&T" does not return that document in the results set.
Terms like Chemical Abstracts Service (CAS) registry numbers can be affected by word breakers. For example, word breakers typically split numbers that appear before or after a hyphen or other special character from the rest of the number. For example, the CAS registry number for oxygen is 7782-44-7. After word-breaker processing, this CAS registry number is broken into three parts: the numbers 7782, 44, and 7. Adding the CAS registry numbers that appear in a corpus to a custom dictionary directs the search system to index each number without breaking it into parts.
Named-entity normalizations, such as date normalizations, that are ordinarily applied by word breakers are not applied to terms that are in custom dictionaries. Instead, all terms that are in custom dictionaries are treated as a match. This is especially important if you have words or numbers in a thesaurus file. For example, if the CAS registry number 7782-44-7 is part of an expansion set in the thesaurus and the word breaker breaks that number at the hyphens into three separate numbers, the expansion set of which that number is a part might not work as expected. In this case, adding the CAS registry number 7782-44-7 to the custom dictionary of the appropriate language resolves the problem. For information about how to use thesaurus files, see Manage thesaurus files (SharePoint Server 2010).
A custom dictionary is a Unicode-formatted file. Each entry must be on a separate line, separated by a carriage return (CR) and line feed (LF). When you add entries to a custom dictionary, consider the following rules to avoid unexpected results:
Entries are not case-sensitive.
The pipe character (|) cannot be used.
White space cannot be used.
The number sign character (#) cannot be used at the beginning of an entry but it can be used within or at the end of an entry.
Except for the pipe, number sign, and white-space characters previously mentioned, any alphanumeric characters, punctuation, symbols, and breaking characters are valid.
The maximum length of an entry is 128 (Unicode) characters.
The following table shows examples of supported and unsupported entries.
Table 1 – Examples of supported and unsupported entries for custom dictionary files
The maximum limit to the number of entries in a custom dictionary is 10,000. There are no settings available to change this limit. However, we recommend that the total file size of a custom dictionary file does not exceed 2 gigabytes (GB). In practice, we suggest that you limit the number of entries to a few thousand.
Use the following procedure to create a custom dictionary.To create a custom dictionary
Verify that the user account that is performing this procedure is a member of the Administrators group on the local computer.
Log on to a crawl server.
Open a new file in a text editor.
Type the words that you want in the custom dictionary according to the rules stated in Rules for creating a custom dictionary earlier in this article.
On the File menu, click Save As.
In the Save as type list, select All Files.
In the Encoding list, select Unicode.
In the File name box, type the file name in the following format: CustomNNNN.lex, where “Custom” is a literal string, NNNN is the four-digit hexadecimal code of the language for which you are creating the custom dictionary, and lex is the file name extension. For a list of valid file names for supported languages and dialects, see Supported languages later in this article.
In the Save in list, browse to the folder that contains the word breakers. By default, this folder is %ProgramFiles%\Microsoft Office Servers\14.0\Bin.
Click Save.
If there are no other crawl servers or query servers in the farm, go to Stop and restart the SharePoint Server Search 14 service. Otherwise, go to the next procedure, “Copy the custom dictionary to each application server in the farm”.
There must be a copy of the custom dictionary on each application server in the farm.To copy the custom dictionary to each application
Verify that the user account that is performing this procedure is a member of the Administrators group on each application server (that is, each crawl server or query server) in the farm.
On each application server in the farm, copy the new custom dictionary file to the folder that contains the word breakers. By default, this folder is %ProgramFiles%\Microsoft Office Servers\14.0\Bin.
You must restart the SharePoint Server Search 14 service on each application server in the farm.To stop and restart the SharePoint Server Search 14 service on each application server
Verify that the user account that is performing this procedure is a member of the Administrators group on the local computer.
On the Start menu, point to All Programs, point to Administrative Tools, and then click Services.
Right click the SharePoint Server Search 14 service and then click Properties. The Properties dialog box appears.
Click Stop. After the service stops, click Start.
Ensure that the Startup type is not set to Disabled.
Repeat this procedure for each application server (that is, each crawl server and each query server) in the farm.
To apply the custom dictionary to the content index, you must perform a full crawl of the content that contains the tokens that you added to the custom dictionary. For information about performing a full crawl, see Manage crawling (SharePoint Server 2010).
The following table indicates the languages and dialects for which SharePoint Server 2010 supports custom dictionaries. You cannot create a custom dictionary for the language-neutral word breaker. The table includes the language code identifier (LCID) and language hexadecimal code for each supported language and dialect. The first two numbers in the hexadecimal code represent the dialect and the last two numbers represent the language. For languages that do not have separate word breakers for separate dialects, the first two numbers in the language hexadecimal code are always zeros.
Table 2 - Supported languages | https://technet.microsoft.com/en-us/library/cc263242(d=printer).aspx | CC-MAIN-2015-35 | refinedweb | 1,398 | 52.19 |
Frozen Mexico Mixed Vegetable with Carrot Corn Onion etc.
US $700-900 / Metric Ton
12 Metric Tons (Min. Order)
custom frozen mixed vegetables
US $800-1300 / Ton
10 Tons (Min. Order)
Frozen mixed vegetables
1 Ton (Min. Order)
Frozen oriental mixed vegetables
US $750-850 / Ton
10 Tons (Min. Order)
Frozen mixed vegetables
US $400-1000 / Metric Ton
12 Metric Tons (Min. Order)
finished product import china frozen broccoli vegetable of first grade
US $700-1600 / Metric Ton
5 Metric Tons (Min. Order)
Frozen Three Variety Mixed Vegetables Import Green Pea
US $450-1200 / Ton
1 Ton (Min. Order)
Import IQF Frozen Mixed Vegetables For Sale
US $800-1000 / Metric Ton
12 Metric Tons (Min. Order)
Frozen corn vegetable importers
US $11-15 / Carton
1 Forty-Foot Container (Min. Order)
frozen vegetables
1 Metric Ton (Min. Order)
Chinese Frozen mixed vegetable 500g Factory
US $750-850 / Ton
1 Ton (Min. Order)
SWEET POTATO, FROZEN SWEET POTATO, IQF FROZEN SWEET POTATO - HIGH QUALITY FROZENZEN VEGETABLES
US $1-2 / Kilogram
1 Kilogram (Min. Order)
iqf importers of frozen fruit and vegetable
US $800-1300 / Ton
10 Tons (Min. Order)
canned mixed vegetables brands
US $15-20 / Carton
1 Twenty-Foot Container (Min. Order)
Importers of frozen fruit and vegetable, Frozen Broccoli Price
US $500-1200 / Metric Ton
12 Metric Tons (Min. Order)
Fresh frozen Burdock stripes frozen vegetable
10 Metric Tons (Min. Order)
Best price new soybean and frozen vegetables for sale
20 Metric Tons (Min. Order)
frozen vegetables and fruits sweet corn
US $740-960 / Ton
10 Tons (Min. Order)
frozen mix vegetables
US $1-2 / Ton
1 Ton (Min. Order)
Frozen mixed vegetables
US $600-700 / Metric Ton
China Importers of frozen fruit and vegetable
US $800-980 / Ton
1 Ton (Min. Order)
chinese frozen vegetables
US $500-1000 / Metric Ton
3 Metric Tons (Min. Order)
Frozen Mixed Vegetables
US $1-2 / Metric Ton
22 Metric Tons (Min. Order)
Frozen bulk carrot for sale
US $300-500 / Metric Ton
15 Metric Tons (Min. Order)
Frozen mix vegetables IQF fruits
US $900-1200 / Metric Ton
12 Metric Tons (Min. Order)
IQF Frozen Pumpkin Dices
1 Metric Ton (Min. Order)
Fresh vegetables importer fresh red onion export to dubai good seller
US $600.0-700.0 / Tons
5 Tons (Min. Order)
Lower Price Importers of frozen fruit and vegetable Frozen IQF Broccoli
US $500-800 / Ton
5 Tons (Min. Order)
wholesale frozen import fresh vegetables IQF manufacturer
US $700-1250 / Ton
1 Ton (Min. Order)
Import China Healthy Iqf Organic Frozen Mixed Vegetable
US $450-1200 / Ton
1 Ton (Min. Order)
Frozen vegetable
1 Metric Ton (Min. Order)
import china best food premium quality fresh fruit and vegetables
US $800-1500 / Metric Ton
5 Metric Tons (Min. Order)
frozen okra cut in frozen vegetable
US $800-1000 / Metric Ton
18 Metric Tons (Min. Order)
FROZEN PURPLE YAM, AMAZING TASTE, BEST VEGETABLE FOR HUMAN, BEST PRICE FOR NOW
US $1-3 / Kilogram
10000 Kilograms (Min. Order)
canne food kinds of cutting vegetables with high quality
US $1000-1100 / Metric Ton
1 Twenty-Foot Container (Min. Order)
Frozen okra cut&sliced and iqf vegetable
24 Metric Tons (Min. Order)
egyption Mixed Vegetables
US $0.1-1 / Ton
1 Ton (Min. Order)
- About product and suppliers:
Alibaba.com offers 718 import frozen vegetable products. such as free samples. There are 822 import frozen vegetable suppliers, mainly located in Asia. The top supplying countries are China (Mainland), Egypt, and Vietnam, which supply 48%, 38%, and 9% of import frozen vegetable respectively. Import frozen vegetable products are most popular in Western Europe, Southern Europe, and Eastern Europe. You can ensure product safety by selecting from certified suppliers, including 107 with ISO9001, 60 with Other, and 30 with BRC certification.
Buying Request Hub
Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE
Do you want to show import frozen vegetable or other products of your own company? Display your Products FREE now! | http://www.alibaba.com/showroom/import-frozen-vegetable.html | CC-MAIN-2018-09 | refinedweb | 661 | 75.81 |
On Lisp -> Clojure (chapter 2)
Inspired by Stuart Holloway‘s excellent series porting Practical Common Lisp to Clojure and Ola Bini‘s bad-ass port of Paradigm’s of Artificial Intelligence to Ruby. I have started my own quest to port Paul Graham‘s On Lisp to Clojure.
Posts in this series: ch. 2, ch. 2 redux, ch. 3, ch. 4, ch. 5
I have made it through chapter 2 and am ready to start the coding for chapter 3. I will post when the code is “complete”, but my progress can be followed on Github.
; pg. 10 (defn dbl [x] (* x 2)) (dbl 39) (dbl 17872687642786723868743216782) ; pg. 11 (= dbl (first (list dbl))) ; pg. 12 (fn [x] (* x 2)) ((fn [x] (* x 2)) 20) ; pg. 13 (def dbl (fn [x] (* x 2))) ; pg. 14 (map (fn [x] (+ x 10)) '(1 2 3)) (map + '(1 2 3) '(10 100 1000)) ;; Clojure uses the Java's Comparator on the sort function ;; so there is no need to supply the < function. In fact, ;; if you did, an exception would be thrown. (l@@k why?) (sort '(1 4 2 5 6 7 3)) ; pg. 15 ;; Because Clojure is a Lisp-1, funcall is not needed. Instead, ;; we just call directly as (f) (defn remove-if [f lst] (if (nil? lst) nil (if (f (first lst)) (remove-if f (rest lst)) (cons (first lst) (remove-if f (rest lst)))))) (remove-if even? '(1 2 3 4 5)) (remove-if nil? '(1 2 nil 3 nil)) (remove-if (fn [x] (if (> x 5) nil x)) '(3 4 5 6 7)) ; pg. 16 (let [y 7] (defn scope-test [x] (list x y))) (let [y 5] (scope-test 3)) ;; no shocker here ; pg. 18 (defn list+ [lst n] (map (fn [x] (+ x n)) lst)) (list+ '(1 2 3) 10) ;; ;; This is surprisingly difficult given that Clojure does not allow the ;; modification of local variables defined in a let. Only vars (globals) ;; and class fields can be modified with the (set!) function. ;; Therefore, I had to hack it so that the closed counter is an array ;; of one integer element. Therefore, it is the array that is modified ;; and not the count. The point being, if you want to create closures ;; for functions modifying local state, then you have to use a mutable object ;; to do so ;; (let [counter (to-array [0])] (defn new-id [] (aset counter 0 (inc (aget counter 0)))) (defn reset-id [] (aset counter 0 0))) ;; On the other hand, this is extremely easy (defn make-adder [n] (fn [x] (+ x n))) (def add10 (make-adder 10)) (def add42 (make-adder 42)) (add10 1) (add42 100) (add42 (add10 1)) ; pg. 19 ;; Again we run into the immutable local binding feature. Since I already ;; solved this problem before, getting it to work for this case was ;; a piece of cake. 
(let [n (to-array [0])] (defn make-adderb [m] (aset n 0 m) (fn [x & change] (if change (aset n 0 x) (+ (aget n 0) x))))) (def addx (make-adderb 1)) (addx 3) (addx 100 true) (addx 3) ; pg. 22 ;; Clojure handles this very nicely. (defn count-instances [obj lsts] (defn instances-in [lst] (if (list? lst) (+ (if (= (first lst) obj) 1 0) (instances-in (rest lst))) 0)) (map instances-in lsts)) (count-instances 'a '((a b c) (d a r p a) (d a r) (a a))) ;; Tail-recursive find-if ;; returns the first instance satisfying (f) (defn our-find-if [f lst] (if (f (first lst)) (first lst) (our-find-if f (rest lst)))) (our-find-if even? '(1 3 5 7 9 11 14)) ; pg. 23 ;; Tail-recursive our-length (defn our-length [lst] (defn rec [lst acc] (if (nil? lst) acc (rec (rest lst) (inc acc)))) (rec lst 0)) (our-length (range 100)) (our-length (range 0 100 2)) ; pg. 24 (defn triangle [n] (defn tri [c n] (if (zero? n) c (tri (+ n c) (- n 1)))) (tri 0 n)) (triangle 10) (triangle 30000) ;; one more zero blew my stack
-m
7 Comments, Comment or Ping
Chouser
This is cool. “On Lisp” is what convinced me Lisp was actually worth understanding, and without that I would never have given Clojure a chance.
A couple notes on your code…
remove-if uses “regular” recursion, but because the JVM does not optimize away tail calls the way Common Lisp (usually) does, this could fail for very long lists. However, if you just replace “cons” with “lazy-cons” and the other recursive call with “recur”, you’ll have a beautiful lazy sequence function that will never blow the call stack.
I’m sure you’ve noticed that Graham is taking some pains to make tail-recusive versions of his functions. In general to take advantage of this you’ll want to use “recur” instead of the tail call.
It’s more idiomatic (and more succinct!) to say (if (seq lst) …) than (if (nil? lst) nil …) Also better to use (let [tri (fn [c n] …)] (tri 0 n)) than to have a defn nested inside another defn, since the inner defn is still defining a namespace-global name, even though it’s nested.
Finally I noticed your clever use of Java arrays to get mutability. This is fine, in that you’ll often find yourself in Clojure working with mutable Java objects, and you just have to deal with it. However, Clojure also provides some very nice multi-threading features if you use refs of immutables instead of mutable objects.
In your case, you could replace (let [counter (to-array [0])] …) with (def counter (ref 0)). Then use @counter instead of aget, and use (dosync (set-ref …)) instead of aset. For more details on refs, see
Anyway, don’t let me discourage you — I’m sure working through On Lisp will be a great way for you to learn Clojure, and will likely help others too. I’m very interested to see the DSLs of the later chapters in Clojure. Keep it up!
–Chouser
Sep 30th, 2008
fogus
Excellent post! Far from being discouraged, I am enthusiastic by your recommendations. I am far less interested in being correct than in doing it correctly (if that makes any sense at all).
I was a little discouraged that my implementation of remove-if was ‘dangerous’ to the stack, and I delighted in your pointers in shoring that up. Thank you. It’s clear that I need to study
recurmore closely.
I am sure that, at least in the early stages, I will make an abomination of the Clojure idioms, but I hope that over time they will become a part of my lexicon.
My initial versions of the the closed counter and make-adderb attempted to use refs, but I couldn’t quite get it to work. I will use your advice and attempt to re-implement them, but will most likely keep the old hack laying around as a counter-example/don’t-do-this case.
Again, thank you for the tips for this Clojure n00b. -m
Sep 30th, 2008
Dan
The reason (sort < ‘(1 2 3 4 5)) fails is because the < function returns a boolean, not 0, -1 or 1. So if you create a function that follows that requirement:
then you can use it with sort:
user=> (sort compare-count ‘([2 2] [5 5 5 5 5] [4 4 4 4] [1] [3 3 3])) ([1] [2 2] [3 3 3] [4 4 4 4] [5 5 5 5 5])
P.S. Hope my formatting comes out okay.
Oct 1st, 2008
fogus
Dan, Thanks for the explanation. I had actually forgotten to go back and L@@k the reason, you saved me some work. ;)
-m
Oct 1st, 2008
Gavin Sinclair
Nice work. Looking forward to reading the other installments.
Jan 13th, 2009
Sidhant Godiwala
Thanks for the link, I’ve just posted my version of the same chapter and I checked with yours to see what the differences are.
Its really nice to have someone else to compare with and learn from :)
Nov 18th, 2009
Harish
The above remove-if give error below
user=> (remove-if even? ‘(1 2 3 4 5)) IllegalArgumentException Argument must be an integer: clojure.core/even? (core.clj:1322)
Oct 28th, 2012
Reply to “On Lisp -> Clojure (chapter 2)” | http://blog.fogus.me/2008/09/26/on-lisp-clojure-chapter-2/ | CC-MAIN-2018-43 | refinedweb | 1,380 | 78.28 |
The Changelog – Episode #73
tmux, dotfiles, and Text Mode
with Brian Hogan and Josh Clayton
Featuring
Wynn sat down with Brian Hogan and Josh Clayton to talk about tmux, dotfiles, and the joys of text mode.
Featuring
Notes & Links
- Brian Hogan speaker, trainer, and author of _Tmux: Productive, Mouse Free development, out now from PragProg.
- Josh Clayton is a developer at Thoughtbot.
- Factory Girl - fixture replacement for Ruby.
- tmux is a terminal multiplexer similar to GNU screen.
- tmuxinator helps you manage tmux sessions.
- taskpaper.vim - Vim interface for Taskpaper.
- Josh’s dotfiles are extensive.
- A patch to reattach to user namespace in tmux.
- Palette lets you write Vim color schemes with Ruby
- Evergreen - Run Jasmine JavaScript unit tests, integrate them into Ruby applications
- The latest iTerm2 ships with tmux integration.
- tslime.vim is a simple vim script to send portion of text from a vim buffer to a running tmux session.
- vim-turbux - Ruby testing with tmux.
- Justin Smestad turned Wynn onto tmux for pair programming.
- Derick Bailey from Watch Me Code.
- pair.io gives you a one-button, collaboration-friendly dev environment for your GitHub repo.
- Jesse Dearing is the unnamed “DevOps guy” at Pure Charity.
- Thoughtbot has a company-wide dotfiles repo.
- Josh rolls his own Vim setup with Tim Pope’s pathogen.
- Brian uses TTYtter, is a terminal-based Twitter client, Wynn uses Earthquake.
- Josh likes irrsi for IRC.
- Brian likes Alpine over mutt for mail.
- Search GitHub for “tmux.conf.”
- Zach Holman says dotfiles are meant to be forked.
- Zach’s own dotfiles.
- Yan Pritzker’s dotfiles are opinionated.
- Josh says that if you don’t think your dotfiles are the best out there, you’re doing it wrong. (29:55)
- Joe Ferris at Thoughtbot inspired Josh’s dotfiles.
- Brian and Josh say Janus and oh-my-zsh are great to get started, but you need to understand your dotfiles.
- Wynn uses this shell function to list colors to put into his tmux config.
- Dotshare is web site to share dotfile configs plus screenshots.
- Pianobar is text-based command line interface for Pandora.
- Wynn uses shell.fm for Last.fm.
- Be sure and check out the Tmux Crash Course.
NEW: Humans Present: tmux a Thoughtbot Workshop. | https://changelog.com/podcast/73 | CC-MAIN-2020-50 | refinedweb | 370 | 69.89 |
import "gioui.org/widget"
Package widget implements state tracking and event handling of common user interface controls. To draw widgets, use a theme packages such as package gioui.org/widget/material.
buffer.go button.go checkbox.go doc.go editor.go enum.go label.go
A ChangeEvent is generated for every user change to the text.
Click represents a historic click.
type Editor struct { Alignment text.Alignment // SingleLine force the text to stay on a single line. // SingleLine also sets the scrolling direction to // horizontal. SingleLine bool // Submit enabled translation of carriage return keys to SubmitEvents. // If not enabled, carriage returns are inserted as newlines in the text. Submit bool // contains filtered or unexported fields }
Editor implements an editable and scrollable text area.
CaretCoords returns the x & y coordinates of the caret, relative to the editor itself.
CaretPos returns the line & column numbers of the caret.
Delete runes from the caret position. The sign of runes specifies the direction to delete: positive is forward, negative is backward.
func (e *Editor) Events(gtx *layout.Context) []EditorEvent
Events returns available editor events.
Focus requests the input focus for the Editor.
Focused returns whether the editor is focused or not.
Insert inserts text at the caret, moving the caret forward.
Layout lays out the editor.
Len is the length of the editor contents.
Move the caret: positive distance moves forward, negative distance moves backward.
NumLines returns the number of lines in the editor.
SetText replaces the contents of the editor.
Text returns the contents of the editor.
Layout adds the event handler for key.
Value processes events and returns the last selected value, or the empty string.
type Label struct { // Alignment specify the text alignment. Alignment text.Alignment // MaxLines limits the number of lines. Zero means no limit. MaxLines int }
Label is a widget for laying out and drawing text.
func (l Label) Layout(gtx *layout.Context, s text.Shaper, font text.Font, size unit.Value, txt string)
A SubmitEvent is generated when Submit is set and a carriage return key is pressed.
Package widget imports 17 packages (graph) and is imported by 9 packages. Updated 2020-03-28. Refresh now. Tools for package owners. | https://godoc.org/gioui.org/widget | CC-MAIN-2020-16 | refinedweb | 362 | 53.98 |
hey guys. i have a linked list set out like this
struct node { string bookTitle; string *authors; int nAuthors; node *next; }; node *start_ptr;
and i need to sort the linked list based on bookTitle alphabetically. how would i go about doing this?
Well....first you need to decide on a sorting algorithm. There are TONS of different ways to sort, I saw a post about a bubble sort, which is a very easy-to-write but inefficient sort, you could do some research about insertion sorts, quick sorts, merge sorts...Like I said, there are tons of sorts. You need to weigh the advantages and disadvantages of each to meet your needs (i.e. speed and efficiency in terms of memory versus ease of writing). I don't know if you have discussed order notation in your CS studies yet...if not then maybe its not important, just choose an easy sorting algorithm...
Secondly, you will need a comparison function...I'm not familiar with standard string functions for C++ (I tended to just implement them as I needed in C), but I would imagine there is a strcmp function somewhere...(you can likely use this to compare the authors)...in any case, strcmp is not difficult to implement.
It might be a useful exercise to think about the type of data you will be comparing...in this case its just an authors name, so you probably don't need to worry too much...(I just mentioned this because the strcmp function, depending on how it is implemented, takes time, which is sometimes neglected, and in certain cases, it would be a useful exercise to take this into account when choosing and optimizing the sort..I'm sorry I don't know if I just made any sense...getting tired...)
Then I would suggest that the actual sorting be done by manipulating pointers to save space...just make sure you don't have any memory leaks!!!
Sorry if that is kind of vague...but I'm starting to get tired, no time to write code examples...should lead you in a direction to start.
The nodes might be easier to sort if you put the data in another structure
struct data { string bookTitle; vector<string> authors; }; struct node { struct data* pData; node *next; }; node *start_ptr
Now when a swap needs to take place all you have to do is swap the data pointers and leave all the node pointers alone.
Also, don't use a pointer to a std::string object. It doesn't save you a thing, and actually causes more trouble than its worth. If you want an array of authors then use a vector. And if you use vectors you don't need
int nAuthors; because the vector class keeps track of that value.
>>we cant use anything from STL
Ok, in that case you can't use std::string objects either.
struct data { char* bookTitle; char **authors; int nAuthors; }; struct node { struct data* pData; node *next; }; node *start_ptr
Ok, so now what's the problem ? I forgot. Oh yea, I remember now -- how to sort a linked list. If you know how to sort a normal array then its not all that hard to sort the linked list. With the new node class I posted all you have to swap is pData pointers.
Here is a working example of one way to sort a linked list. It adds 5 nodes to the linked list then sorts the list based on book title.
#include <iostream> #include <string> using namespace std; struct data { string bookTitle; string *authors; int nAuthors; data() { authors = 0; nAuthors = 0; } ~data() { if(authors) delete[] authors; } }; struct node { struct data* pData; node *next; node() { next = 0; pData = 0; } ~node() { if(pData) delete[] pData; } }; node *head = NULL; node* AddHead(data* pData) { node* newNode = new node; newNode->pData = pData; newNode->next = head; head = newNode; return newNode; } void SortList() // This just uses the standard bubble sort. { node* i; node* j; for(i = head; i->next != NULL; i = i->next) { for(j = i->next; j != NULL; j = j->next ) { if( i->pData->bookTitle > j->pData->bookTitle) { data* tmp = i->pData; i->pData = j->pData; j->pData = tmp; } } } } int main() { string names[5] = { "James", "King", "Arnold","Bellairs","Johnson" }; struct data* pData; // add 5 nodes to the linked list for(int i = 0; i < 5; i++) { pData = new data; pData->bookTitle = names[i]; AddHead(pData); } // sort them by book title SortList(); // print them to the console screen node* curr = head; while(curr) { cout << curr->pData->bookTitle << "\n"; curr = curr->next; } // delete the linked list curr = head; while(curr) { node* tmp = curr; curr = curr->next; delete tmp; } head = NULL; }
thanks i'll go through it :-)
what's with the data() and ~data() in the data node? im guessing these are the constructor/destructors for the structure? i didn't know you could do that with structures i thought only classes!
edit: now it is kicking up on the constructor of my Library class highlighting:
Library::Library() { start_ptr = NULL; }
it highlights the { for some reason. it says "new types may not be defined in a return type".
my bad.....at the end of my class i didnt have ;
lol.
edit again: program crashed now when i try to add to the node list.
void Library::add(string title, string authors[], int nAuthors) { node *temp = new node; //store input data into temp node temp->pdata->bookTitle = title; temp->pdata->authors = new string[nAuthors]; //make enough space for all of the authors temp->pdata->nAuthors = nAuthors; for (int i = 0; i<nAuthors; ++i) //store all of the authors { temp->pdata->authors[i] = authors[i]; } temp->next = NULL; if (start_ptr == NULL) //add to the linked list start_ptr = temp; else { node *p = start_ptr; while (p->next != NULL) p = p -> next; p->next = temp; } cout << "\n================ FINISHED STORING INFORMATION ================\n" << endl; }
that's my add function. going through it now
>> i didn't know you could do that with structures i thought only classes!
In c++ structures are almost identical to classes.
>>temp->pdata->bookTitle = title;
You forgot to allocate pdata too.
temp->pdata = new data; ... | https://www.daniweb.com/programming/software-development/threads/123288/sorting-a-linked-list | CC-MAIN-2017-17 | refinedweb | 1,018 | 69.62 |
Airflow
Airflow for hands-off ETL
Almost exactly a year ago, I joined Yahoo, which more recently became Oath.
The team I joined is called the Product Hackers, and we work with large amounts of data. By large amounts I meant, billions of rows of log data.
Our team does both ad-hoc analyses and ongoing machine learning projects. In order to support those efforts, our team had initially written scripts to parse logs and run them with cron to load the data into Redshift on AWS. After a while, it made sense to move to Airflow.
Do you need Airflow?
- How much data do you have? A lot
- Do you use cron jobs for ETL? How many? Too many
- Do you have to re-run scripts manually when they fail? How often? Yes, often enough to be a pain point
- Do you use on-call shifts to help maintain infrastructure? Unfortunately, we did
What exactly is Airflow?
Airflow is an open-source python library. It creates Directed Acyclic Graphs (DAGs) for extracting, transforming, and loading data.
‘Directed’ just means the tasks are supposed to be executed in order (although this is not actually required, tasks don’t even have to be connected to each other and they’ll still run). ‘Acyclic’ means tasks are not connected in a circle (although you can write loops that generate tasks dynamically, you still don’t want circular dependencies).
A DAG in this case is a python object made up of tasks. A typical DAG might contain tasks that do the kinds of things you might do with cron jobs:
get logs –> parse logs –> create table in database –> load log data into new table
Each DAG step is executed by an Operator (also a python object).
Operators use Hooks to connect to external resources (like AWS services).
Task instances refer to attempts to run specific tasks, so they have state that you can view in the dashboard. The state tells you whether that tasks is currently executing, succeeded, failed, skipped, or waiting in the queue.
Some tips if you’re going to use Airflow
Make your jobs idempotent if possible
My team has a couple different types of tables that we load into Redshift. One is the type we call ‘metadata’, which is typically just a simple mapping that doesn’t change very often. For this type of table, when it does change, it’s important to drop the old table and re-create it from scratch. This is easier to manage with separate tasks for each SQL step, so the DAG has the following steps:
get_data –> drop_old_table –> create_new_table –> load_data
This way, if any of the steps fail, we can re-start from there, and it doesn’t matter if the step was partially completed before it failed.
The other kind of table we have is an event table, and those are loaded with fresh data every day. We retain the data for 3 days before we start running out of space on the Redshift cluster. This kind of table doesn’t really need a drop_old_table step, because the table name includes the date (which makes it easier to find the table you want when you’re running queries). However, when we create these tables, we still want to make sure we don’t create duplicates, so in the create step we check to see if the table already exists.
Get AIRFLOW_HOME depending on where you’re running
If you want a really stable build that requires the least amount of hands-on maintenance, do yourself a favor and ‘productionize’ your setup. That means you’ll want to run Airflow in at least 3 places:
- In a virtual environment on your local machine (we use Docker with Ansible)
- In a virtual environment in your continuous integration system (we use Jenkins)
- In a virtual environment on your production host (we use virtualenv with python 3)
Note that Airflow doesn’t make this easy, so I wrote a little helper script to make sure Airflow has the right configuration files and is able to find the DAGs, both of which are dependent on using the correct AIRFLOW_HOME environment variable.
Here’s the TL;DR:
#If AIRFLOW_HOME environment variable doesn’t exist, it defaults: os.getenv('AIRFLOW_HOME', '~/airflow') #It’s really useful to always check where the code is running: homedir = os.getenv('HOME') #If it’s on Jenkins, there’s an environment variable that gives you the path for that: jenkins_path = os.getenv('JENKINS_HOME', None) #In the Jenkinsfile (without Docker), we’re doing this: withEnv(['AIRFLOW_HOME=/br/airflow']) cp -r $PWD/dags $AIRFLOW_HOME/ #If you’re running tests locally, there’s a helper that I stole from inside Airflow’s guts: import airflow.configuration as conf conf.load_test_config() os.environ['TEST_CONFIG_FILE'] = conf.TEST_CONFIG_FILE
Write unit tests for your Operators and your DAGs
I hadn’t seen anyone doing this for Airflow, but I write tests for all my python code, so why should Airflow be any different?
It’s a little unintuitive, because Airflow DAG files are not like regular python files. DAG objects have to be at the top level, so the way I got around this was to grab the dag file and then get each of the task objects as attributes.
I wrote the tests for the Operators so that they could be easily re-used, since most of our DAGs have similar tasks. This also lets us use unit tests to enforce best practices.
class TestPostgresOperators: """ Not meant to be used alone For use within specific dag test file """ @classmethod def setUp(cls, dagfile): cls.dagfile = dagfile def test_droptable(self, taskname='dropTable'): ''' validate fields here check retries number :param taskname: str ''' drop = getattr(self.dagfile, taskname) assert(0 <= drop.retries <= 5) assert(drop.postgres_conn_id=='redshift')
Then these ‘abstract tests’ get instantiated in the test file for a particular DAG, like this:
import advertisers_v2 from test_dag_instantiation import TestDAGInstantiation from conftest import unittest_config from test_postgres_operators import TestPostgresOperators from test_mysql_to_redshift import TestMySQLtoRedshiftOperator mydag = TestDAGInstantiation() mydag.setUp(advertisers_v2,unittest_config=unittest_config) mydag.test_dagname() mydag.test_default_args() postgres_tests = TestPostgresOperators() postgres_tests.setUp(advertisers_v2) postgres_tests.test_droptable() postgres_tests.test_createtable() mysql_to_redshift_tests = TestMySQLtoRedshiftOperator() mysql_to_redshift_tests.setUp(advertisers_v2) mysql_to_redshift_tests.test_importstep()
Doing it this way makes it ridiculously easy to set up tests, and they can still be parameterized however you want, to test customizations as needed.
Some other fun tidbits
If you’re using XCOMs, the docs are a little bit out of date. This took me a while to figure out, so hopefully it helps save someone else the same pain. Note that this is for version 1.8 (not sure if anything is changing with XCOMs in the newer versions).
The xcom values are actually stored in the
context object, so when you go to push them you have to explicitly grab the task instance object to make that work.
Inside the task where you’re pushing:
task_instance = context.get('task_instance') task_instance.xcom_push(name_of_xcom_key, name_of_xcom_value)
And then inside the task where you’re pulling, you can use the jinja macro with the key name:
"{{ task_instance.xcom_pull(task_ids=name_of_task_that_pushed, key=name_of_xcom_key) }}" | https://szeitlin.github.io/posts/airflow/airflow-for-hands-off-etl/ | CC-MAIN-2022-27 | refinedweb | 1,177 | 60.14 |
On Tuesday 16 August 2005 11:52, Amy Griffis wrote: > Hello, > > I've been taking a look at the auditfs code in U2, and I've noticed an > issue with the path-based watching. In U2, the path-based watching > code only keeps tabs on the parent of given user watch, instead of > watching the entire path back to the filesystem root. > > This means that if a path component beyond the user watch's parent > changes, the recreation of the object at the watched path will not be > caught. Any subsequent events on the object at the watched path will > also not be caught. > > For example: > > # auditctl -w /one/two/three/four > # mkdir -p /one/two/three > # :> /one/two/three/four > # echo "hello world" > /one/two/three/four > > <audit records generated> > > # mv /one/two /one/too > # mkdir -p /one/two/three > # :> /one/two/three/four > # echo "hello world" > /one/two/three/four > > <no audit records generated> > > Is this a known limitation? It is known. In a CAPP environment, this sort of trickery will not come up. To do what you want to do, the logic would get extremely complicated and perhaps one day it'll be doable (upstream). Because we're storing the information _in_ the file system we're limited on how much state we can keep. The original way we had planned on doing the logic was frowned upon because it required a persisent (and potentially sizeable) map of a file system depicting where all the watched locations are (and I believe there were some namespace issues as well). This way each component of the watched path in the actual file system could be mapped all the way up to the root directory. -tim > > Amy > > -- > Linux-audit mailing list > Linux-audit redhat com > > > | https://www.redhat.com/archives/linux-audit/2005-August/msg00032.html | CC-MAIN-2015-14 | refinedweb | 295 | 66.17 |
"Didier GARCIN" <dg.recrutement31@...> writes:
>
No, that is actualy one of the stupid daemons that are running around
needing to poke their nose into everything. I think in this case it is
hal looking into every new mountpoint. But could be some other
daemon. I had the same behaviour last year and stooping something made
the lookups go away.
MfG
Goswin
John Muir <john@...> writes:
> On 2010-01-14, at 8:20 AM, Miklos Szeredi wrote:
>>>>>> ]Is it possible to activate a write-cache in the FUSE kernel module, so
>>>>>> that the file system gets large write requests even if the application
>>>>>> works with a small block size?
>>>>
>>>> What is missing to make that possible?
>>>
>>> Not much, actually. One missing piece is allowing the fuse module to
>>> cache mtime updates and flush them to the filesystem when the file is
>>> synced or closed.
>
> Jiffies in the fuse file, added argument to the release/flush?
>
>> Oh, and there are issues with space reservation. Normally write
>> reserves space or returns ENOSPC if there's no free space. But with
>> fuse, for latency issues, we'd really need some sort of
>> pre-reservation so that a request is not necessary for each write.
>
> That is more complicated for sure, and might not be worth it all of the time given the overhead of making that request to the file-system compared with just sending the write immediately (that would depend on the file-system and the maximum size of a write of course).
>
> On the other hand, the result could also be part of fsync/flush/release (if FUSE was updated to use the result of the release). I mean, is anything guaranteed with respect to free space if you aren't using O_DIRECT or O_SYNC until you fsync/flush/release? For my purposes I'd be comfortable with that limitation, although I can see it as a major difference with other file-systems.
>
> John.
The choice could be simple: Let the user decide.
If the user mounts with big writes enabled then writes will succeed
and only later give an error on fsync/flush/release.
If the user needs to know ENOSPC on writes directly then he can mount
with small writes.
Adding space reservation to fuse might be nice but not strictly
neccessary before alowing big writes.
MfG
Goswin
Jan-Benedict Glaw <jbglaw@...> writes:
> On Sun, 2009-12-27 12:59:30 +0100, Stef Bon <stefbon@...> wrote:
>>
>> I've got to mention that I personally do not like constructions like:
>>
>> path_copy=strdup(path);
>> if ( ! path_copy ) { ....}
>>
>> I understand what's happening here, and I've seen it often in C (and
>> even bash) programm's, and it's a check
>> path_copy is defined, but it looks like it a check as if it's boolean
>> (and false) which is not the case.
>>
>> Better would be imho :
>>
>> if ( path_copy != NULL ) { ..... }
>
> It's longer--and it's exactly the same.
>
>> This is correct and has the same effect?
>
> It is exactly the same, probably the compiler will even generate the
> exact same binary code. However, it's longer and by convention, about
> every C coder will use the shorter form.
Or rather
path_copy=strdup(path);
if (path_copy) {
give pretty error
goto clean_up;
}
name = basename(path_copy);
Esspecially when you have multiple such checks the extra indentations
when using '!' become bothersome.
>> the about locking. I've seen program's using this pthread_lock call.
>> This is logical and complicated at the same time for me. Is this
>> necessary in Fuse modules?
>
> Maybe not right now. But if you write really good code, you separate
> functions by the problems they're solving into different source files.
> Thus, this *very* function may not need any locking while being
> attached to your current FUSE development. But what, if this source
> file is copied to another project--which uses threads a lot?!
>
> Better think about it now. If you know that for this fuse project, you
> don't need locking, put it into ifdefs or conditinal variables (in
> both cases, it'll be either not compiled in in the first run, or
> optimized out afterwards.) But keep the /problem/ in mind.
Actualy for that one does not using locking but avoids functions that
aren't thread safe in the first place.
I believe the GNU basename() always returns a pointer inside the
argument and not a static buffer and is thread safe. GNU basename()
also never modifies the argument so no strdup needed. So use
#define _GNU_SOURCE
#include <string.h>
>> But here, when doing in the function
>>
>> path_copy=strdup(path);
>> basename_part = basename(path_copy);
>>
>> The vars path_copy and basename_part are unique for a thread, and not
>> used or mixed in any way between threads, right? But as you write in
>
> Right. As long as they're "automatic" variables, placed on the stack.
>
>> your reply, it's the function basename which may overwrite static
>> memory. So as I understand basename may overwrite things, right? Is
>> this what they call "not multi thread safe"? Is there a list of
>> function calls of some sort which are not multi thread safe?
>
> Exactly! `basename' is not MT-safe. It *may* have some static local
> memory, write to it and return a pointer to it to it's caller.
>
> If now, in a multi-threaded environment, another thread calls basename
> at about the same time, the second invocation may write something
> different to that internal memory--to which the first caller just got
> a pointer returned! First caller might see the new content, without
> knowing that it changed since his call.
>
> This is why you serialize (with locks) calls to non-MT-safe functions,
> save their results away and open the lock again.
>
>> And the name of the lock (basename_dirname_mutex) should be the same
>> of course in the whole multi thread program?
>
> It's not about the /name/. Each source file could have:
>
> static pthread_mutex_t foo;
>
> That means that there are /several/ "foo"s, which share a name, but
> nothing else. (Especially not their locking functionality.) You have
> to use _one_ common lock for each function (or group of function) that
> share _one_ common property. (Think about getpwend/setpwent/endpwent.)
>
> MfG, JBG
It is preferable to not need to lock as locking is a rather slow
operation.
MfG
Goswin
Daniel Garcia Coego wrote:
> Didier GARCIN wrote:
>>
>>
>> ----- Original Message ----- From: "Daniel Garcia Coego"
>> <flamealchemist86@...>
>> To: <fuse-devel@...>
>> Sent: Wednesday, January 20, 2010 12:07 PM
>> Subject: [fuse-devel] Lots of readdir and getattr when calling
>> userspaceprogram
>>
>>
>>> Hi, I'm implementing a network FS and I have getattr and readdir
>>> done. When
>>> I run the program in debug mode I see that fuse calls lots of
>>> readdirs and
>>> getattrs and the mount doesn't appear on desktop or Places (Ubuntu)
>>> until it
>>> finishes. Is this a normal behaviour? It keeps calling getattr for
>>> folders
>>> and files that were already inspected and it gets worse the more
>>> folders and
>>> files are on the remote folder I want to mount. I noticed too that
>>> for some
>>> reason when I enter some folder out of the mountpoint fuse makes
>>> callbacks,
>>> could this be related?
>>>
>>> Sorry if it is a noobish question but I can't figure out what's
>>> happening.
>>>
>>> Best regards.
>>> ------------------------------------------------------------------------------
>>>
>>> Throughout its 18-year history, RSA Conference consistently attracts
>>> the
>>> world's best and brightest in the field, creating opportunities for
>>> Conference
>>> attendees to learn about information security's most important
>>> issues through
>>> interactions with peers, luminaries and emerging and established
>>> companies.
>>>
>>> _______________________________________________
>>> fuse-devel mailing list
>>> fuse-devel@...
>>>
>>
>>
> Hi Didier and thanks for the reply. Still haven't figured it out. My
> guess is that I'm messing up with something that makes VFS to ask over
> and over again for the same thing, otherwise I think it doesn't make
> sense to read the root dir like 20 times one after the other. Could it
> be that more operations are needed and as I only have for the moment
> getattr and readdir VFS goes crazy asking for things?
>
> Regards
>
> Dani
>
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/fuse/mailman/fuse-devel/?viewmonth=201001&viewday=21 | CC-MAIN-2017-17 | refinedweb | 1,375 | 72.56 |
Last Updated on August 28, 2019
Hey Jason,
I am doing time series forecasting of air pollution. I am planning to use air pollutant data along with meteorological factors such as air temperature. Can you suggest the tests I should be performing on the data and good time series models to use?
Regards
Ankit
Yes, I recommend following this process:
Hey Jason,
Can you please explain the difference between standardisation and normalisation?
Also StandardScaler() is not working in my r environment.
Regards
Priyanka
Yes, right here:.
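In short, the two rescaling methods can be sketched side by side (a minimal illustration, not code from the linked post):

```python
import numpy as np

series = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# Normalization: rescale to [0, 1] using the observed min and max
normalized = (series - series.min()) / (series.max() - series.min())
# values become [0, 0.25, 0.5, 0.75, 1]

# Standardization: center on the mean, divide by the standard deviation
standardized = (series - series.mean()) / series.std()
# result has mean ~0 and standard deviation ~1
```

Normalization is bounded by the chosen range; standardization assumes a roughly Gaussian distribution and is unbounded.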
But if you normalize this way, you are using information from the future, so the model will overfit. Let’s say you normalize the columns to train your model with the mean and std of their content; new input data cannot be normalized following the old criteria, nor the new one (the mean and std of the last N rows) because of the trend, or the std…
What about using this instead:
Yes, I would recommend estimating the normalization coefficients (min/max) using training data only.
Ok, that makes sense. But even with your training data you’d have the same problem: working with a time series dataset, if you normalize/scale using the min/max of the whole training dataset, you are taking into account values of future data as well, and in a real prediction you won’t have this information, right?

Moreover, how would you normalize/scale future data? Using the same saved preprocessing model from your training data, or creating a new MinMaxScaler() using the last N rows? What if the new values are slightly different from the training ones?
That’s why I’ve posted the log1p solution, which is the same as log(1+x) and which will thus work on (-1, ∞). Or what about this one:
I think it would be very interesting for you to point this out in the post, because I’m afraid it can dramatically affect the model’s accuracy…
This was my point. If normalizing, you need to select min/max values based on available data and domain knowledge (to guesstimate the expected max/min that will ever be seen).
Same idea with estimating the mean/stdev for standardization.
If evaluating a model, these estimates should be drawn from the training data only and/or domain expertise.
I hope that is clearer.
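A minimal sketch of that idea, assuming a simple ordered train/test split of a univariate series:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

series = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0]).reshape(-1, 1)
train, test = series[:4], series[4:]

# Fit the scaler on the training data only, so no information
# from the future (the test set) leaks into the transform
scaler = MinMaxScaler()
scaler.fit(train)

train_scaled = scaler.transform(train)
test_scaled = scaler.transform(test)  # values may fall outside [0, 1]
```

Note that test values can land outside [0, 1] if they exceed the training min/max, which is exactly the trade-off discussed above.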
I think the best approach would be the following:
scaler = StandardScaler() # or MinMaxScaler()
scaler_train = scaler.fit(X_train)
X_train = scaler_train.transform(X_train)
scaler_full = scaler.fit(X) # X_train + X_test
X_test = scaler_full.transform(X_test)
Further reading:
Hi Jason, thank you for your post.
I have question.
I have timestamp and system-up-time where up-time is # of hrs system has been up in its lifetime.
Now I have to predict system failure based on the age of the system or system-up-time hrs.
# of failures might grow based on the age of the system or how many hrs it’s been running.
I have limited training data and the max up-time hrs in the training data is 1,000 hrs and age is 1,200 hrs. But in real time it could go beyond 100,000 hrs and age could go beyond 150,000 hrs.
How do I standardize timestamp and up-time hrs.
Consider looking into survival analysis:
Hi Jason,
Thank you for your comprehensive explanation. I have a noisy time series with missing data and outliers. Not even sure if the data is normal. Does standardization works in my case? My sample size can be quite big. Looking forward to your feedback.
Hmm, perhaps not. But I generally recommend testing and getting data rather than using opinions/advice. I’m often wrong.
Perhaps try to patch the missing data and trim the outliers as secondary steps and see if that impacts model skill.
Let me know how you go.
Hi! I did not manage to make the MinMaxScaler work for my tensors of rank 5. Does anyone know how to scale across all dimensions of a tensor? I guess you could flatten it, scale it, and then reshape it back, but I prefer not to get lost with all the dimensions. What I did for now is write my own normalize class to scale numpy tensors, in case someone bumps into the same problem.
class normalize():

    def fit(self, train, interval=(0, 1)):
        self.min, self.max = train.min(), train.max()
        self.interval = interval
        return self

    def transform(self, train, val, test):
        def trans(x):
            y = ((self.interval[1] - self.interval[0]) * x +
                 (self.interval[0] * self.max - self.interval[1] * self.min)) / \
                (self.max - self.min)
            return y
        train_norm = trans(train)
        val_norm = trans(val)
        test_norm = trans(test)
        return train_norm, val_norm, test_norm

    def inverse_transform(self, train_norm, val_norm, test_norm):
        def inv_trans(y):
            x = ((self.max - self.min) * y +
                 (self.interval[1] * self.min - self.interval[0] * self.max)) / \
                (self.interval[1] - self.interval[0])
            return x
        train = inv_trans(train_norm)
        val = inv_trans(val_norm)
        test = inv_trans(test_norm)
        return train, val, test
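As a quick sanity check of the idea above, here is a condensed, self-contained version of the same min/max mapping to an arbitrary interval (plain functions instead of a class, with illustrative values):

```python
import numpy as np

def fit_minmax(train, interval=(0, 1)):
    # Record the training min/max and the target interval
    return train.min(), train.max(), interval

def transform(x, lo, hi, interval):
    a, b = interval
    return (b - a) * (x - lo) / (hi - lo) + a

def inverse_transform(y, lo, hi, interval):
    a, b = interval
    return (y - a) * (hi - lo) / (b - a) + lo

train = np.array([2.0, 4.0, 6.0, 8.0])
lo, hi, interval = fit_minmax(train, interval=(-1, 1))
scaled = transform(train, lo, hi, interval)      # maps into [-1, 1]
restored = inverse_transform(scaled, lo, hi, interval)  # recovers the input
```

The inverse should recover the original values exactly, which is an easy property to unit-test.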
The sklearn tools will apply across all columns of your data.
Hi,
Would it also be a valid approach to convert the time series to unit vectors before a machine operation such as clustering?
Thanks,
Joel
Perhaps, try it and see how it impacts model skill.
Hi @Jason,
1. What’s the difference between normalizing and standardizing time-series data versus other data?
2. Should we normalize, standardize, or both for time-series data?
3. Can you please update this very good tutorial with “How to save the normalization and standardization parameters and reuse them for test data”?
Thanks,
Khalid
No difference, other than you might need to account for an increasing level (trend).
Depends on the algorithm and the data, try both and evaluate the effects.
You can save a series using NumPy or Pandas.
Hi Jason,
I would just like to ask: does the normalization of data only happen during training? During testing, where no output data is provided, do I still need to normalize the data? Thank you.
Great question.
Any transforms performed to data prior to training must also be performed to test or any other data.
What kind of preprocessing is required for traffic flow analysis based on time series data? I am referring to the Highways Agency network journey time and traffic flow data of 9 fields, namely Link reference, Link description, Date, Time Period, Average journey time, Average speed, Data Quality, Link Length, Flow, etc. Which techniques should I try for preprocessing three months of such data (Jan–March 2015)? Thanks
Perhaps try a few methods and see what results in models with better skill.
Have you tried using the (from sklearn.preprocessing import Imputer) function?
Is it better than this or are they the same?
I have an example of using the Imputer here:
Choose the method that you prefer.
On what basis we choose data scaling method(Normalization/Standardization)?
Standardization is for gaussian data.
Normalization can be used for gaussian or non-gaussian.
Scaling is appropriate for methods that use distances or weightings.
If in doubt, compare model skill with and without scaling.
Using the MinMaxScaler model in sklearn to normalize the features during the training session, one can save the scaler and load it later from a file in the forecasting session, for example. Is that possible? Is that a good solution?
Is there another more efficient way?
I would recommend saving the coefficients or saving the object via pickle.
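A sketch of the pickle option (the file name and data here are illustrative):

```python
import pickle
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[10.0], [20.0], [30.0], [40.0]])
scaler = MinMaxScaler().fit(train)

# Save the fitted scaler during the training session...
with open('scaler.pkl', 'wb') as f:
    pickle.dump(scaler, f)

# ...and load it later in the forecasting session
with open('scaler.pkl', 'rb') as f:
    loaded = pickle.load(f)

new_data = np.array([[25.0]])
scaled = loaded.transform(new_data)  # uses the original training min/max
```

This guarantees new data is transformed with exactly the coefficients learned at training time.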
Thanks for your advice
Hi Jason, great article.
One question that always bugs me, what is the proper way to standardize data in case when you have multiple instances with multiple parameters each? For example, you are measuring M parameters (time series) for N devices, during T seconds, and you want to perform some analysis/ML on these devices. How would you standardize the data in this case?
Thanks!
It comes down to how you want to model the data, e.g. one model per sensor, group of sensors, all sensors.
Standardize by variable and model.
Does that help?
Hi Jason,
Thank you for your response. So the idea is to have one model which includes values of all parameters (sensors) to be able to integrate the relation between parameters as well.
When you say standardize by variable and model, what would that mean in this case? Find Min/Max or Mean/StdDev for all values of a single parameter (belonging to different instances)? So one statistic for all values of a single parameter for all instances? Standardizing parameter per “per-dataset” and not “per-instance”?
Thanks again!
Yes, each variable and model, if you choose to model the sensors separately.
Hi
Thank you for the post, it was really helpful.
I am using sklearn’s Normalizer for normalization of data before prediction. How do we revert back the predicted data to the original values?
Thank you
Call the inverse_transform() function.
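Note that sklearn’s Normalizer rescales each sample to unit norm, which is not an invertible operation; the method below applies to scalers such as MinMaxScaler or StandardScaler. A sketch with illustrative values:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

y_train = np.array([[100.0], [200.0], [300.0]])
scaler = MinMaxScaler().fit(y_train)
y_scaled = scaler.transform(y_train)

# Suppose the model predicts in the scaled space...
yhat_scaled = np.array([[0.25]])

# ...then map the prediction back to the original units
yhat = scaler.inverse_transform(yhat_scaled)  # 0.25 of [100, 300] -> 150
```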
Hi Jason,
How would you normalize columns in a dataframe that contains NaN values?
Thanks,
Tom
Ignore/replace/remove/impute the nan values.
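For example, one simple pattern (imputing with the mean is just one option):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

values = np.array([1.0, 2.0, np.nan, 4.0, 5.0]).reshape(-1, 1)

# Replace the NaN first (here with the mean of the observed values)
# so every row has a defined value before modeling
values[np.isnan(values)] = np.nanmean(values)

scaled = MinMaxScaler().fit_transform(values)  # NaN row becomes the midpoint
```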
Hello Jason,
Are you normalizing/standardizing only the value field? Or the time series as well?
Also, I ran standardizing/normalizing on my value field and it reported back the exact same histogram when plotted. Is this not meant to normalize the data?
Thanks
For univariate data, we prepare the entire series.
The shape will be the same, but the min/max will be different when normalizing and standardizing a Gaussian.
Hi Jason
Thanks for the great tutorial. I keep getting the error “cannot reshape array of size 7300 into shape (3650,1)”.
I get this error with other sample datasets I tried.
Is there something that I am missing?
Perhaps this tutorial will help you understand numpy arrays:
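For what it’s worth, that error means the element count doesn’t match the target shape: 7300 elements cannot fill (3650, 1), but they can fill (7300, 1) or (3650, 2):

```python
import numpy as np

values = np.arange(7300)

# 7300 elements fit into (7300, 1) or (3650, 2)...
col = values.reshape((len(values), 1))
two_col = values.reshape((3650, 2))

# ...but not into (3650, 1), because 3650 * 1 != 7300
try:
    values.reshape((3650, 1))
except ValueError as e:
    print(e)  # cannot reshape array of size 7300 into shape (3650,1)
```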
Hi Jason.
I have an n*n*n matrix containing samples, time-steps, and features for the stock market.

How would you standardize time series data in n*n dimensions and prepare it for an LSTM?
Thanks,
Not sure what your data dimensions represent. Generally, rescale each series, e.g. series-wise.
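One common way to rescale 3D LSTM input of shape (samples, timesteps, features) is to flatten to 2D, fit one scaler per feature column, and reshape back. A sketch with assumed shapes and random data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

np.random.seed(1)
samples, timesteps, features = 4, 5, 3
X = np.random.rand(samples, timesteps, features) * 100.0

# Flatten to (samples * timesteps, features) so the scaler
# learns one min/max per feature column
flat = X.reshape(-1, features)
scaler = MinMaxScaler().fit(flat)
X_scaled = scaler.transform(flat).reshape(samples, timesteps, features)
```

In a real project you would fit the scaler on the training portion only and reuse it for validation/test data.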
Hello Jason,
Great tutorial. I have a question though: there can be multiple outliers, which can affect the mean and in turn affect both normalization and standardization, so why don’t we use the median? It is less prone to outliers and will produce more robust results.
Sounds like a good idea for Gaussian distributed data with outliers.
Should we also normalise the output (y)? Or is it good to have only the inputs (X) normalised?
Yes, the output or target variable should be standardized or normalized.
What about Date? Will it get normalized automatically?
Date is removed.
Great article.
Jason, one question regarding the difference between the results achieved after applying the standardization and stationarity transformations. Isn’t it true that data is deemed stationary if it is centered around the mean and the variation is stable… which, as it appears, is the result of the standardization transformation?
Thanks in advance for the clarification and sorry for the silly questions )
It is stationary if it does not have a systematic structure, like trend or seasonality:
thanks for another great article.
why not “scaling down” but normalizing?
x………………….y
if we have negative values in an array [5, -7, -3, 6, 7, 9, 10]
after scaling down…………………………..[..-0.7……………….1]
if i normalize…………………………………..[…..0………………..1]
In scaling down I keep the weight of -7 (as -0.7) to pull my Fx in the West/negative direction.

In normalizing, -7 becomes 0 and 10 becomes 1…

so -7 will have no weight.
Yes, often very few values are on the boundary. If you have many values on the boundary, perhaps normalize to a different range.
-7 will have no weight in my equation: -0.7x vs 0x… Meanwhile 10y will be 1y, pulling all the way to the East…
Yes, perhaps just standardize or shift the normalized range to [-1, 1] or [0.1, 0.9] and see if it makes a difference to the model skill.
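Shifting the normalized range is just a parameter in sklearn’s MinMaxScaler, e.g. using the example array from the comment above:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([5.0, -7.0, -3.0, 6.0, 7.0, 9.0, 10.0]).reshape(-1, 1)

# Normalize into [-1, 1] instead of the default [0, 1]
scaled = MinMaxScaler(feature_range=(-1, 1)).fit_transform(data)
# -7 (the minimum) maps to -1, keeping its negative pull
```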
Hi Jason
It’s me again, your avid follower.

Is a predicted value (target value) made using normalised feature variables itself a normalised value? Or do we need to transpose the predicted value?
Thanks again.
What does “transpose the predicted value” mean?
Hi Jason,
Sorry to confuse you. What I mean is: when we make predictions on the hold-out set, where the target value is not known to us, and our predictor/feature variables are normalised, is the output of our prediction a normalised value that we need to inverse-transform before submitting to the competition?

Thank you Jason.
Yes.
Does the scaling range depend on the activation function? For example, I have X_train and Y_train, both with positive and negative values between (-2, 2). The tanh activation function outputs values between (-1, 1). Similarly, ReLU only outputs non-negative values.

Shall I scale my data to the (-1, 1) range if I am using the tanh activation function? Similarly, is it necessary to scale the data between (0, 1) for the ReLU activation function?

I am having an issue using the ReLU activation function with data scaled between (0, 1): the solution diverges. However, if I use (-1, 1) scaling with the ReLU activation function, the solution does not diverge.
Thank you.
Typically, yes.
Although, I recommend using whatever gives the best result, rather than what is dogma.
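As an illustration of trying different ranges, min-max scaling generalizes to any target range, so matching the activation function is a one-line change. A plain-Python sketch with made-up values:

```python
# Min-max scaling generalized to an arbitrary target range [a, b],
# e.g. [-1, 1] for tanh or [0, 1] for sigmoid-style activations.

def scale_to_range(values, a, b):
    lo, hi = min(values), max(values)
    return [a + (v - lo) * (b - a) / (hi - lo) for v in values]

data = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(scale_to_range(data, -1.0, 1.0))   # [-1.0, -0.5, 0.0, 0.5, 1.0]
print(scale_to_range(data, 0.0, 1.0))    # [0.0, 0.25, 0.5, 0.75, 1.0]
```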
Thank you, Jason. I was wondering if the scaling can be a problem, blowing up the prediction results? If I change the scaling range, the solution does not blow up. Or can the blowing up of the prediction be related to the choice of activation function?
Thank you.
Yes, both. I recommend carefully choosing the activation function, then scale the data accordingly. Then vary the scale/config to see how it impacts model performance.
Hello,
I would like to ask this: Isn’t the way you scale data (ie using the entire training set) essentially using information from the future to scale information from the past? I mean, if you use the data points after say 28/02 and get the mean of the entire dataset and use it to scale data points before 28/02, aren’t you essentially ‘cheating’ by using information not actually available at a point before the end of February? Isn’t it like using (unavailable) information from (the future) 2020 to scale my data now, in (the present) 2019?
Thanks!
Correct, you should prepare the scaling on the training set, and apply it to train and test.
I typically do it in one shot for brevity.
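A minimal plain-Python sketch of that workflow, with hypothetical values (the same pattern applies with a library scaler: fit on train, transform both):

```python
# Compute min/max on the training split only, then apply the SAME
# transform to train and test -- the test set never informs the stats.

def fit_minmax(train):
    return min(train), max(train)

def apply_minmax(values, lo, hi):
    return [(v - lo) / (hi - lo) for v in values]

series = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
train, test = series[:4], series[4:]        # simple chronological split

lo, hi = fit_minmax(train)                  # stats from train only
train_scaled = apply_minmax(train, lo, hi)  # spans [0.0, 1.0]
test_scaled = apply_minmax(test, lo, hi)    # may fall outside [0, 1]
print(test_scaled)
```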
Appreciate the answer! I also would like to ask, if I create multiple features based on my original time series (eg moving averages of various periods of stock price returns), so I end up with multivariate time series, how should I conduct my scaling (for train and test sets)? Should I scale my original time series train set and then create the other features? Or in some other way?
Thanks!
Scale first, then shift.
Do you mean scale on the entire original time series dataset prior to the train/test set split? Or on the original dataset’s part, which I will keep for training?
Also, is there any chance you could include a small chapter in the post dealing with multivariate, time series or not, scaling? Or is it obvious how to do it and I am missing something?
Thanks a lot!
Calculate scaling stats on the training dataset, scale columns before transforming into a supervised learning problem.
Here’s an example:
Ok, so if I understand correctly the workflow should be like this, and please correct me if I am wrong:
Initial Univariate Time Series -> Calculate scaling stats on the training part of it (say 90%) -> Apply scaling on the entire univariate Time Series -> Then calculate the various other indicators turning the dataset to multivariate. (This makes more sense to me).
OR
Initial Univariate Time Series -> Calculate various indicators, turning the dataset to multivariate -> Scale each column separately according to its training size (eg 90%).
Also, I would like to ask when to inverse transform my model’s predictions to calculate the error metrics, as this is a bit confusing for me as well. Do I have to do an inverse transform on my y_pred and compare that with the original y_test? I believe you had a post about this as well but cannot remember which one it was.
Thanks a lot, and sorry for bombarding you!
I think scale first, more on order here:
Try both and see what works best for your specific dataset.
The above linked tutorial shows how to reverse the operations.
Is it possible to get exactly the same values from minmax normalization and standardization for a timeseries?
I don’t see why not.
Jason,
Is it an acceptable practice to standardize data and then apply a power transformation to that data? Or is it usually one or the other?
I would do the power transform first. More on the order of transforms here:
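For intuition, here is a tiny sketch with a log-based power transform (log1p stands in for Box-Cox here; the values are made up):

```python
# A simple power transform (log1p) applied before any scaling: it
# compresses the large values so the spacing becomes more uniform.
import math

skewed = [0.0, 1.0, 3.0, 7.0, 15.0]            # exponential-ish spacing
transformed = [math.log1p(v) for v in skewed]  # log(1), log(2), log(4), log(8), log(16)
print([round(v, 3) for v in transformed])      # now evenly spaced by log(2)
```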
Thank you for the link. I have seen it. I wonder if there is a paper that supports power transformation for data prior to fitting ANN and explains the effect it has on the final results (better accuracy/recall/precision)
All I was able to find was that since ANN is non-parametric, then PT is not required.
Probably not, it is a very general technique. It is described in many textbooks, for example:
Reason why I am asking is that I have applied standardization for lasso feature selection. And now I would like to apply PT to those selected lags. I cannot find anything similar in the literature. Thank you.
I want to save the normalized data as a file because I need to feed that data to CNN. How to do that?? Ideas will be appreciated.
You can save a numpy array using the savetxt() function:
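A small sketch of that round trip (assuming NumPy is available and using a temporary file path):

```python
# Normalize an array, save it with numpy.savetxt, and load it back
# with numpy.loadtxt to feed a downstream model.
import os
import tempfile
import numpy as np

values = np.array([10.0, 25.0, 40.0, 55.0, 70.0])
scaled = (values - values.min()) / (values.max() - values.min())

path = os.path.join(tempfile.mkdtemp(), "scaled.csv")
np.savetxt(path, scaled, delimiter=",")    # one value per line
restored = np.loadtxt(path, delimiter=",")
print(restored)                            # [0.   0.25 0.5  0.75 1.  ]
```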
Hi Jason, do you have a method of combining a 10 minute sensor dataset with a fault dataset? I want to join them on the timestamp but the fault data only has a start and stop field. Thanks, Paul.
Perhaps a database join in SQL?
Perhaps an hstack() in numpy?
Hi Jason,
Data normalisation usually happens irrespective of the target column: min-max applies to each column individually, and likewise the mean and sd are specific to the column.
Is there any method that normalises predictors based on the target?
Thanks
Not that I’m aware.
Isn’t this just plain normalization and standardization ? I would image most time-series data are non-stationary meaning variance of the distribution changes over time. Is it good practice to Standardize over a rolling window to reflect this non-stationary data ?
Yes.
Yes, good question. Better to make the data stationary prior to scaling if possible.
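For example, first-order differencing is one simple way to remove a trend before any scaling is applied (a sketch with made-up values):

```python
# First-order differencing removes a linear-ish trend, leaving a series
# that is closer to stationary before any scaling is applied.

def difference(series):
    return [series[i] - series[i - 1] for i in range(1, len(series))]

trended = [10.0, 12.0, 15.0, 19.0, 24.0]
print(difference(trended))   # [2.0, 3.0, 4.0, 5.0]
```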
Thanks for the article. I have a question: if we have video classification and for each video we extract, say, 50 frames, how can we apply standardization? Should we standardize each frame of each subject separately, or all frames of one subject together, or…?
Yes, see this post:
Hi, thanks for your text.
I would like to ask: it is not possible to know the range of the data in advance after performing standardisation, right? Most values are likely to be between [-3, 3], but it is possible to fall outside that range, right?
Correct.
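A quick plain-Python illustration with hypothetical numbers:

```python
# Standardization (z-scores): values are re-expressed in standard
# deviations from the mean; most land near [-3, 3] but nothing bounds them.
import math

def standardize(values):
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # mean 5.0, sd 2.0
print(standardize(data))   # [-1.5, -0.5, -0.5, -0.5, 0.0, 0.0, 1.0, 2.0]
```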
Nice post and great thanks for your efforts. May I ask one question? My data looks like this
I can plot the histogram of this data and it is clearly not Gaussian. I tried the Box-Cox transformation, but the transformed data still does not look like a Gaussian distribution. Do you have any idea how to make this data look more Gaussian so that I can build a machine learning model to predict the future?
Thanks in advance.
Data does not have to be Gaussian to use ML methods. Many nonlinear methods make no assumptions about the form of the data – try a decision tree, an SVM, or a kNN, for example!
Hello jason,
I have a dataset of RSSI samples. It's time series data containing a lot of noise. Now I am interested in extracting patterns from it. How can I do that by applying normalization?
The above example shows how to normalize a time series dataset.
What problem are you having exactly?
Hi Jason,
Thanks for the post. I have a question concerning a model that I’m building. I have time series data that I’m inputting using a sliding window method. So each sample contains multiple values from the time series data, i.e. y(t) = [x(t),x(t-1),x(t-2)]. In a situation like this, do I need to normalize the time series data before or after applying the sliding window? I think the latter makes more sense because it makes sure that each feature in the dataset is normalized, but the former makes sure the time series data itself is normalized. I’d really be interested to hear your opinion on this. Thanks a lot in advance!
You can scale the data prior to making the data supervised/sliding window.
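A sketch of that order of operations with made-up values: scale the raw series first, then frame it with a sliding window:

```python
# Scale the raw series first, THEN frame it as a supervised problem,
# e.g. y(t) predicted from the window [x(t-2), x(t-1), x(t)].

def minmax(series):
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

def sliding_windows(series, width):
    return [series[i:i + width] for i in range(len(series) - width + 1)]

series = [10.0, 20.0, 30.0, 40.0, 50.0]
scaled = minmax(series)                  # [0.0, 0.25, 0.5, 0.75, 1.0]
print(sliding_windows(scaled, 3))
# [[0.0, 0.25, 0.5], [0.25, 0.5, 0.75], [0.5, 0.75, 1.0]]
```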
Hi Jason,
So we can fit_transform and transform, our train/test dataset respectively to input into training out model. After prediction, do we have to inverse_transform the predictions also since the model was trained on scaled data? Thank you!
Yes, I show how here:
Thank you! These articles have been very helpful. I feel like the more I read about time series forecasting and how to prepare my time-series data, the more confusing it can get. My current dataset consists of multivariate independent features, trying to predict 1 variable. I set up my training and test sets and scale/transform the sets before fitting an SVR model. Now I see some examples where people reshape before scaling/transforming, like np.reshape(-1,1). Is this necessary, and how does reshaping tie into multivariate features? This ties into an error I am getting: since I am predicting 1 variable, the prediction ends up being a 1-d array. So, as mentioned above, when I tried to inverse_transform this y_pred, the sklearn transformer throws an error saying 'expected 2d array.'
Thanks.
Yes, the sklearn transformers always expect 2d data, rows and cols.
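For example (with hypothetical scaler stats; sklearn-style transformers want a two-dimensional column):

```python
# Reshape a 1-d prediction vector to the 2-d (rows, cols) layout that
# sklearn-style transformers expect, then invert a min-max transform.
import numpy as np

lo, hi = 10.0, 70.0                   # stats saved from the training data
y_pred = np.array([0.25, 0.5, 0.9])   # 1-d predictions, shape (3,)
y_2d = y_pred.reshape(-1, 1)          # shape (3, 1): 3 rows, 1 column
y_orig = y_2d * (hi - lo) + lo        # back to the original units
print(y_orig.ravel())                 # [25. 40. 64.]
```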
Ah, I see. So is reshaping unnecessary when working with multivariate features in an SVR model? Or why would we reshape our data input to a model?
Typically we reshape data to meet the expectations of an API, such as a model or a data transform.
More on reshaping in the general sense here:
Suppose the demand for a milk product is in direct relation with rainfall. So
I have to estimate the day-ahead demand of the product using the rainfall data which is available to us.
What I am doing: since the demand in 2019 and 2018 is different from the demand in 2020, I have standardized the 2019 demand by subtracting the mean and dividing by the standard deviation of the January 2019 dataset, then searched for the corresponding demand with respect to a particular rainfall value. Now that I have trained the model on standardized values, when making a forecast I have to de-standardize the series.
So my question is: which series should I use to calculate the mean and standard deviation for de-standardizing the forecast dataset?
Please help
You can choose the framing of the prediction task.
For example, you can choose to make a prediction that will be standardized based on the prior month's data or the year before's value, etc. As long as you are consistent it should be fine – if I understood your question correctly.
Hi Jason, Thanks for your reply.
What I am doing is using the January 2019 rainfall and sales dataset to forecast sales for January 2020 (the test data is not available to me; I am making real-time forecasts). Since the sales of 2019 and 2020 are different (sales increase every year), to avoid this discrepancy I have standardized the training set, i.e. January 2019 sales, and trained the model on these standardized values with the January 2019 rainfall data. Now I have the rainfall forecast for January 2020 (from NWP) to predict the sales of January 2020. Here is my question:
Since I have trained the model on standardized values, my forecast will also be standardized values like (-1.4, 0.8, 0.9, etc.). So which dataset should I use to de-standardize my forecast?
Please Help
If you standardize using Jan 2019, then you invert the transform using the same coefficients, from Jan 2019.
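Concretely, with made-up Jan 2019 statistics:

```python
# De-standardize forecasts using the SAME mean/sd that standardized the
# training series (here, hypothetical Jan 2019 sales statistics).

mean, sd = 500.0, 40.0                 # computed from Jan 2019 sales
forecast_std = [-1.4, 0.8, 0.9]        # model output on the z-scale
forecast = [z * sd + mean for z in forecast_std]
print(forecast)                        # approx [444.0, 532.0, 536.0]
```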
Thank you for your article.
I have a question about standardization of your data when you want to do cross-validation using the 'holdout validation' method. Do you compute the mean and standard deviation over the whole data set (test data set + learning data set), or one mean/std for the test data set and another mean/std for the learning data set?
Ideally, per-variable of the training set.
Hi Jason. Thank you for this article, quite helpful. I just have a question regarding standardization. Should it be performed on the whole dataset or just a subset of the data, i.e. on the training data only?
You’re welcome.
It is fit on the training set then applied on the train and test sets. We do it this way to avoid data leakage.
Jason, when you say fitting on the training set and transforming both the same and the test dataset, aren't we inducing leakage within the training set itself?
I mean assuming the dataset is sorted in time, the first value in train set is now modified based on stats derived from future values within the same set.
So is a leakage within the training set fine because our model sees it as a whole when it comes to timestamped observations?
In supervised learning we assume observations are stationary/iid, that there is no “time” effect on observations.
If there is, then it must be addressed as you say or the effect must be removed and the data made stationary.
So do we need to normalize/standardize our testing data also?
I mean, we have done feature scaling for training, but our testing data arrives sample by sample, so we cannot refit the feature scaling at each time instant!
Please provide some feedback.
Test data should be prepared using the same method as training data.
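A sketch of the streaming case with made-up numbers: the training statistics are frozen and reused for each arriving sample:

```python
# Freeze the scaling stats computed on the training data, then apply
# them to each test sample as it arrives -- no refitting per sample.

train = [10.0, 20.0, 30.0, 40.0]
lo, hi = min(train), max(train)          # frozen training stats

def scale_one(x):
    return (x - lo) / (hi - lo)

stream = [35.0, 45.0]                    # samples arriving one at a time
print([scale_one(x) for x in stream])    # second value exceeds 1.0
```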
Can we normalize time series data using histogram binning Method?
You can use binning on a time series, yes.
Hello jason,
I have a dataset containing the same time series of "sensor readings" for different days, and I want to build a deep learning model to predict these values. What I did was split the data into time series by day and then normalize each day separately (min-max), since the readings have different ranges (for example, the max value for the first day is 100 but the max for the second is 48). But I'm really confused: do I need to normalize using the min/max over all days, or was what I did right? When I trained my model with the separately normalized time series it gave me better results than when they were normalized together.
For time series, you will need to fit the transform on the training set and apply it to new data.
Test with and without the transform and use whatever works best.
Sir, in my research work I am initially forecasting vegetable prices using a neural network, so do I have to scale my price data before training? My second doubt is which preprocessing technique to use for a neural network like an MLP. Please can you suggest as early as possible.
It is a good idea to scale data prior to modeling.
It is also a good idea to try a suite of different data preparations in order to discover what works best for your dataset.
Hi Jason,
I want to apply a deep neural network and an LSTM. In the data preparation phase I have attributes that contain counts of medications. For example, Gastrointestinal Medication contains the total number of gastrointestinal medications (its values are 0, 1, 2); another attribute, Cardiovascular Medication, contains the values 0 to 10; another, Ear, Nose and Oropharynx Medication, contains 0 or 1, meaning the patient used no drugs or one drug.
My question is Can I apply the data normalization in these attributes or not?
For the attribute that contains 0 and 1 I definitely will not apply normalization, but for the attributes that contain 0, 1, 2 values, or the examples I mentioned previously, do I have to apply normalization?
Thanks!!
Yes, normalization sounds like a good thing to try.
Compare model performance with and without the data preparation and use the approach that results in the best performance.
Ok I will, thanks a lot.
As Infrastructure as Code (IaC) has become more and more popular over recent times, there has always been that one aspect which has stood out to most developers - it hasn’t ever felt like real code. Offerings such as Terraform, for which you use HashiCorp’s own Configuration Language (HCL), always come with another learning curve and result in a solution that feels disconnected from the product we are building.
Fortunately, there are newer frameworks that help solve this problem. One of these is Pulumi. Out of the box, Pulumi has support for JavaScript, TypeScript, Python, Go and C# (Go and C# support is currently still in the preview stage).
In this post, I will explore getting started with Pulumi by using it (retrospectively) on a real project we developed here at Scott Logic - a React based internal communication web app, using AWS Cognito for authentication (and Azure Identity Provisioning), Lambda functions and DynamoDB. As part of this, I’ll compare the code I write to the equivalent Terraform solution that was originally implemented for the project and discuss why I think I’ll look to use Pulumi more often on future projects.
The code referred to throughout this post can be found on GitHub.
Getting Started
Pulumi has its own getting started guide on the website so I won’t be repeating those steps here. My example project will be deployed to AWS, implemented in Typescript and using CircleCI for automated CI/CD. The original project has the following architecture:
The green highlighted elements were created manually for the original project. As a result, there is no Terraform for them that I can use for comparison purposes and as such, haven’t been considered for this post. For the resources highlighted in blue and red, these were provisioned using either Terraform or Serverless. I’ll be provisioning these via Pulumi as part of this blog post.
With an idea of what resources I need, thoughts then turn to how best to structure my Pulumi project(s).
Side Note - Stacks, Names and State Management
Just before I get deeper into the topic of projects, I first need to discuss how Pulumi handles management of our resource state. Each Pulumi project that we implement is a representation of resources that we require. Each execution of this project is known to Pulumi as a stack. The idea is that we can execute multiple versions of our project for different purposes. For example, we may want environment based stacks, or feature development based stacks. This gives us great flexibility in how we use Pulumi in our overall development process.
By default, Pulumi will use its own servers and online portal for your stacks. Whilst the UI for this is quite simple and intuitive, it isn’t ideal that we would be depending on another third party for this element of our application. Fortunately, we can configure Pulumi to use a remote backend, such as an AWS S3 bucket.
However, I discovered that this comes at a cost. When we use the Pulumi backend, it does some magic with our project and stacks in order to provide uniqueness for the stack names. For example, we can create a Pulumi project in one directory and initialise a “dev” stack for it. We can then create another project in a separate directory and initialise a “dev” stack for that. The Pulumi console will show the two projects, each with their own “dev” stack. In effect, the stacks have fully qualified names like “project1/dev” and “project2/dev”. The two stacks will also be able to reference each other if necessary as they share the same backend.
If we try this with an S3 bucket for our backend, we will get prevented from creating that second stack. This is because each stack file is put in the same location in our bucket and the project name doesn’t play a part in it. Therefore, we’ll have to manually create unique stack names ourselves. In my case, I chose to incorporate the project name, using a period as delimiter, e.g. infrastructure.dev.
An alternative approach would be to use separate backend locations for each project. However, this will prevent us from sharing resources between those projects via the stack outputs (as you will read later).
Project Structure
Eager to get into the code, I found myself starting with a single program to define my stack. I quickly realised this wouldn’t be practical and didn’t really reflect how I would build something using TypeScript. In addition to that, I know that some of the resources won’t get updated as regularly as others. For example, the DynamoDB table structure probably won’t change that frequently, whilst the lambda function code will evolve over time. It seems redundant to include the provisioning of the database with the same code that manages our API gateway and lambdas.
Once broken down, I decided I would implement the following 3 Pulumi programs.
This is a simplified version of a fuller application. I haven’t included a number of supplementary elements here such as the IAM roles/policies.
Infrastructure Stack Code
Let’s get started on the detail of the first program - the core infrastructure. In my simplified case, all I need to define is a DynamoDB table:
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const env = pulumi.getStack().split('.')[1];

export const table = new aws.dynamodb.Table(
  `blog-post-table-${env}`,
  {
    tags: {
      application: "pulumi-blog-post",
      environment: env
    },
    attributes: [
      { name: "RecordID", type: "S" },
      { name: "Status", type: "S" },
      { name: "Count", type: "N" }
    ],
    globalSecondaryIndexes: [
      {
        name: "StatusIndex",
        hashKey: "Status",
        rangeKey: "Count",
        projectionType: "INCLUDE",
        nonKeyAttributes: ["Description", "CreatedOn"]
      }
    ],
    billingMode: "PAY_PER_REQUEST",
    hashKey: "RecordID",
    streamEnabled: true,
    streamViewType: "NEW_AND_OLD_IMAGES",
    ttl: {
      attributeName: "TimeToExist",
      enabled: false
    }
  }
);
Upon executing this code, using the pulumi up command within my project directory, Pulumi will provision a DynamoDB table named "blog-post-table-dev" (assuming my stack name is "infrastructure.dev").
Side Note - getStack()
In an ideal world, the stack name would represent just the environment being deployed into. However, as previously mentioned, because of the backend being stored in AWS, we need to give each stack a unique name. In the case of my application, I chose a convention of “program.stack”, e.g. infrastructure.dev, infrastructure.test, gateway.dev and so on. Using the inbuilt getStack() method from the Pulumi library provides access to this name. This is useful later for the cross-stack referencing that is required, but it is equally useful here so that the environment can be applied to the resource names and tags, albeit with a little bit of string manipulation.
Side Note - Physical Names versus Logical Names
The default behaviour for Pulumi, when provisioning a resource, is to append a 7 digit hex value to the end of the name specified in the code. This provides a unique physical name for each resource. It does this to improve the efficiency of updates - it can provision the new version of the resource, leaving the current version in place until the new version is ready. Without this unique naming convention, Pulumi would need to destroy the existing resource before creating the new version, potentially leading to unexpected downtime of the resource. This behaviour can be overridden by specifying the “name” field (check the documentation for particular resources as it may be a different field, e.g. for S3 buckets, the field is “bucket”). If doing this, the “deleteBeforeReplace” field also needs to be set in the resource properties.
After this, all that is required is to specify what the stack should provide as outputs. This is done in the index.ts file:

import { table } from "./table";

export const tableName = table.name;

Gateway Stack Code

That table name value will now be available to my gateway stack:
const env = pulumi.getStack().split('.')[1];

const infraConfig = new pulumi.StackReference(`infrastructure.${env}`);
const tableName = infraConfig.getOutput("tableName");
The (partial) API gateway definition looks like this:
...
async function getRecord(event: awsx.apigateway.Request): Promise<awsx.apigateway.Response> {
  const dbClient = new DynamoDB.DocumentClient();
  const dbTableName = tableName.get();
  return getRecordHandler(dbClient, dbTableName, event); // The actual lambda code
}
...

const apiGateway = new awsx.apigateway.API(
  "pulumi-blog-post-api",
  {
    stageName: env,
    routes: [
      { path: "points/get", method: "GET", eventHandler: getRecord },
      { path: "raise", method: "POST", eventHandler: postRecord }
    ]
  }
);

export const apiUrl = apiGateway.url;
Notice that I have another output from this stack - the URL of my gateway. This is needed by the front-end application to determine the endpoints of the lambda functions.
Integration With CircleCI
Using Pulumi in CircleCI is simply a case of pulling in the relevant Orbs. If you aren’t familiar with CircleCI Orbs, these are reusable packages of YAML based configuration that can help speed up project setup and allow easy integration with third party tools - take a look here for more information.
These give some simple commands that can be applied in the CircleCI config in order to manage the resources as part of the build and deployment process.
Firstly, we can use the login command to specify where the remote state is managed:
- pulumi/login:
    cloud-url: "s3://pulumi-blog-remote"
After that, the update command will carry out the actual deployment of the stack:
- pulumi/update:
    stack: infrastructure.dev
    working_directory: ~/backend-api/infrastructure
Note that each of these is an entry in a single CircleCI step.
In the case of my backend API repository, I have two separate stacks needing to be deployed. All that meant was the addition of another Pulumi update command in my step (alternatively, I could have separate steps for each of my stacks):
- pulumi/update:
    stack: gateway.dev
    working_directory: ~/backend-api/gateway
It really is that simple to get the stacks up and running.
When moving to the front-end application, things become a little more complicated. As I have a react app that needs to be built as part of the CircleCI definition, I need to pass the API url that was an output from the gateway stack as part of the build process.
- pulumi/login:
    cloud-url: "s3://pulumi-blog-remote"
- run:
    command: echo 'export APIURL=$(pulumi stack -s gateway.dev output apiUrl)' >> $BASH_ENV
- run:
    name: "npm build"
    command: |
      cd ~/frontend-app/app
      npm install
      REACT_APP_STAGE=$CIRCLE_BRANCH REACT_APP_LAMBDA_ENDPOINT=${APIURL} npx react-scripts build
Notice the stage in this step where I issue a command to get the gateway stack output value and push it into the bash environment variables. Through all my searching, I couldn’t find another solution to this problem - where the value I need to pass to my build is the output from a command line statement. It appears to be unsupported by CircleCI at this time.
There are additional steps in the front end CircleCI config for provisioning the S3 bucket and deploying the application into the bucket, but the former is a repeat of what I did on the backend and the latter has no interaction with the Pulumi environment.
Comparison To Terraform
One of the purposes of writing this post was going to compare the Pulumi code I wrote to the equivalent Terraform that was implemented as part of the original project.
First, the file structure. The below is a representation of the files and folders that we put in place for our Terraform code:
infrastructure
└─ dev
│  └─ database
│  │  └─ main.tf
│  └─ services
│  │  └─ lambda-bucket
│  │  │  └─ main.tf
│  │  └─ frontend-app
│  │  │  └─ main.tf
└─ prod
│  └─ database
│  │  └─ main.tf
│  └─ services
│  │  └─ lambda-bucket
│  │  │  └─ main.tf
│  │  └─ frontend-app
│  │  │  └─ main.tf
└─ modules
   └─ database
   │  └─ main.tf
   │  └─ variables.tf
   └─ services
      └─ lambda-bucket
      │  └─ main.tf
      │  └─ variables.tf
      └─ frontend-app
         └─ main.tf
         └─ variables.tf
The modules folder contains the common definitions for resources and services that we would be deploying “per environment”. The dev folder then contains environment specific versions of these, where variable names are set along with any other environment specific inputs that the modules require. The prod folder repeats the dev folder structure for the production environment (and it would be repeated again if we used a test or QA environment).
Just from this, we can see the number of files that are required to represent our AWS infrastructure using Terraform. Compare this to what I did above using Pulumi, where I had a single set of code files per resource and a single configuration file for each environment. This provided all the customisation I needed without the complexity of repeated folder structures, with its potential for copy and paste errors when I need a new environment.
The complete Terraform code for defining the database looks like this:
modules/database/main.tf
resource "aws_dynamodb_table" "application-database" {
  name             = "${var.database_name}"
  billing_mode     = "PAY_PER_REQUEST"
  hash_key         = "RecordID"
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  tags = {
    ...
  }

  attribute {
    name = "RecordID"
    type = "S"
  }

  ttl {
    attribute_name = "TimeToExist"
    enabled        = false
  }

  attribute {
    name = "Status"
    type = "S"
  }

  attribute {
    name = "Description"
    type = "S"
  }

  attribute {
    name = "Archived"
    type = "S"
  }

  global_secondary_index {
    ...
  }
}
dev/database/main.tf
provider "aws" {
  region = "eu-west-2"
}

terraform {
  backend "s3" {
    bucket         = "terraform-remote-backend"
    key            = "dev/data-storage/terraform.tfstate"
    region         = "eu-west-2"
    dynamodb_table = "terraform-remote-backend-locks"
    encrypt        = true
  }
}

module "create-dynamo-database" {
  source        = "../../modules/database"
  database_name = "my-table-name-dev"
}
I certainly feel that the Pulumi equivalent is more readable and maintainable. On top of that, you may notice the dev file includes details of where to find the state backend. This needs to be repeated in every resource file that we have which really does add to the code bloat of this solution. The Pulumi solution to this, where I can specify my remote state as part of the CircleCI config feels much tidier.
Also, notice how each attribute of the DynamoDB table is defined as a separate field within the database definition. Again, it is these small things that just made the Pulumi solution a preferable approach to my IaC needs.
Conclusion
Ultimately, whatever option you go with for your IaC needs, you’ll be defining your resources as JSON objects (or something similar) and there isn’t too much difference between the available platforms. With Pulumi, we also get access to all the constructs of a real programming language. That means we can easily incorporate conditional resource creation (you might want extra logging resources in a test environment) or create multiples of the same resource using some sort of array mapping.
I’ve barely touched the surface of Pulumi in this post but already I have experienced how much easier defining my infrastructure can be when I use a language that I am familiar with. Not only am I able to produce more readable and maintainable code, but I also have a high level of confidence in how I should structure the code to make it as flexible and reusable as possible.
With regards to the lambda functions and the API gateway, I particularly like the integration between the code and the gateway provisioning. This is similar to what we implemented on the original project, where we used Serverless to define the gateway and its various route handlers. However, just like in the comparison to Terraform, with my Pulumi solution, it feels more like part of my application directly and I have no learning curve for new syntax or technology.
Pulumi isn’t as mature as the other IaC offerings just yet, and as such, doesn’t have quite the level of online content to support it. Much of my learnings came from the Pulumi documentation and blogs directly with little else written about it. This may prove a stumbling block for some more complex problems, but I hope over time, as the adoption level rises, so will the online support.
I definitely think I will be trying to use more of Pulumi in the future. There were several areas during this little example project that I found myself wanting to understand more or even find better ways of solving the problems I encountered so I’ll hopefully find some time to do that as well. | https://blog.scottlogic.com/2020/04/21/starting-with-pulumi.html | CC-MAIN-2021-43 | refinedweb | 2,641 | 52.8 |
12 November 2012 06:27 [Source: ICIS news]
By Fanny Zhang
SINGAPORE (ICIS)--China’s exports grew at double-digits in October with the pace of increase accelerating for the past three months, but the strong growth momentum may not be sustained as its major trading partners continue to struggle with their own economic and financial ills, analysts said on Monday.
The country’s overseas shipments in October totaled $175.6bn (€138.7bn), with the year-on-year growth at 11.6% – the first double-digit growth seen since July, official data showed.
In September, annual exports growth was at 9.9% – rising from 2.7% in August.
The rebound in recent months can be considered a short-term phenomenon and may not continue into the coming months, as the world economy has yet to recover, analysts said.
“Although the October export is better than expected, we think it’s mainly driven by Christmas orders and therefore, not sustainable,” said Zhang Junfeng, senior analyst at Shenzhen-based China Merchants Securities.
“US imports may soften in coming months amid the ‘fiscal cliff’, while [the] eurozone is still contracting,” he said, adding that under these circumstances, Chinese exports recovery will likely lose steam.
China’s commerce minister Chen Deming said the “many uncertainties in [the] international economies” spell “tough times” for Chinese exports in the coming months into next year.
Chen said that China may not be able to achieve its 10% export growth target this year. In the first 10 months of the year, the country’s exports were up by an average of 6.3% from the same period in 2011.
China’s trade surplus in October widened to $32.0bn from September’s $27.7bn, with total imports valued at $143.6bn, up by 2.4% year on year, official data showed.
The country, which is the world's second biggest economy, imported 23.7m tonnes of crude in October – the second highest on record after the 25.5m tonnes registered in May 2012. China is also a major importer of petrochemicals in Asia.
Thread: help!! I'm stuck on something...
help!! I'm stuck on something...
I have the following form:
<form action="taf.php" method="post">
<input type="text" name="from_email">
<input type="text" name="email_1">
<input type="text" name="email_2">
<input type="text" name="email_3">
<input type="submit" name="submit">
</form>
In the PHP script 'taf.php' I want to make sure that all the email addresses are valid. Please tell me if what I have put down is correct!!
PHP Code:
function check_email ($from_email, $email_1, $email_2, $email_3) {
return (ereg('^[-!#$%&\'*+\\./0-9=?A-Z^_`a-z{|}~]+'.'@'.'[-!#$%&\'*+\\/0-9=?A-Z^_`a-z{|}~]+\.'.'[-!#$%&\'*+\\./0-9=?A-Z^_`a-z{|}~]+$',$from_email, $email_1, $email_2, $email_3));
}
if ($submit) {
    if (!check_email($from_email) || !check_email($email_1) || !check_email($email_2) || !check_email($email_3)) {
        echo("One of the email addresses you provided was invalid!");
    }
}
So then I would just have $from_email and $to_email
Well, for starters: by requiring the variables $from_email, $email_1, $email_2, and $email_3 to be there, you are calling your function incorrectly. You'd call it like this:
PHP Code:
if (!check_email($from_email,$email_1,$email_2,$email_3)) {
echo "One of the...";
}
Kevin
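For anyone finding this thread later: the ereg() family is deprecated (and removed entirely in PHP 7), so a simpler way to validate a batch of addresses today is PHP's built-in filter_var() with FILTER_VALIDATE_EMAIL. The function name check_emails() below is just an illustration, not from the original posts:

```php
<?php
// Validate any number of addresses with the built-in email filter.
// filter_var() returns the address on success and false on failure.
function check_emails(array $addresses)
{
    foreach ($addresses as $address) {
        if (filter_var($address, FILTER_VALIDATE_EMAIL) === false) {
            return false; // stop at the first invalid address
        }
    }
    return true;
}

var_dump(check_emails(array('a@example.com', 'b@example.com'))); // bool(true)
var_dump(check_emails(array('a@example.com', 'not-an-email')));  // bool(false)
```

With the form above, you would simply call check_emails(array($from_email, $email_1, $email_2, $email_3)) and show the error message when it returns false.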