🌈 Terminal string styling
Project description
clr is a simple terminal string styling library. Its API is a port of the popular chalk module for JavaScript.
Install
$ pip install clr
Usage
import clr

print(clr.red.bold('Hello world!'))
API
clr.style[.style...](*objects, sep=' ')
Chain styles and call the last one as a method with an argument. Order doesn’t matter, and later styles take precedence in case of a conflict, e.g. clr.red.yellow.green is equivalent to clr.green.
Multiple arguments will be separated by sep, a space by default.
Styles
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source: https://pypi.org/project/clr/
Created on 2010-12-05.10:09:13 by otmarhumbel, last changed 2010-12-24.01:37:07 by pjenvey.
[submitted on behalf of Christian Blichmann]
Calling methods from an embedded Jython script does nothing when
using JSR-223 and Jython 2.5.2rc2, while Jython 2.2.1 works just fine.
- ------------- myscript/ScriptingTest.java -------------
package myscript;

import java.io.InputStream;
import java.io.InputStreamReader;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptingTest {
    public static void main(String[] args) {
        try {
            final ScriptEngineManager manager =
                    new ScriptEngineManager();
            final ScriptEngine engine =
                    manager.getEngineByName("python");
            final InputStream is =
                    ScriptingTest.class.getResourceAsStream(
                            "/myscript/myscript.py");
            engine.eval(new InputStreamReader(is));
        } catch (final Exception e) {
            e.printStackTrace();
        }
    }
}
- ------------- myscript/PythonCallable.java -------------
package myscript;

public interface PythonCallable {
    String getAString();
    void callAVoid();
}
- ------------- myscript/myscript.py -------------
from myscript import PythonCallable as PythonCallable

class MyPythonCallable(PythonCallable):
    def getAString(self):
        return 'A string'

    def callAVoid(self):
        print 'Called a void method'

print 'getAString() returns: %s' % \
    MyPythonCallable().getAString()
print 'callAVoid():'
MyPythonCallable().callAVoid()
- ------------------------------------------------
Using Jython 2.2.1, I get:
$ java -cp .:jython.jar:jython-engine.jar myscript.ScriptingTest
getAString() returns: A string
callAVoid():
Called a void method
The problem here lies in PyScriptEngineScope.__setitem__. All values that pass through it are first converted via value.__tojava__(Object.class).
In this case, the __tojava__ call is made on the user-defined MyPythonCallable class. Its __tojava__ returns its Proxy object here, which is the cause of the havoc we're seeing.
I'm not sure who's at fault -- Nick, what's the point of the __tojava__(Object.class) call here? Is it necessary in all cases?
We either need to add special cases to the scope's __setitem__, or fix the __tojava__ result for user-defined classes that have an underlying proxy. Might the latter break anything else?
I never really understood the full implications of __tojava__ so I think I was probably just copying something else (perhaps the previous JSR 223 implementation). Please feel free to change it as you wish.
It seems the original jsr223 impl had a similar issue to this one:
Its fix (which was really for Jython 2.2) basically added this check to __setitem__:

if (!(obj instanceof PyClass)) {
    obj = JythonScriptEngine.py2java(value);
}

This isn't the most efficient fix in the world (adding an instanceof check for any kind of assignment). But whatever -- I propose the attached patch with a similar fix.
Their JythonScope class with their fix is here:
I mean, we're already doing a __tojava__(Object.class) for every assignment so an additional instanceof check shouldn't matter
fix+test applied in r7175

Source: http://bugs.jython.org/issue1681
, define some
terms, give some examples, and then point to the Report for details. We suggest, however, that the
reader completely ignore the details until the Gentle Introduction has been completely read.
[References such as "§2.1" refer to sections in the Report.]
Haskell is a typeful programming language:¹ types are pervasive, and the newcomer is best off
becoming well aware of the full power and complexity of Haskell's type system from the outset. For
those whose only experience is with relatively "untypeful" languages such as Perl, Tcl, or Scheme,
this may be a difficult adjustment; for those familiar with Java, C, Modula, or even ML, the
adjustment should be easier but still not insignificant, since Haskell's type system is different and
somewhat richer than most. In any case, "typeful programming" is part of the Haskell programming
experience, and cannot be avoided.
2 Values, Types, and Other Goodies

Because Haskell is a purely functional language, all computations are done via the evaluation of
expressions (syntactic terms) to yield values. Every value has an associated type. Examples of
type expressions include the atomic types Integer (infinite-precision
integers), Char (characters), Integer->Integer (functions mapping Integer to Integer), as well
as the structured types [Integer] (homogeneous lists of integers) and (Char,Integer) (character,
integer pairs).
All Haskell values are defined by a series of equations. For example, the function
inc can be defined by the single equation:

inc n                   = n+1

An equation is an example of a declaration. Another kind of declaration is a type signature
declaration (§4.4.1), with which we can declare an explicit typing for inc:

inc                     :: Integer -> Integer

We will have much more to say about function definitions in Section 3.
For pedagogical purposes, when we wish to indicate that an expression e1 evaluates, or
"reduces," to another expression or value e2, we will write:

e1   ⇒   e2

For example, note that:

inc (inc 3)   ⇒   5
Haskell's static type system defines the formal relationship between types and values (§4.1).
The main advantage of statically typed languages is well known: all type errors are detected at
compile-time. Not all errors are caught by the type system;
an expression such as 1/0 is typable but its evaluation will result in an error at execution time.
Still, the type system finds many program errors at compile time, aids the user in reasoning about
programs, and also permits a compiler to generate more efficient code (for example, no run-time
type tags or tests are required).
The type system also ensures that user-supplied type signatures are correct. In fact, Haskell's
type system is powerful enough to allow us to avoid writing any type signatures at all;² we say
that the type system infers the correct types for us. Nevertheless, judicious placement of type
signatures such as that we gave for inc is a good idea, since type signatures are a very effective
form of documentation and help bring programming errors to light.

[The reader will note that we have capitalized identifiers that denote specific types, such as
Integer and Char, but not identifiers that denote values, such as inc. This is not just a convention:
it is enforced by Haskell's lexical syntax. In fact, the case of the other characters matters, too: foo,
fOo, and fOO are all distinct identifiers.]
2.1 Polymorphic Types

Haskell also incorporates polymorphic types: types that are universally quantified in some way
over all types. Polymorphic type expressions essentially describe families of types. For example,
(∀a)[a] is the family of types consisting of, for every type a, the type of lists of a. Lists of
integers (e.g. [1,2,3]), lists of characters (['a','b','c']), even lists of lists of integers, etc., are
all members of this family. (Note, however, that [2,'b'] is not a valid example, since there is no
single type that contains both 2 and 'b'.)

[Footnote 2: With a few exceptions to be described later.]
[Identifiers such as a above are called type variables, and are uncapitalized to distinguish them
from specific types such as Int. Furthermore, since Haskell has only universally quantified types,
there is no need to explicitly write out the symbol for universal quantification, and thus we simply
write [a] in the example above. In other words, all type variables are implicitly universally
quantified.]
Lists are a commonly used data structure in functional languages, and are a good vehicle for
explaining the principles of polymorphism. The list [1,2,3] in Haskell is actually shorthand for
the list 1:(2:(3:[])), where [] is the empty list and : is the infix operator that adds its first
argument to the front of its second argument (a list).³ Since : is right associative, we can also
write this list as 1:2:3:[].
As an example of a user-defined function that operates on lists, consider the problem of counting
the number of elements in a list:

length                  :: [a] -> Integer
length []               =  0
length (x:xs)           =  1 + length xs
This definition is almost self-explanatory. We can read the equations as saying: "The length of the
empty list is 0, and the length of a list whose first element is x and remainder is xs is 1 plus the
length of xs." The left-hand sides of the equations contain patterns such as [] and x:xs; in a
function application these patterns are matched against actual parameters in a fairly intuitive way
([] only matches the empty list, and x:xs will successfully match any list with at least one element,
binding x to the first element and xs to the rest of the list). If the match succeeds,
the right-hand side is evaluated and returned as the result of the application. If it fails, the next
equation is tried, and if all equations fail, an error results.
Defining functions by pattern matching is quite common in Haskell, and the user should become
familiar with the various kinds of patterns that are allowed; we will return to this issue in
Section 4.

The length function is also an example of a polymorphic function. It can be applied to a list
containing elements of any type, for example [Integer], [Char], or [[Integer]]:

length [1,2,3]          ⇒   3
length ['a','b','c']    ⇒   3
length [[1],[2],[3]]    ⇒   3

Here are two other useful polymorphic functions on lists that will be used later. Function head
returns the first element of a list, function tail returns all but the first.

[Footnote 3: : and [] are like Lisp's cons and nil, respectively.]
head                    :: [a] -> a
head (x:xs)             =  x

tail                    :: [a] -> [a]
tail (x:xs)             =  xs

Unlike length, these functions are not defined for all possible values of their argument. A runtime
error occurs when these functions are applied to an empty list.
With polymorphic types, we find that some types are in a sense strictly more general than
others in the sense that the set of values they define is larger. For example, the type [a] is more
general than [Char]: the latter type can be derived from the former by a suitable substitution
for a. With regard to this generalization ordering, Haskell's type system possesses two important
properties: first, every well-typed expression is guaranteed to have a unique principal type, and
second, the principal type can be inferred automatically (§4.1.3). In
comparison to a monomorphically typed language such as C, the reader will find that polymorphism
improves expressiveness, and type inference lessens the burden of types on the programmer.

An expression's or function's principal type is the least general type that, intuitively, "contains
all instances of the expression". For example, the principal type of head is [a]->a; the types
[b]->a, a->a, or even a are correct, but too general, whereas something like [Integer]->Integer
is too specific. The existence of unique principal types is the hallmark feature of the Hindley-Milner
type system, which forms the basis of the type systems of Haskell, ML, Miranda,⁴ and several other
(mostly functional) languages.
2.2 User-Defined Types

We can define our own types in Haskell using a data declaration, which we introduce via a series
of examples (§4.2.1).

An important predefined type in Haskell is that of truth values:

data Bool               = False | True

The type being defined here is Bool, and it has exactly two values: True and False. Type Bool is
an example of a (nullary) type constructor, and True and False are (also nullary) data constructors
(or just constructors, for short).

Similarly, we might wish to define a color type:

data Color              = Red | Green | Blue | Indigo | Violet

Both Bool and Color are examples of enumerated types, since they consist of a finite number of
nullary data constructors.

Here is an example of a type with just one data constructor:

data Point a            = Pt a a

Because of the single constructor, a type like Point is often called a tuple type, since it is essentially
just a cartesian product (in this case binary) of other types.⁵ In contrast, multi-constructor types,
such as Bool and Color, are called (disjoint) union or sum types.

[Footnote 4: "Miranda" is a trademark of Research Software, Ltd.]
More importantly, however, Point is an example of a polymorphic type: for any type t, it
defines the type of cartesian points that use t as the coordinate type. For example, the following
typings are valid:

Pt 2.0 3.0              :: Point Float
Pt 'a' 'b'              :: Point Char
Pt True False           :: Point Bool

On the other hand, an expression such as Pt 'a' 1 is ill-typed because 'a' and 1 are of different
types.

[Type constructors such as Point and data constructors such as Pt are in separate namespaces.
This allows the same name to be used for both a type constructor and a data constructor, as in
data Point a = Point a a. While this may seem a little confusing at first, it serves to make
the link between a type and its data constructor more obvious.]
2.2.1 Recursive Types

Types can also be recursive, as in the type of binary trees:

data Tree a             = Leaf a | Branch (Tree a) (Tree a)

Here we have defined a polymorphic binary tree type whose elements are either leaf nodes
containing a value of type a, or internal nodes ("branches") containing (recursively) two sub-trees.
When reading data declarations such as this, remember again that Tree is a type constructor,
whereas Branch and Leaf are data constructors. Aside from establishing a connection between
these constructors, the above declaration is essentially defining the following types for Branch and
Leaf:

Branch                  :: Tree a -> Tree a -> Tree a
Leaf                    :: a -> Tree a

[Footnote 5: Tuples are somewhat like records in other languages.]
With this example we have defined a type sufficiently rich to allow defining some interesting
(recursive) functions that use it. For example, suppose we wish to define a function fringe that
returns a list of all the elements in the leaves of a tree from left to right. It's usually helpful to write
down the type of new functions first; in this case we see that the type should be Tree a -> [a].
That is, fringe is a polymorphic function that, for any type a, maps trees of a into lists of a. A
suitable definition follows:

fringe                     :: Tree a -> [a]
fringe (Leaf x)            =  [x]
fringe (Branch left right) =  fringe left ++ fringe right

Here ++ is the infix operator that concatenates two lists (its full definition will be given in Section
9.1). As with the length example given earlier, the fringe function is defined using pattern
matching, except that here we see patterns involving user-defined constructors: Leaf and Branch.
[Note that the formal parameters are easily identified as the ones beginning with lower-case letters.]
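To see fringe at work, here is a small runnable sketch; the sample tree below is our own invention, not from the text:

```haskell
-- The Tree type and fringe function as defined in the text.
data Tree a = Leaf a | Branch (Tree a) (Tree a)

fringe :: Tree a -> [a]
fringe (Leaf x)            = [x]
fringe (Branch left right) = fringe left ++ fringe right

-- A sample tree of our own:
--        Branch
--        /    \
--    Branch   Leaf 3
--    /    \
-- Leaf 1  Leaf 2
sample :: Tree Integer
sample = Branch (Branch (Leaf 1) (Leaf 2)) (Leaf 3)

-- fringe sample  =>  [1,2,3]
```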
2.3 Type Synonyms

For convenience, Haskell provides a way to define type synonyms; i.e. names for commonly used
types. Type synonyms are created using a type declaration (§4.2.2). Here are several examples:

type String             = [Char]
type Person             = (Name,Address)
type Name               = String
data Address            = None | Addr String

Type synonyms do not define new types, but simply give new names for existing types. For
example, the type Person -> Name is precisely equivalent to (String,(String,Address)) -> String.
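A brief sketch of these synonyms in use; the john value and nameOf function are our own illustrations, not from the text:

```haskell
-- The synonyms and the Address type from the text.
type Person  = (Name, Address)
type Name    = String
data Address = None | Addr String

-- Because Person is just a synonym for (String, Address),
-- ordinary pair functions such as fst work on it directly.
john :: Person
john = ("John", Addr "10 Main St")

nameOf :: Person -> Name
nameOf = fst

-- nameOf john  =>  "John"
```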
2.4 Built-in Types Are Not Special

Earlier we introduced several "built-in" types such as lists, tuples, integers, and characters. We have
also shown how new user-defined types can be defined. Aside from special syntax, are the built-in
types in any way more special than the user-defined ones? The answer is no. We can emphasize
this point by considering what the type declarations would look like if we had to write them
ourselves when defining them. For example, the Char type might be written as:
data Char               = 'a' | 'b' | 'c' | ...         -- This is not valid
                        | 'A' | 'B' | 'C' | ...         -- Haskell code!
                        | '1' | '2' | '3' | ...
                        ...

These constructor names are not syntactically valid; to fix them we would have to write
something like Ca, Cb, Cc, and so on. In any case, writing "pseudo-Haskell" code in this way
helps us to see through the special syntax: Char is just an enumerated type consisting of a large
number of nullary constructors, and we can pattern-match against characters in function
definitions just as with the constructors of any user-defined type. [This example also demonstrates
the use of comments in Haskell; the characters -- and all subsequent characters to the end of the
line are ignored. Haskell also permits nested comments which have the form {-...-} and can appear
anywhere (§2.2).]
Similarly, we could define Int (fixed precision integers) and Integer by:

data Int                = -65532 | ... | -1 | 0 | 1 | ... | 65532   -- more pseudo-code
data Integer            = ... -2 | -1 | 0 | 1 | 2 ...

where -65532 and 65532, say, are the minimum and maximum fixed precision integers for a given
implementation. Int is a much larger enumeration than Char, but it's still finite! In contrast, the
pseudo-code for Integer is intended to convey an infinite enumeration.
Tuples are also easy to define playing this game:

data (a,b)              = (a,b)                         -- more pseudo-code
data (a,b,c)            = (a,b,c)
data (a,b,c,d)          = (a,b,c,d)
  .                          .
  .                          .
  .                          .

Each declaration above defines a tuple type of a particular length, with (...) playing a role in
both the expression syntax (as data constructor) and type-expression syntax (as type constructor).
The vertical dots after the last declaration are intended to convey an infinite number of such
declarations, reflecting the fact that tuples of all lengths are allowed in Haskell.
Lists are also easily handled, and more interestingly, they are recursive:

data [a]                = [] | a : [a]                  -- more pseudo-code

We can now see clearly what we described about lists earlier: [] is the empty list, and : is the infix
list constructor; thus [1,2,3] must be equivalent to the list 1:2:3:[]. (: is right associative.)
The type of [] is [a], and the type of : is a->[a]->[a].

[The way ":" is defined here is actually legal syntax; infix constructors are permitted in data
declarations, and are distinguished from infix operators (for pattern-matching purposes) by the fact
that they must begin with a ":" (a property trivially satisfied by ":").]

At this point the reader should note carefully the differences between tuples and lists, which
the above definitions make abundantly clear. In particular, note the recursive nature of the list
type whose elements are homogeneous and of arbitrary length, and the non-recursive nature of a
(particular) tuple type whose elements are heterogeneous and of fixed length. The typing rules for
tuples and lists should now also be clear: for (e1,e2,...,en), n >= 2, if ti is the type of ei, then the
type of the tuple is (t1,t2,...,tn); for [e1,e2,...,en], n >= 0, each ei must have the same type t,
and the type of the list is [t].
2.4.1 List Comprehensions and Arithmetic Sequences

Aside from the constructors for lists just discussed, Haskell provides an expression known as a
list comprehension, best explained by example:

[ f x | x <- xs ]

This expression can intuitively be read as "the list of all f x such that x is drawn from xs."
Haskell also has special syntax for arithmetic sequences, best explained by a series of examples:

[1..10]     ⇒   [1,2,3,4,5,6,7,8,9,10]
[1,3..10]   ⇒   [1,3,5,7,9]
[1,3..]     ⇒   [1,3,5,7,9, ...      (infinite sequence)

More will be said about arithmetic sequences in Section 8.2, and "infinite lists" in Section 3.4.
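A short runnable sketch of arithmetic sequences feeding a list comprehension; the binding names below are ours:

```haskell
-- An arithmetic sequence with a step of 2, as in the text.
oddsUpTo10 :: [Integer]
oddsUpTo10 = [1,3..10]                      -- [1,3,5,7,9]

-- A list comprehension drawing its elements from that sequence.
squaresOfOdds :: [Integer]
squaresOfOdds = [ x*x | x <- [1,3..10] ]    -- [1,9,25,49,81]
```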
2.4.2 Strings

As another example of syntactic sugar for built-in types, we note that the literal string "hello" is
actually shorthand for the list of characters ['h','e','l','l','o']. Indeed, the type of "hello"
is String, where String is a predefined type synonym (that we gave as an earlier example):

type String             = [Char]

This means we can use predefined polymorphic list functions to operate on strings. For example:

"hello" ++ " world"     ⇒   "hello world"

3 Functions

Since Haskell is a functional language, one would expect functions to play a major role, and indeed
they do. In this section, we look at several aspects of functions in Haskell.
First, consider this definition of a function which adds its two arguments:

add                     :: Integer -> Integer -> Integer
add x y                 =  x + y

This is an example of a curried function.⁶ An application such as add e1 e2 is equivalent to
(add e1) e2, since function application associates to the left: applying add to one argument yields
a new function which is then applied to the second argument. Using add, we can define
inc in a different way from earlier:

inc                     = add 1

This is an example of the partial application of a curried function. Let's consider a case in which
it's useful to pass a function as an argument. The well-known map function is a perfect example:

map                     :: (a->b) -> [a] -> [b]
map f []                =  []
map f (x:xs)            =  f x : map f xs
[Function application has higher precedence than any infix operator, and thus the right-hand side
of the second equation parses as (f x) : (map f xs).] The map function is polymorphic and
its type indicates clearly that its first argument is a function; note also that the two a's must be
instantiated with the same type (likewise for the b's). As an example of the use of map, we can
increment the elements in a list:

map (add 1) [1,2,3]     ⇒   [2,3,4]
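The interplay of currying, partial application, and map can be sketched in one runnable piece; the binding names are ours:

```haskell
-- add and inc as in the text.
add :: Integer -> Integer -> Integer
add x y = x + y

-- Partial application: add 1 is itself a function Integer -> Integer.
inc :: Integer -> Integer
inc = add 1

incremented :: [Integer]
incremented = map inc [1,2,3]       -- [2,3,4]

-- map itself is curried, so (map inc) is a list-to-list function.
incrementAll :: [Integer] -> [Integer]
incrementAll = map inc
```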
[Footnote 6: The name curry derives from the person who popularized the idea: Haskell Curry.
To get the effect of an uncurried function, we could use a tuple, as in:

add (x,y)               = x + y

But then we see that this version of add is really just a function of one argument!]
These examples demonstrate the first-class nature of functions, which when used in this way
are usually called higher-order functions.
3.1 Lambda Abstractions

Instead of using equations to define functions, we can also define them "anonymously" via a
lambda abstraction. For example, a function equivalent to inc could be written as \x -> x+1.
Similarly, the function add is equivalent to \x -> \y -> x+y; nested lambda abstractions such as
this may be written using the shorthand notation \x y -> x+y. In fact, the equations:

inc x                   = x+1
add x y                 = x+y

are really shorthand for:

inc                     = \x   -> x+1
add                     = \x y -> x+y

We will have more to say about such equivalences later.

In general, given that x has type t1 and exp has type t2, then \x->exp has type t1->t2.
3.2 Infix Operators

Infix operators are really just functions, and can also be defined using equations. For example, here
is a definition of a list concatenation operator:

(++)                    :: [a] -> [a] -> [a]
[]     ++ ys            =  ys
(x:xs) ++ ys            =  x : (xs++ys)

[Lexically, infix operators consist entirely of "symbols," as opposed to normal identifiers which are
alphanumeric (§2.4). Haskell has no prefix operators, with the exception of minus (-), which is
both infix and prefix.]
As another example, an important infix operator on functions is that for function composition:

(.)                     :: (b->c) -> (a->b) -> (a->c)
f . g                   =  \ x -> f (g x)
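A small runnable sketch of composition in use; the helper names are our own:

```haskell
-- A stand-alone version of the text's (.) definition, renamed to
-- avoid clashing with the Prelude's operator.
compose :: (b -> c) -> (a -> b) -> (a -> c)
compose f g = \x -> f (g x)

double :: Integer -> Integer
double x = x * 2

-- Built with the Prelude's (.): first add one, then double.
incThenDouble :: Integer -> Integer
incThenDouble = double . (+1)

-- incThenDouble 3  =>  8
```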
3.2.1 Sections

Since infix operators are really just functions, it makes sense to be able to partially apply them as
well. In Haskell the partial application of an infix operator is called a section. For example:

(x+)    ≡    \y -> x+y
(+y)    ≡    \x -> x+y
(+)     ≡    \x y -> x+y
[The parentheses are mandatory.]
The last form of section given above essentially coerces an infix operator into an equivalent
functional value, and is handy when passing an infix operator as an argument to a function, as
in map (+) [1,2,3] (the reader should verify that this returns a list of functions!). It is also
necessary when giving a function type signature, as in the examples of (++) and (.) given earlier.
We can now see that add defined earlier is just (+), and inc is just (+1)! Indeed, these
definitions would do just fine:

inc                     = (+ 1)
add                     = (+)
We can coerce an infix operator into a functional value, but can we go the other way? Yes: we
simply enclose an identifier bound to a functional value in backquotes. For example, x `add` y
is the same as add x y.⁷ Some functions read better this way. An example is the predefined list
membership predicate elem; the expression x `elem` xs can be read intuitively as "x is an element
of xs."
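A brief runnable sketch of backquote syntax; add is as in the text, and the bindings are our own:

```haskell
add :: Integer -> Integer -> Integer
add x y = x + y

-- Backquotes turn an ordinary function into an infix operator.
seven :: Integer
seven = 3 `add` 4            -- same as add 3 4

-- The Prelude's elem reads naturally in infix form.
hasThree :: Bool
hasThree = 3 `elem` [1,2,3]
```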
[There are some special rules regarding sections involving the prefix/infix operator -; see
(§3.5, §3.4).]

At this point, the reader may be confused at having so many ways to define a function! The
decision to provide these mechanisms partly reflects historical conventions, and partly reflects the
desire for consistency (for example, in the treatment of infix vs. regular functions).
3.2.2 Fixity Declarations

A fixity declaration can be given for any infix operator or constructor (including those made from
ordinary identifiers, such as `elem`). This declaration specifies a precedence level from 0 to 9 (with
9 being the strongest; normal application is assumed to have a precedence level of 10), and left-,
right-, or non-associativity. For example, the fixity declarations for ++ and . are:

infixr 5 ++
infixr 9 .

Both of these specify right-associativity, the first with a precedence level of 5, the other 9. Left
associativity is specified via infixl, and non-associativity by infix. Also, the fixity of more than
one operator may be specified with the same fixity declaration. If no fixity declaration is given for
a particular operator, it defaults to infixl 9. (See §5.9 for a detailed definition of the associativity
rules.)
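As a sketch, a user-defined operator can carry its own fixity declaration; the operator (&+) below is our own invention for illustration:

```haskell
-- A user-defined infix operator with an explicit fixity declaration.
infixl 6 &+

(&+) :: Integer -> Integer -> Integer
x &+ y = x + y

-- With infixl, 1 &+ 2 &+ 3 parses as (1 &+ 2) &+ 3.
total :: Integer
total = 1 &+ 2 &+ 3          -- 6
```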
[Footnote 7: Note carefully that add is enclosed in backquotes, not apostrophes as used in the
syntax of characters; i.e. 'f' is a character, whereas `f` is an infix operator. Fortunately, most
ASCII terminals distinguish these much better than the font used in this manuscript.]

3.3 Functions are Non-strict

Suppose bot is defined by:

bot                     = bot

In other words, bot is a non-terminating expression. Abstractly, we denote the value of a
non-terminating expression as ⊥ (read "bottom"). Expressions that result in some kind of a run-time
error, such as 1/0, also have this value. Such an error is not recoverable: programs will not continue
past these errors. Errors encountered by the I/O system, such as an end-of-file error, are recoverable
and are handled in a different manner.

A function f is said to be strict if, when applied to a nonterminating expression, it also fails to
terminate. In other words, f is strict iff the value of f bot is ⊥. For most programming languages,
all functions are strict. But this is not so in Haskell. As a simple example, consider const1, the
constant 1 function, defined by:

const1 x                = 1

The value of const1 bot in Haskell is 1. Operationally speaking, since const1 does not "need" the
value of its argument, it never attempts to evaluate it, and thus never gets caught in a
nonterminating computation. Non-strict functions are extremely useful; an important example of
their use is the definition of a (possibly) infinite data structure.
Another way of explaining non-strict functions is that Haskell computes using definitions rather
than the assignments found in traditional languages. Read a declaration such as

v                       = 1/0

as "define v as 1/0" instead of "compute 1/0 and store the result in v". Only if the value (definition)
of v is needed will the division by zero error occur. By itself, this declaration does not imply
any computation. Programming using assignments requires careful attention to the ordering of
the assignments: the meaning of the program depends on the order in which the assignments are
executed. Definitions, in contrast, are much simpler: they can be presented in any order without
affecting the meaning of the program.
3.4 "Infinite" Data Structures

[Figure 1: Circular Fibonacci Sequence]

One advantage of the non-strict nature of Haskell is that data constructors are non-strict, too.
Non-strict constructors permit the definition of (conceptually) infinite data structures. Here is
an infinite list of ones:

ones                    = 1 : ones
Perhaps more interesting is the function numsFrom:

numsFrom n              = n : numsFrom (n+1)

Thus numsFrom n is the infinite list of successive integers beginning with n. From it we can construct
an infinite list of squares:

squares                 = map (^2) (numsFrom 0)

(Note the use of a section; ^ is the infix exponentiation operator.)

Of course, eventually we expect to extract some finite portion of the list for actual computation,
and there are lots of predefined functions in Haskell that do this sort of thing: take, takeWhile,
filter, and others. For example, take removes the first n elements from a list:

take 5 squares          ⇒   [0,1,4,9,16]
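Putting these pieces together in a runnable form (the binding names are ours; laziness is what lets us consume finite prefixes of the infinite lists):

```haskell
-- numsFrom and squares as in the text.
numsFrom :: Integer -> [Integer]
numsFrom n = n : numsFrom (n+1)

squares :: [Integer]
squares = map (^2) (numsFrom 0)

-- take extracts a fixed-length prefix...
firstSquares :: [Integer]
firstSquares = take 5 squares               -- [0,1,4,9,16]

-- ...while takeWhile stops at the first element failing the test.
smallSquares :: [Integer]
smallSquares = takeWhile (< 20) squares     -- [0,1,4,9,16]
```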
The definition of ones above is an example of a circular list. In most circumstances laziness
has an important impact on efficiency, since an implementation can be expected to implement the
list as a true circular structure, thus saving space.

For another example of the use of circularity, the Fibonacci sequence can be computed efficiently
as the following infinite sequence:

fib                     = 1 : 1 : [ a+b | (a,b) <- zip fib (tail fib) ]

where zip is a Standard Prelude function that returns the pairwise interleaving of its two list
arguments:

zip (x:xs) (y:ys)       = (x,y) : zip xs ys
zip  xs     ys          = []

Note how fib, an infinite list, is defined in terms of itself, as if it were "chasing its tail." Indeed,
we can draw a picture of this computation as shown in Figure 1.

For another application of infinite lists, see Section 4.4.
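The circular definition really does run; a minimal sketch (the firstFibs name is ours):

```haskell
-- The circular fib definition from the text.
fib :: [Integer]
fib = 1 : 1 : [ a+b | (a,b) <- zip fib (tail fib) ]

-- take extracts a finite prefix of the infinite sequence.
firstFibs :: [Integer]
firstFibs = take 7 fib       -- [1,1,2,3,5,8,13]
```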
3.5 The Error Function

Haskell has a built-in function called error whose type is String->a. This is a somewhat odd
function: from its type it looks as if it is returning a value of a polymorphic type about which it
knows nothing, since it never receives a value of that type as an argument! In fact, there is one
value "shared" by all types: ⊥. Semantically, that is exactly the value that is always returned by
error (recall that all errors have value ⊥). However, we can expect that a reasonable
implementation will print the string argument for diagnostic purposes. Thus this function is useful
when we wish to terminate a program when something has "gone wrong." For example, the actual
definition of head taken from the Standard Prelude is:

head (x:xs)             =  x
head []                 =  error "head{PreludeList}: head []"

4 Case Expressions and Pattern Matching
Earlier we gave several examples of pattern matching in defining functions, for example length
and fringe. In this section we will look at the pattern-matching process in greater detail (§3.17).⁸

Patterns are not "first-class;" there is only a fixed set of different kinds of patterns. We have
already seen several examples of data constructor patterns; both length and fringe defined earlier
use such patterns, the former on the constructors of a "built-in" type (lists), the latter on a
user-defined type (Tree). Indeed, matching is permitted using the constructors of any type,
user-defined or not. Technically speaking, formal parameters⁹ are also patterns; it's just that they
never fail to match a value. As a "side effect" of the successful match, the formal parameter is
bound to the value it is being matched against. For this reason patterns in any one equation are
not allowed to have more than one occurrence of the same formal parameter (a property called
linearity §3.17, §3.3, §...).

[Footnote 8: Pattern matching in Haskell is different from that found in logic programming
languages such as Prolog; in particular, it can be viewed as "one-way" matching, whereas Prolog
allows "two-way" matching (via unification), along with implicit backtracking in its evaluation
mechanism.]

[Footnote 9: The Report calls these variables.]
As-patterns. Sometimes it is convenient to name a pattern for use on the right-hand side of an
equation. For example, a function that duplicates the first element in a list might be written as:

f (x:xs)                = x:x:xs

(Recall that ":" associates to the right.) Note that x:xs appears both as a pattern on the left-hand
side, and an expression on the right-hand side. To improve readability, we might prefer to write
x:xs just once, which we can achieve using an as-pattern as follows:¹⁰

f s@(x:xs)              = x:s

Technically speaking, as-patterns always result in a successful match, although the sub-pattern (in
this case x:xs) could, of course, fail.
Wild-cards. Another common situation is matching against a value we really care nothing about.
For example, the functions head and tail defined in Section 2.1 can be rewritten as:

head (x:_)              = x
tail (_:xs)             = xs

in which we have "advertised" the fact that we don't care what a certain part of the input is.
Each wild-card independently matches anything, but in contrast to a formal parameter, each binds
nothing; for this reason more than one is allowed in an equation.
4.1 Pattern-Matching Semantics

So far we have discussed how individual patterns are matched, how some are refutable, some are
irrefutable, etc. But what drives the overall process? In what order are the matches attempted?
What if none succeeds?

Pattern matching can either fail, succeed or diverge. A successful match binds the formal
parameters in the pattern. Divergence occurs when a value needed by the pattern contains an
error (⊥). The matching process itself occurs "top-down, left-to-right." Failure of a pattern
anywhere in one equation results in failure of the whole equation, and the next equation is then
tried; if all equations fail, the value of the function application is ⊥, and results in a run-time error.
For example, if [1,2] is matched against [0,bot], then 1 fails to match 0, so the result is a failed
match. (Recall that bot, defined earlier, is a variable bound to ⊥.) But if [1,2] is matched
against [bot,0], then matching 1 against bot causes divergence (i.e. ⊥).

The other twist to this set of rules is that top-level patterns may also have a boolean guard, as
in this definition of a function that forms an abstract version of a number's sign:

sign x | x > 0          = 1
       | x == 0         = 0
       | x < 0          = -1

Note that a sequence of guards may be provided for the same pattern; as with patterns, they are
evaluated top-down, and the first that evaluates to True results in a successful match.
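The guard rules can be exercised directly (the results binding below is our own):

```haskell
-- sign as defined in the text; guards are tried top-down.
sign :: Integer -> Integer
sign x | x > 0     = 1
       | x == 0    = 0
       | x < 0     = -1

results :: [Integer]
results = map sign [-5, 0, 9]    -- [-1,0,1]
```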
[Footnote 10: Another advantage to doing this is that a naive implementation might completely
reconstruct x:xs rather than re-use the value being matched against.]
4.2 An Example
The pattern-matching rules can have subtle effects on the meaning of functions. For example,
consider this definition of take:

take  0     _           =  []
take  _     []          =  []
take  n     (x:xs)      =  x : take (n-1) xs

and this slightly different version (the first two equations have been reversed):

take1 _     []          =  []
take1 0     _           =  []
take1 n     (x:xs)      =  x : take1 (n-1) xs

Now note the following:

take  0 bot             ⇒   []
take1 0 bot             ⇒   ⊥
take  bot []            ⇒   ⊥
take1 bot []            ⇒   []

We see that take is "more defined" with respect to its second argument, whereas take1 is more
defined with respect to its first. It is difficult to say in this case which definition is better. Just
remember that in certain applications, it may make a difference. (The Standard Prelude includes
a definition corresponding to take.)
4.3 Case Expressions

Pattern matching provides a way to "dispatch control" based on structural properties of a value.
In many circumstances we don't wish to define a function every time we need to do this, but
so far we have only shown how to do pattern matching in function definitions. Haskell's case
expression provides a way to solve this problem. Indeed, the meaning of pattern matching in
function definitions is specified in the Report in terms of case expressions, which are considered
more primitive. In particular, a function definition of the form:

f p11 ... p1k = e1
...
f pn1 ... pnk = en

where each pij is a pattern, is semantically equivalent to:

f x1 x2 ... xk = case (x1, ..., xk) of
                   (p11, ..., p1k) -> e1
                   ...
                   (pn1, ..., pnk) -> en

where the xi are new identifiers. (For a more general translation that includes guards, see §4.4.2.)
For example, the definition of take given earlier is equivalent to:
take m ys               = case (m,ys) of
                            (0,_)     ->  []
                            (_,[])    ->  []
                            (n,x:xs)  ->  x : take (n-1) xs
A point not made earlier is that, for type correctness, the types of the right-hand sides of a case
expression or set of equations comprising a function definition must all be the same; more precisely,
they must all share a common principal type.

The pattern-matching rules for case expressions are the same as we have given for function
definitions, so there is really nothing new to learn here, other than to note the convenience that
case expressions offer. Indeed, there's one use of a case expression that is so common that it has
special syntax: the conditional expression. In Haskell, conditional expressions have the familiar
form:

if e1 then e2 else e3

which is really short-hand for:

case e1 of True  -> e2
           False -> e3

From this expansion it should be clear that e1 must have type Bool, and e2 and e3 must have the
same (but otherwise arbitrary) type. In other words, if-then-else when viewed as a function has
type Bool->a->a->a.
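The equivalence can be sketched with a small function written both ways; the function names are our own:

```haskell
-- Absolute value with if-then-else...
absIf :: Integer -> Integer
absIf x = if x >= 0 then x else -x

-- ...and the same function via the case expansion given in the text.
absCase :: Integer -> Integer
absCase x = case x >= 0 of
              True  -> x
              False -> -x
```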
4.4 Lazy Patterns

There is one other kind of pattern allowed in Haskell. It is called a lazy pattern, and has the form
~pat. Lazy patterns are irrefutable: matching a value v against ~pat always succeeds, regardless
of pat. Operationally speaking, if an identifier in pat is later "used" on the right-hand-side, it will
be bound to that portion of the value that would result if v were to successfully match pat, and ⊥
otherwise.
Lazy patterns are useful in contexts where infinite data structures are being defined recursively.
For example, infinite lists are an excellent vehicle for writing simulation programs, and in this
context the infinite lists are often called streams. Consider the simple case of simulating the
interactions between a server process server and a client process client, where client sends a
sequence of requests to server, and server replies to each request with some kind of response.
This situation is shown pictorially in Figure 2. (Note that client also takes an initial message as
argument.) Using streams to simulate the message sequences, the Haskell code corresponding to
this diagram is:

reqs                    = client init resps
resps                   = server reqs

[Figure 2: Client-Server Simulation]

These recursive equations are a direct lexical transliteration of the diagram. Unfortunately, this
program has a serious problem: it will not generate any output! The problem is that client, as
used in the recursive setting of reqs and resps, attempts a match on the response list before it has
submitted its first request! In other words, the pattern matching is being done
"too early." One way to fix this is to redefine client as follows:

client init ~(resp:resps) = init : client (next resp) resps

The use of the lazy pattern here means that the match immediately succeeds, allowing the
first response to be generated; the engine is now "primed", and the recursion takes care of the rest.
As an example of this program in action, if we define:

init                    = 0
next resp               = resp
process req             = req+1

then we see that:

take 10 reqs            ⇒   [0,1,2,3,4,5,6,7,8,9]

As another example of the use of lazy patterns, consider the definition of the Fibonacci sequence
given earlier, rewritten using an as-pattern:

fib@(1:tfib)            = 1 : 1 : [ a+b | (a,b) <- zip fib tfib ]

This version of fib has the (small) advantage of not using tail on the right-hand side, since it is
available in "destructured" form on the left-hand side as tfib. [This kind of equation is called a
pattern binding because it is a top-level equation in which the entire left-hand side is a pattern;
i.e. both fib and tfib become bound within the scope of the declaration.]
Now, using the same reasoning as earlier, we should be led to believe that this program will
not generate any output. Curiously, however, it does, and the reason is simple: in Haskell, pattern
bindings are assumed to have an implicit ~ in front of them, reflecting the most common behavior
expected of pattern bindings, and avoiding some anomalous situations which are beyond the scope
of this tutorial. Thus we see that lazy patterns play an important role in Haskell, if only implicitly.
4.5 Lexical Scoping and Nested Forms
It is often desirable to create a nested scope within an expression, for the purpose of creating local
bindings not seen elsewhere, i.e. some kind of "block-structuring" form. In Haskell there are two
ways to achieve this:
Let Expressions. Haskell's let expressions are useful whenever a nested set of bindings is required. As a simple example, consider:

let y   = a*b
    f x = (x+y)/y
in f c + f d
The set of bindings created by a let expression is mutually recursive, and pattern bindings are
treated as lazy patterns (i.e. they carry an implicit ~). The only kind of declarations permitted
are type signatures, function bindings, and pattern bindings.
Where Clauses. Sometimes it is convenient to scope bindings over several guarded equations,
which requires a where clause:

f x y | y>z  = ...
      | y==z = ...
      | y<z  = ...
      where z = x*x
Note that this cannot be done with a let expression, which only scopes over the expression which
it encloses. A where clause is only allowed at the top level of a set of equations or case expression.
The same properties and constraints on bindings in let expressions apply to those in where clauses.
These two forms of nested scope seem very similar, but remember that a let expression is an
expression, whereas a where clause is not; it is part of the syntax of function declarations and case
expressions.
4.6 Layout

We can now say a bit more about the layout rule that we have been using informally: Haskell
uses the indentation of declarations to determine where one binding ends and the next begins, so
that explicit braces and semicolons can usually be omitted (see §2.7 of the Report for the precise
rules).11
The use of layout greatly reduces the syntactic clutter associated with declaration lists, thus
enhancing readability. It is easy to learn, and its use is encouraged.
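Layout is just sugar for explicit braces and semicolons; the let expression from the previous section can be written either way. A sketch, where the names a, b, c, d are placeholder bindings added here only to make the fragment self-contained:

```haskell
-- With layout: the bindings start in the same column.
f1 :: Float
f1 = let y   = a * b
         g x = (x + y) / y
     in g c + g d
  where a = 2; b = 3; c = 4; d = 5

-- Without layout: braces and semicolons make the grouping explicit.
f2 :: Float
f2 = let { y = a * b ; g x = (x + y) / y } in g c + g d
  where a = 2; b = 3; c = 4; d = 5
```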
5 Type Classes and Overloading
There is one final feature of Haskell's type system that sets it apart from other programming
languages. The kind of polymorphism that we have talked about so far is commonly called
parametric polymorphism. There is another kind called ad hoc polymorphism, better known as
overloading. Here are some examples of ad hoc polymorphism:
11 Haskell observes the convention that tabs count as 8 blanks; thus care must be taken when using an editor which
may observe some other convention.
The literals 1, 2, etc. are often used to represent both fixed and arbitrary precision integers.

Numeric operators such as + are often defined to work on many different kinds of numbers.

The equality operator (== in Haskell) usually works on numbers and many other (but not all)
types.
Note that these overloaded behaviors are different for each type (in fact the behavior is sometimes
undefined, or error), whereas in parametric polymorphism the type truly does not matter (fringe,
for example, really doesn't care what kind of elements are found in the leaves of a tree). In Haskell,
type classes provide a structured way to control ad hoc polymorphism, or overloading.
Let's start with a simple, but important, example: equality. There are many types for which we
would like equality defined, but some for which we would not. For example, comparing the equality
of functions is generally considered computationally intractable, whereas we often want to compare
two lists for equality.12 To highlight the issue, consider this definition of the function elem which
tests for membership in a list:
x `elem` []     = False
x `elem` (y:ys) = x==y || (x `elem` ys)
[For the stylistic reason we discussed in Section 3.1, we have chosen to define elem in infix form.
== and || are the infix operators for equality and logical or, respectively.]
Intuitively speaking, the type of elem "ought" to be: a->[a]->Bool. But this would imply that ==
has type a->a->Bool, even though we just said that we don't expect == to be defined for all types.
Furthermore, as we have noted earlier, even if == were defined on all types, comparing two
lists for equality is very different from comparing two integers. In this sense, we expect == to be
overloaded to carry on these various tasks.
Type classes conveniently solve both of these problems. They allow us to declare which types
are instances of which class, and to provide definitions of the overloaded operations associated with
a class. For example, let's define a type class containing an equality operator:
class Eq a where
  (==) :: a -> a -> Bool
Here Eq is the name of the class being defined, and == is the single operation in the class. This
declaration may be read "a type a is an instance of the class Eq if there is an (overloaded) operation
==, of the appropriate type, defined on it." (Note that == is only defined on pairs of objects of the
same type.)

The constraint that a type a must be an instance of the class Eq is written Eq a. Thus Eq a
is not a type expression; rather, it expresses a constraint on a type, and is called a context.
Contexts are placed at the front of type expressions. For example, the effect of the above class
declaration is to assign the following type to ==:

(==) :: (Eq a) => a -> a -> Bool
12 The kind of equality we are referring to here is "value equality," as opposed to the "pointer equality" found,
for example, with Java's ==. Pointer equality is not referentially transparent, and thus does not sit well in a purely
functional language.
This should be read, "For every type a that is an instance of the class Eq, == has type
a->a->Bool". This is the type that would be used for == in the elem example.

But how do we specify which types are instances of the class Eq, and the actual behavior of
== on each of those types? This is done with an instance declaration. For example:

instance Eq Integer where
  x == y = x `integerEq` y

The definition of == is called a method. The function integerEq happens to be the primitive
function that compares integers for equality, but in general any valid expression is allowed on the
right-hand side, just as for any other function definition. The overall declaration is essentially
saying: "The type Integer is an instance of the class Eq, and here is the definition of the method
corresponding to the operation ==." Given this declaration, we can now compare fixed precision
integers for equality using ==. Similarly:
instance Eq Float where
  x == y = x `floatEq` y

allows us to compare floating point numbers using ==.
Recursive types such as Tree defined earlier can also be handled:

instance (Eq a) => Eq (Tree a) where
  Leaf a         == Leaf b         = a == b
  (Branch l1 r1) == (Branch l2 r2) = (l1==l2) && (r1==r2)
  _              == _              = False

Note the context (Eq a) in the instance declaration: the elements in the leaves can be compared
for equality only if the element type a is itself an instance of Eq.

The Haskell Report, especially the Prelude, contains a wealth of useful examples of type classes.
Indeed, a class Eq is defined there that is slightly larger than the one defined earlier:
class Eq a where
  (==), (/=) :: a -> a -> Bool
  x /= y     = not (x == y)

This is an example of a class with two operations, one for equality, the other for inequality.
It also demonstrates the use of a default method, in this case for the inequality operation /=.
If a method for a particular operation is omitted in an instance declaration, then the default one defined in
the class declaration, if it exists, is used instead. For example, the three instances of Eq defined
earlier will work perfectly well with the above class declaration, yielding just the right definition of
inequality that we want: the logical negation of equality.
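To see a default method in action, here is a minimal instance for a small color type that defines only the equality operation; inequality then comes for free from the default definition in the class. (A fresh MyEq class with fresh operator names is used in this sketch so it doesn't clash with the Prelude's Eq.)

```haskell
class MyEq a where
  (===), (/==) :: a -> a -> Bool
  x /== y      = not (x === y)   -- default method

data Color = Red | Green | Blue

instance MyEq Color where
  Red   === Red   = True
  Green === Green = True
  Blue  === Blue  = True
  _     === _     = False
  -- no definition of /== is given: the default from the class is used
```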
Haskell also supports a notion of class extension. For example, we may wish to define a class
Ord which inherits all of the operations in Eq, but in addition has a set of comparison operations
and minimum and maximum functions:

class (Eq a) => Ord a where
  (<), (<=), (>=), (>) :: a -> a -> Bool
  max, min             :: a -> a -> a

Note the context in the class declaration. We say that Eq is a superclass of Ord (conversely,
Ord is a subclass of Eq), and any type which is an instance of Ord must also be an instance of Eq.
(In the next Section we give a fuller definition of Ord taken from the Prelude.)
One benefit of such class inclusions is shorter contexts: a type expression for a function that
uses operations from both the Eq and Ord classes can use the context (Ord a), rather than
(Eq a, Ord a), since Ord "implies" Eq. As an example of the use of Ord, the principal typing of
quicksort defined in Section 2.4.1 is:

quicksort :: (Ord a) => [a] -> [a]

In other words, quicksort only operates on lists of values of ordered types. This typing for
quicksort arises because of the use of the comparison operators < and >= in its definition.

Haskell also permits multiple inheritance, since classes may have more than one superclass.
Class methods are treated as top level declarations in Haskell. They share the same namespace
as ordinary variables; a name cannot be used to denote both a class method and a variable or
methods in different classes.
Contexts are also allowed in data declarations; see §4.2.1.
Class methods may have additional class constraints on any type variable except the one defining
the current class; such constraints would instead have to be part of the context in the class
declaration.

So far, we have been using "first-order" types. But the type constructor Tree is of a higher
order: it takes a type as an argument and returns a type. Such type constructors can be used in
class declarations; the Functor class (taken from the Prelude) is an example:

class Functor f where
  fmap :: (a -> b) -> f a -> f b

Note that the type variable f is applied to other types, as in f a and f b. To ensure that type
expressions are well-formed, they are classified into different kinds, which take one of two possible
forms:

The symbol ∗ represents the kind of type associated with concrete data objects. That is, if
the value v has type t, the kind of t must be ∗.

If κ1 and κ2 are kinds, then κ1 → κ2 is the kind of types that take a type of kind κ1 and
return a type of kind κ2.

The type constructor Tree has the kind ∗ → ∗; the type Tree Int has the kind ∗. Members of the
Functor class must all have the kind ∗ → ∗; a kinding error would result from a declaration such
as

instance Functor Integer where ...

since Integer has the kind ∗. Kinds do not appear directly in Haskell programs; the compiler
infers them before doing type checking, and they surface only when kind conflicts occur. See
§4.1.1 and §4.6 for more information about kinds.
A Different Perspective. Before going on to further examples of the use of type classes, it is
worth pointing out two other views of Haskell's type classes. The first is by analogy with object-oriented
programming (OOP). In the following general statement about OOP, simply substituting
type class for class, and type for object, yields a valid summary of Haskell's type class mechanism:
"Classes capture common sets of operations. A particular object may be an instance of a class,
and will have a method corresponding to each operation. Classes may be arranged hierarchically,
forming notions of superclasses and subclasses, and permitting inheritance of operations/methods.
A default method may also be associated with an operation."

In contrast to OOP, it should be clear that types are not objects; in particular, there is no
notion of a type's internal mutable state.

A different perspective can be gotten by considering the relationship between parametric and ad
hoc polymorphism. We have shown how parametric polymorphism is useful in defining families of
types by universally quantifying over all types. Sometimes, however, that universal quantification
is too broad: we wish to quantify over some smaller set of types, such as those types whose elements
can be compared for equality. Type classes can be seen as providing a structured way to do just
this. Indeed, we can think of parametric polymorphism as a kind of overloading too! It's just that
the overloading occurs implicitly over all types instead of a constrained set of types (i.e. a type
class).
Comparison to Other Languages. The classes used by Haskell are similar to those used in
other object-oriented languages such as C++ and Java. However, there are some significant differences:

Haskell separates the definition of a type from the definition of the methods associated with
that type. A class in C++ or Java usually defines both a data structure (the member
variables) and the functions associated with the structure (the methods). In Haskell, these
definitions are separated.
The class methods defined by a Haskell class correspond to virtual functions in a C++ class.
Each instance of a class provides its own definition for each method; class defaults correspond
to default definitions for a virtual function in the base class.
Haskell classes are roughly similar to a Java interface. Like an interface declaration, a Haskell
class declaration defines a protocol for using an object rather than defining an object itself.

Haskell does not support the C++ overloading style in which functions with different types
share a common name.
The type of a Haskell object cannot be implicitly coerced; there is no universal base class
such as Object which values can be projected into or out of.
C++ and Java attach identifying information (such as a VTable) to the runtime representation
of an object. In Haskell, such information is attached logically instead of physically to values,
through the type system.
There is no access control (such as public or private class constituents) built into the Haskell
class system. Instead, the module system must be used to hide or reveal components of a
class.
6 Types, Again
Here we examine some of the more advanced aspects of type declarations.
6.1 The Newtype Declaration
A common programming practice is to define a type whose representation is identical to an existing
one but which has a separate identity in the type system. In Haskell, the newtype declaration
creates a new type from an existing one. For example, natural numbers can be represented by the
type Integer using the declaration:

newtype Natural = MakeNatural Integer

This creates an entirely new type, Natural, whose only constructor contains a single Integer.
Instances declared for Integer do not carry over to Natural; indeed, the whole purpose of this
type is to introduce a different Num instance. This would not be possible if Natural were defined
as a type synonym of Integer.
See section 4.2.3 of the report for further discussion of the relation between newtype, data, and
type declarations.
[Except for the keyword, the newtype declaration uses the same syntax as a data declaration
with a single constructor containing a single field. This is appropriate since types defined using
newtype are nearly identical to those created by an ordinary data declaration.]
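A minimal sketch of the pattern: because the constructor is the only way in or out of the new type, invariants can be enforced at the boundary. The names Nat, toNat, and fromNat below are illustrative, echoing the report's Natural example:

```haskell
-- A new type with the same representation as Integer but a separate
-- identity in the type system.
newtype Nat = MkNat Integer

-- Enforce the "no negatives" invariant at construction time.
toNat :: Integer -> Nat
toNat x | x < 0     = error "no negative naturals"
        | otherwise = MkNat x

fromNat :: Nat -> Integer
fromNat (MkNat i) = i
```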
6.2 Field Labels

The fields within a Haskell data type can be accessed either positionally or by name using field labels.
Consider a data type for a two-dimensional point:
data Point = Pt Float Float
The two components of a Point are the first and second arguments to the constructor Pt. A
function such as

pointx          :: Point -> Float
pointx (Pt x _) = x

may be used to refer to the first component of a point in a more descriptive way, but, for large
structures, it becomes tedious to create such functions by hand.
Constructors in a data declaration may be declared with associated field names, enclosed in
braces. These field names identify the components of the constructor by name rather than by position.
This is an alternative way to define Point:

data Point = Pt {pointx, pointy :: Float}

This data type is identical to the earlier definition of Point. The constructor Pt is the same in
both cases. However, this declaration also defines two field names, pointx and pointy. These field
names can be used as selector functions to extract a component from a structure. In this example,
the selectors are:

pointx :: Point -> Float
pointy :: Point -> Float

Field labels can also be used to construct new values. The expression Pt {pointx=1, pointy=2}
is identical to Pt 1 2. The use of field names in the declaration of a data constructor does not preclude the positional style of field access; both Pt {pointx=1, pointy=2} and Pt 1 2 are allowed.
When constructing a value using field names, some fields may be omitted; these absent fields are
undefined.
Pattern matching using field names uses a similar syntax for the constructor Pt:
absPoint (Pt {pointx = x, pointy = y}) = sqrt (x*x + y*y)
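Construction, selection, and (non-destructive) update with field labels can be sketched together; p1 and p2 are illustrative values:

```haskell
data Point = Pt {pointx, pointy :: Float}

absPoint :: Point -> Float
absPoint (Pt {pointx = x, pointy = y}) = sqrt (x*x + y*y)

p1, p2 :: Point
p1 = Pt {pointx = 3, pointy = 4}   -- construction by field name
p2 = p1 {pointy = 0}               -- update: a copy of p1 with pointy replaced
```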
An update function uses field values in an existing structure to fill in components of a new
structure. If p is a Point, then p {pointx=2} is a point with the same pointy as p but with
pointx replaced by 2. This is not a destructive update: the update function merely creates a
new copy of the object, filling in the specified fields with new values.
[The braces used in conjunction with field labels are somewhat special: Haskell syntax usually
allows braces to be omitted using the layout rule (described in Section 4.6). However, the braces
associated with field names must be explicit.]
Field names are not restricted to types with a single constructor (commonly called `record'
types). In a type with multiple constructors, selection or update operations using field names may
fail at runtime. This is similar to the behavior of the head function when applied to an empty list.
Field labels share the top level namespace with ordinary variables and class methods. A field
name cannot be used in more than one data type in scope. However, within a data type, the same
field name can be used in more than one of the constructors so long as it has the same typing in
all cases. For example, in this data type
data T = C1 {f :: Int, g :: Float}
| C2 {f :: Int, h :: Bool}
the field f applies to both constructors in T. Thus if x is of type T, then x {f=5} will work for
values created by either of the constructors in T. Field names do not change the basic nature of
an algebraic data type; they are simply a convenient syntax for accessing the components of a
data structure by name rather than by position, and they make constructors with many
components more manageable, since fields can be added or
removed without changing every reference to the constructor. For full details of field labels and
their semantics, see §4.2.1.
6.3 Strict Data Constructors
Data structures in Haskell are generally lazy: the components are not evaluated until needed. This
permits structures that contain elements which, if evaluated, would lead to an error or fail to
terminate. Lazy data structures enhance the expressiveness of Haskell and are an essential aspect
of the Haskell programming style.

Internally, each field of a lazy data object is wrapped up in a structure commonly referred to
as a thunk that encapsulates the computation defining the field value. This thunk is not entered
until the value is needed; thunks which contain errors (⊥) do not affect other elements of a data
structure. There are overheads associated with thunks: they take time to construct and evaluate,
they occupy space in the heap, and they cause the garbage collector to retain other structures
needed for their evaluation. To avoid these overheads, strictness flags in
data declarations allow specific fields of a constructor to be evaluated immediately, selectively
suppressing laziness. A field marked by ! in a data declaration is evaluated when the structure is
created instead of delayed in a thunk.

There are a number of situations where it may be appropriate to use strictness flags:
Structure components that are sure to be evaluated at some point during program execution.

Structure components that are simple to evaluate and never cause errors.

Types in which partially undefined values are not meaningful.
For example, the complex number library defines the Complex type as:

data RealFloat a => Complex a = !a :+ !a

[note the infix definition of the constructor :+.] This definition marks the two components, the
real and imaginary parts, of the complex number as being strict. This is a more compact representation of complex numbers but this comes at the expense of making a complex number with an
undefined component, 1 :+ ⊥ for example, totally undefined (⊥). As there is no real need for
partially defined complex numbers, it makes sense to use strictness flags to achieve a more efficient
representation.
Strictness flags may be used to address memory leaks: structures retained by the garbage
collector but no longer necessary for computation.

The strictness flag, !, can only appear in data declarations. It cannot be used in other type
signatures or in any other type definitions. There is no corresponding way to mark function
arguments as being strict, although the same effect can be obtained using the seq or $! functions.
See §4.2.1 for further details.

It is difficult to present exact guidelines for the use of strictness flags. They should be used
with caution: laziness is one of the fundamental properties of Haskell and adding strictness flags
may lead to hard to find infinite loops or have other unexpected consequences.
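The difference between a lazy and a strict field can be observed with undefined (⊥); only the strict field forces its argument when the structure is built. A minimal sketch with two illustrative one-field types:

```haskell
data Lazy   = L Int      -- ordinary lazy field: stored as a thunk
data Strict = S !Int     -- strict field: forced when S is applied

lazyFst :: Lazy -> Int
lazyFst (L _) = 0        -- matches the constructor, never enters the field

-- lazyFst (L undefined) evaluates to 0: the thunk for the field is
-- never entered. Forcing (S undefined) to weak head normal form would
-- instead diverge, because the field is evaluated at construction.
```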
7 Input/Output
The I/O system in Haskell is purely functional, yet has all of the expressive power found in conventional programming languages. In imperative languages, programs proceed via actions which
examine and modify the current state of the world. Typical actions include reading and setting
global variables, writing files, reading input, and opening windows. Such actions are also a part of
Haskell but are cleanly separated from the purely functional core of the language.
Haskell's I/O system is built around a somewhat daunting mathematical foundation: the
monad. However, understanding of the underlying monad theory is not necessary to program
using the I/O system. Rather, monads are a conceptual structure into which I/O happens to fit. It
is no more necessary to understand monad theory to perform Haskell I/O than it is to understand
group theory to do simple arithmetic. A detailed explanation of monads is found in Section 9.
Actions are defined rather than invoked within the expression language of Haskell. Evaluating
the definition of an action doesn't actually cause the action to happen. Rather, the invocation of
actions takes place outside of the expression evaluation we have considered up to this point.
Actions are either atomic, as defined in system primitives, or are a sequential composition of
other actions.

7.1 Basic I/O Operations

Every I/O action returns a value. In the type system, the return value is tagged with IO type,
distinguishing actions from other values. For example, the type of the function getChar is:

getChar :: IO Char

The IO Char indicates that getChar, when invoked, performs some action which returns a
character. Actions which return no interesting values use the unit type, (). For example, putChar:

putChar :: Char -> IO ()

takes a character as an argument but returns nothing useful.

Actions are sequenced using the do notation (§3.14).
The keyword do introduces a sequence of statements which are executed in order. A statement
is either an action, a pattern bound to the result of an action using <-, or a set of local definitions
introduced using let. The do notation uses layout in the same manner as let or where, so we can
omit braces and semicolons with proper indentation. Here is a simple program to read and then
print a character:
main :: IO ()
main = do c <- getChar
          putChar c
The use of the name main is important: main is defined to be the entry point of a Haskell
program (similar to the main function in C), and must have an IO type, usually IO (). This
program performs two actions in sequence: first it reads in a character, binding the result to the
variable c, and then prints the character. Unlike a let expression where variables are scoped over
all definitions, the variables defined by <- are only in scope in the following statements.
There is still one missing piece. We can invoke actions and examine their results using do, but
how do we return a value from a sequence of actions? For example, consider the ready function
that reads a character and returns True if the character was a `y':

ready :: IO Bool
ready = do c <- getChar
           c == 'y'                  -- Bad!!!

This doesn't work because the second statement in the do is just a boolean value, not an action.
We need to take this boolean and create an action that does nothing but return the boolean as
its result; the return function does just that:

return :: a -> IO a

The last line of ready should therefore read return (c == 'y').

Let's look at a more complex example: a function that reads a whole line of input.

getLine :: IO String
getLine = do c <- getChar
             if c == '\n'
                  then return ""
                  else do l <- getLine
                          return (c:l)

Note the second do in the else clause. Each do introduces a single chain of statements; any
intervening construct, such as the if, must use a new do to initiate further sequences of actions.
7.2 Programming With Actions
I/O actions are ordinary Haskell values: they may be passed to functions, placed in structures, and
used as any other Haskell value. Consider this list of actions:
todoList :: [IO ()]
todoList = [putChar 'a',
            do putChar 'b'
               putChar 'c',
            do c <- getChar
               putChar c]
This list doesn't actually invoke any actions; it simply holds them. To join these actions into a
single action, a function such as sequence_ is needed:

sequence_        :: [IO ()] -> IO ()
sequence_ []     = return ()
sequence_ (a:as) = do a
                      sequence_ as
This can be simplified by noting that do x;y is expanded to x >> y (see Section 9.1). This pattern
of recursion is captured by the foldr function (see the Prelude for a definition of foldr); a better
definition of sequence_ is:

sequence_ :: [IO ()] -> IO ()
sequence_ = foldr (>>) (return ())
The do notation is a useful tool but in this case the underlying monadic operator, >>, is more
appropriate. An understanding of the operators upon which do is built is quite useful to the
Haskell programmer.
The sequence_ function can be used to construct putStr from putChar:

putStr   :: String -> IO ()
putStr s = sequence_ (map putChar s)
One of the differences between Haskell and conventional imperative programming can be seen in
putStr. In an imperative language, mapping an imperative version of putChar over the string
would be sufficient to print it. In Haskell, however, the map function does not perform any action;
it merely creates a list of actions, one per character, which sequence_ then executes.
7.3 Exception Handling
So far, we have avoided the issue of exceptions during I/O operations. What would happen if
getChar encounters an end of file?13 To deal with exceptional conditions such as `file not found'
within the I/O monad, a handling mechanism is used, similar in functionality to the one in standard
ML. No special syntax or semantics are used; exception handling is part of the definition of the
I/O sequencing operations.

13 We use the term error for ⊥: a condition which cannot be recovered from, such as non-termination or pattern
match failure. Exceptions, on the other hand, can be caught and handled within the I/O monad.

Errors are encoded using a special data type, IOError. This type represents all possible
exceptions that may occur within the I/O monad. This is an abstract type: no constructors for
IOError are available to the user. Predicates allow IOError values to be queried; for example,
the function

isEOFError :: IOError -> Bool

determines whether an error was caused by an end-of-file condition. By making IOError abstract,
new sorts of errors may be added to the system without a noticeable change to the data type. The
function isEOFError is defined in a separate library, IO, and must be explicitly imported into a
program.

An exception handler has type IOError -> IO a. The catch function associates an exception
handler with an action or set of actions:

catch :: IO a -> (IOError -> IO a) -> IO a

The arguments to catch are an action and a handler. If the action succeeds, its result is returned
without invoking the handler. If an error occurs, it is passed to the handler as a value of type
IOError and the action associated with the handler is then invoked. For example, this version of
getChar returns a newline when an error is encountered:
getChar' :: IO Char
getChar' = getChar `catch` (\e -> return '\n')
This is rather crude since it treats all errors in the same manner. If only end-of-file is to be
recognized, the error value must be queried:

getChar' :: IO Char
getChar' = getChar `catch` eofHandler where
    eofHandler e = if isEOFError e then return '\n' else ioError e

The ioError function used here throws an exception on to the next exception handler. The type
of ioError is

ioError :: IOError -> IO a
It is similar to return except that it transfers control to the exception handler instead of proceeding
to the next I/O action. Nested calls to catch are permitted, and produce nested exception handlers.
Using getChar', we can redefine getLine to demonstrate the use of nested handlers:
getLine' :: IO String
getLine' = catch getLine'' (\err -> return ("Error: " ++ show err))
  where getLine'' = do c <- getChar'
                       if c == '\n' then return ""
                       else do l <- getLine'
                               return (c:l)
The nested error handlers allow getChar' to catch end of file while any other error results in a
string starting with "Error: " from getLine'.
For convenience, Haskell provides a default exception handler at the topmost level of a program
that prints out the exception and terminates the program.
7.4 Files, Channels, and Handles

Opening a file creates a handle (of type Handle) for use in I/O transactions. Closing the handle
closes the associated file:
type FilePath = String   -- path names in the file system

openFile :: FilePath -> IOMode -> IO Handle
hClose   :: Handle -> IO ()

data IOMode = ReadMode | WriteMode | AppendMode | ReadWriteMode
Handles can also be associated with channels: communication ports not directly attached to files. A
few channel handles are predefined, including stdin (standard input), stdout (standard output),
and stderr (standard error). Character level I/O operations include hGetChar and hPutChar,
which take a handle as an argument. The getChar function used previously can be defined as:

getChar = hGetChar stdin
Haskell also allows the entire contents of a file or channel to be returned as a single string:

getContents :: Handle -> IO String

Pragmatically, it may seem that getContents must immediately read an entire file or channel,
resulting in poor space and time performance under certain conditions. However, this is not the
case: the string returned is a "lazy" list of characters, whose elements are read by demand, so an
implementation can read one character at a time from the file as they are required by the
computation.
In this example, a Haskell program copies one file to another:
main = do fromHandle <- getAndOpenFile "Copy from: " ReadMode
          toHandle   <- getAndOpenFile "Copy to: " WriteMode
          contents   <- hGetContents fromHandle
          hPutStr toHandle contents
          hClose toHandle
          putStr "Done."
getAndOpenFile :: String -> IOMode -> IO Handle
getAndOpenFile prompt mode =
do putStr prompt
name <- getLine
catch (openFile name mode)
(\_ -> do putStrLn ("Cannot open "++ name ++ "\n")
getAndOpenFile prompt mode)
By using the lazy getContents function, the entire contents of the file need not be read into
memory all at once. If hPutStr chooses to buffer the output by writing the string in fixed sized
blocks of characters, only one block of the input file needs to be in memory at once. The input file
is closed implicitly when the last character has been read.
7.5 Haskell and Imperative Programming

As a final note, I/O programming raises an important issue: this style looks suspiciously like
ordinary imperative programming. In the end, has Haskell simply re-invented the imperative
wheel? In some sense, yes. The I/O monad constitutes a small imperative sub-language inside
Haskell, and thus the I/O component of a program may appear similar to ordinary imperative
code. But there is one important difference: there is no special semantics that the user needs
to deal with. An experienced functional programmer should be able to minimize the imperative
component of the program, only using the I/O monad for a minimal amount of top-level
sequencing. The monad cleanly separates the
functional and imperative program components. In contrast, imperative languages with functional
subsets do not generally have any well-defined barrier between the purely functional and imperative
worlds.
8 Standard Haskell Classes
In this section we introduce the predefined standard type classes in Haskell. We have simplified
these classes somewhat by omitting some of the less interesting methods in these classes; the Haskell
report contains a more complete description. Also, some of the standard classes are part of the
standard Haskell libraries; these are described in the Haskell Library Report.
8.1 Equality and Ordered Classes
The classes Eq and Ord have already been discussed. The definition of Ord in the Prelude is
somewhat more complex than the simplified version of Ord presented earlier. In particular, note
the compare method:
data Ordering = LT | EQ | GT

compare :: Ord a => a -> a -> Ordering
The compare method is sufficient to define all other methods (via defaults) in this class and is the
best way to create Ord instances.
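A sketch of this style for a small three-valued type: only compare is written, and every other Ord method comes from the defaults. The type Severity and the helper rank are hypothetical names used just for illustration:

```haskell
data Severity = Low | Medium | High

-- Hypothetical helper: orders the constructors by mapping them to Int.
rank :: Severity -> Int
rank Low    = 0
rank Medium = 1
rank High   = 2

instance Eq Severity where
  a == b = rank a == rank b

instance Ord Severity where
  compare a b = compare (rank a) (rank b)   -- <, <=, max, min etc. default
```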
8.2 The Enumeration Class
Class Enum has a set of operations that underlie the syntactic sugar of arithmetic sequences; for
example, the arithmetic sequence expression [1,3..] stands for enumFromThen 1 3 (see §3.10 for
the formal translation). We can now see that arithmetic sequence expressions can be used to
generate lists of any type that is an instance of Enum. This includes not only most numeric types,
but also Char, so that, for instance, ['a'..'z'] denotes the list of lower-case letters in alphabetical
order. Furthermore, user-defined enumerated types like Color can easily be given Enum instance
declarations. If so:
[Red .. Violet]  ⇒  [Red, Green, Blue, Indigo, Violet]
Note that such a sequence is arithmetic in the sense that the increment between values is constant,
even though the values are not numbers. Most types in Enum can be mapped onto fixed precision
integers; for these, the fromEnum and toEnum functions convert between Int and a type in Enum.
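A quick sketch of these conversions, using a derived Enum instance for the Color type mentioned above (the helper next is hypothetical):

```haskell
data Color = Red | Green | Blue | Indigo | Violet
             deriving (Show, Eq, Enum)

-- fromEnum maps constructors to Int in declaration order, starting at 0;
-- toEnum goes the other way (its result type fixes the target type).
next :: Color -> Color
next = toEnum . (+1) . fromEnum   -- undefined past the last constructor
```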
8.3 The Read and Show Classes
The instances of class Show are those types that can be converted to character strings (typically
for I/O). The class Read provides operations for parsing character strings to obtain the values they
may represent. The simplest function in the class Show is show:

show :: (Show a) => a -> String

Naturally enough, show takes any value of an appropriate type and returns its representation as
a character string. More complex strings are built up with concatenation, and naive use of ++
can be inefficient. Specifically, let's consider a function to represent the binary trees of Section
2.2.1 as a string:

showTree :: (Show a) => Tree a -> String
showTree (Leaf x)     = show x
showTree (Branch l r) = "<" ++ showTree l ++ "|" ++ showTree r ++ ">"

Because (++) has time complexity linear in the length of its left argument, showTree is potentially
quadratic in the size of the tree. To restore linear complexity, the function shows is provided:

shows :: (Show a) => a -> String -> String

shows takes a printable value and a string and returns that string with the value's representation
concatenated at the front. The second argument serves as a sort of string accumulator, and show
can now be defined as shows with the null accumulator. This is the default definition of show in the
Show class definition:

show x = shows x ""

We can use shows to define a more efficient version of showTree, which also has a string
accumulator argument:

showsTree :: (Show a) => Tree a -> String -> String
showsTree (Leaf x) s     = shows x s
showsTree (Branch l r) s = '<' : showsTree l ('|' : showsTree r ('>' : s))

This solves our efficiency problem (showsTree has linear complexity).
Now let's turn to the inverse problem: to define a parsing function for the string representation
of binary trees produced by showsTree. The basic idea is a parser for a type a, which is a function
that takes a string and returns a list of (a, String) pairs; the Prelude provides a type synonym
for such functions:

type ReadS a = String -> [(a,String)]

Normally, a parser returns a singleton list containing the value parsed together with the remainder
of the input string; if no parse is possible, the result is the empty list. The standard function
reads is a parser for any instance of Read:

reads :: (Read a) => ReadS a

List comprehensions give us a convenient idiom for constructing such
parsers:14
readsTree         :: (Read a) => ReadS (Tree a)
readsTree ('<':s) = [(Branch l r, u) | (l, '|':t) <- readsTree s,
                                       (r, '>':u) <- readsTree t ]
readsTree s       = [(Leaf x, t)     | (x,t)      <- reads s]
Let's take a moment to examine this function definition in detail. There are two main cases to
consider: If the first character of the string to be parsed is '<', we should have the representation
of a branch; otherwise, we have a leaf. In the first case, calling the rest of the input string following
the opening angle bracket s, any possible parse must be a tree Branch l r with remaining string
u, subject to the following conditions:
1. The tree l can be parsed from the beginning of the string s.
2. The string remaining (following the representation of l) begins with '|'. Call the tail of this
string t.
3. The tree r can be parsed from the beginning of t.
4. The string remaining from that parse begins with '>', and u is the tail.
Notice the expressive power we get from the combination of pattern matching with list comprehension: the form of a resulting parse is given by the main expression of the list comprehension, the
first two conditions above are expressed by the first generator ("(l, '|':t) is drawn from the list
of parses of s"), and the remaining conditions are expressed by the second generator.
The second defining equation above just says that to parse the representation of a leaf, we parse
a representation of the element type of the tree and apply the constructor Leaf to the value thus
obtained.
14 An even more elegant approach to parsing uses monads and parser combinators. These are part of a standard
parsing library distributed with most Haskell systems.
With this understanding, the tree parser should behave as follows:
readsTree "<1|<2|3>>"  ⇒  [(Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3)), "")]
readsTree "<1|2"       ⇒  []
There are a couple of shortcomings in our definition of readsTree. One is that the parser is
quite rigid, allowing no white space before or between the elements of the tree representation; the
other is that the way we parse our punctuation symbols is quite different from the way we parse
leaf values and subtrees, this lack of uniformity making the function definition harder to read. We
can address both of these problems by using the lexical analyzer provided by the Prelude:

lex :: ReadS String
lex normally returns a singleton list containing a pair of strings: the first lexeme in the input
string and the remainder of the input. The lexical rules are those of Haskell programs, including
comments, which lex skips, along with whitespace. If the input string is empty or contains only
whitespace and comments, lex returns [("","")]; if the input is not empty in this sense, but
also does not begin with a valid lexeme after any leading whitespace and comments, lex returns
[].

Using the lexical analyzer, our tree parser now looks like this:

readsTree   :: (Read a) => ReadS (Tree a)
readsTree s = [(Branch l r, x) | ("<", t) <- lex s,
                                 (l, u)   <- readsTree t,
                                 ("|", v) <- lex u,
                                 (r, w)   <- readsTree v,
                                 (">", x) <- lex w ]
              ++
              [(Leaf x, t)     | (x, t)   <- reads s ]

We may now wish to use readsTree and showsTree to declare (Read a) => Tree a an instance
of Read and (Show a) => Tree a an instance of Show. This would allow us to use the generic
overloaded functions from the Prelude to parse and display trees. The Show instance for Tree is:

instance Show a => Show (Tree a) where
    showsPrec _ x = showsTree x

The showsPrec method is a parameterized version of shows; the additional parameter is a
precedence level, used to properly parenthesize expressions containing infix constructors. For
types such as Tree, the precedence can be ignored.
Alternatively, the Show instance could be defined in terms of showTree:

instance Show a => Show (Tree a) where
    show t = showTree t

This, however, will be less efficient than the ShowS version. Note that the Show class defines default
methods for both showsPrec and show, allowing the user to define either one of these in an instance
declaration. Since these defaults are mutually recursive, an instance declaration that defines neither
of these functions will loop when called. Other classes such as Num also have these "interlocking
defaults".
We refer the interested reader to §D for details of the Read and Show classes.
8.4 Derived Instances
Recall the Eq instance for trees we presented in Section 5; such a declaration is simple, and boring,
to produce: we require that the element type in the leaves be an equality type; then, two leaves are
equal iff they contain equal elements, and two branches are equal iff their left and right subtrees
are equal, respectively. Instead of writing this out by hand, we can include a deriving clause in
the data declaration:

data Tree a = Leaf a | Branch (Tree a) (Tree a)    deriving Eq

and the Eq instance is generated automatically. Instances of Ord, Enum, Ix, Read, and Show can
also be generated by the deriving clause. [More than one class name can be specified, in which
case the list of names must be parenthesized and the names separated by commas.]
The derived Ord instance for Tree is slightly more complicated than the Eq instance:

instance (Ord a) => Ord (Tree a) where
    (Leaf _)     <= (Branch _)     = True
    (Leaf x)     <= (Leaf y)       = x <= y
    (Branch _)   <= (Leaf _)       = False
    (Branch l r) <= (Branch l' r') = l == l' && r <= r' || l <= l'
This specifies a lexicographic order: Constructors are ordered by the order of their appearance in
the data declaration, and the arguments of a constructor are compared from left to right.

In practice, Eq and Ord instances are almost always derived, rather than user-defined. In fact,
we should provide our own definitions of equality and ordering predicates only with some
trepidation, being careful to maintain the expected algebraic properties of equivalence relations
and total orders. An intransitive (==) predicate, for example, could be disastrous, confusing
readers of the program and confounding manual or automatic program transformations that rely
on the (==) predicate's being an approximation to definitional equality. Nevertheless, it is
sometimes necessary to provide Eq or Ord instances different from those that would be derived;
probably the most important example is that of an abstract data type in which different concrete
values may represent the same abstract value.

An enumerated type can have a derived Enum instance, and here again, the ordering is that
of the constructors in the data declaration. For example:

data Day = Sunday | Monday | Tuesday | Wednesday
         | Thursday | Friday | Saturday      deriving (Enum)

Here are some simple examples using the derived instances for this type:

[Wednesday .. Friday]   ⇒  [Wednesday, Thursday, Friday]
[Monday, Wednesday ..]  ⇒  [Monday, Wednesday, Friday]

Derived Read (Show) instances are possible for all types whose component types also have Read
(Show) instances. The textual representation defined by a derived Show instance is consistent
with the appearance of constant Haskell expressions of the type in question. For example, if we
add Show and Read to the deriving clause for type Day, above, we obtain

show [Monday .. Wednesday]  ⇒  "[Monday,Tuesday,Wednesday]"

9 About Monads
Many newcomers to Haskell are puzzled by the concept of monads. Monads are frequently encountered in Haskell: the IO system is constructed using a monad, a special syntax for monads has
been provided (do expressions), and the standard libraries contain an entire module dedicated to
monads. In this section we explore monadic programming in more detail.
9.1 Monadic Classes
The Prelude contains a number of classes defining monads as they are used in Haskell. These classes
are based on the monad construct in category theory; whilst the category theoretic terminology
provides the names for the monadic classes and operations, it is not necessary to delve into abstract
mathematics to get an intuitive understanding of how to use the monadic classes.
A monad is constructed on top of a polymorphic type such as IO. The monad itself is defined
by instance declarations associating the type with some or all of the monadic classes: Functor,
Monad, and MonadPlus. None of the monadic classes are derivable. In addition to IO, two other
types in the Prelude are members of the monadic classes: lists ([]) and Maybe.

The Functor class, discussed in Section 5, defines a single operation: fmap. The fmap
function applies an operation to the objects inside a container (polymorphic types can be thought
of as containers for values of another type), returning a container of the same shape. These laws
apply to fmap in the class Functor:

fmap id      = id
fmap (f . g) = fmap f . fmap g
These laws ensure that the container shape is unchanged by fmap and that the contents of the
container are not re-arranged by the mapping operation.
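These laws can be spot-checked on particular containers; this is not a proof, just a sanity check on lists and Maybe (checkId and checkCompose are illustrative names):

```haskell
-- Identity law on a list.
checkId :: Bool
checkId = fmap id [1,2,3] == id [1,2,3 :: Int]

-- Composition law on a Maybe value.
checkCompose :: Bool
checkCompose = fmap ((*2) . (+1)) (Just 3)
            == (fmap (*2) . fmap (+1)) (Just (3 :: Int))
```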
The Monad class defines two basic operators: >>= (bind) and return.

infixl 1 >>, >>=
class Monad m where
    (>>=)  :: m a -> (a -> m b) -> m b
    (>>)   :: m a -> m b -> m b
    return :: a -> m a
    fail   :: String -> m a

    m >> k = m >>= \_ -> k
The bind operations, >> and >>=, combine two monadic values while the return operation injects
44
9 ABOUT MONADS
a value into the monad (container). The signature of >>= helps us to understand this operation:
ma >>= \v -> mb combines a monadic value ma containing values of type a and a function which
operates on a value v of type a, returning the monadic value mb. The result is to combine ma and
mb into a monadic value containing b. The >> function is used when the function does not need
the value produced by the first monadic operator.
The precise meaning of binding depends, of course, on the monad. For example, in the IO
monad, x >>= y performs two actions sequentially, passing the result of the first into the second.
The do syntax provides a simple shorthand for chains of monadic operations; its essential
translation is captured by two rules:

do e1 ; e2       =  e1 >> e2
do p <- e1; e2   =  e1 >>= \p -> e2

The laws which govern >>= and return are:

return a >>= k            =  k a
m >>= return              =  m
xs >>= return . f         =  fmap f xs
m >>= (\x -> k x >>= h)   =  (m >>= k) >>= h

The class MonadPlus is used for monads that have a zero element (mzero) and a plus operation
(mplus). For lists, the zero value is [], the empty list. The I/O monad has no zero element and is not a
member of this class.
The laws governing the mplus operator are as follows:
m `mplus` mzero = m
mzero `mplus` m = m
The mplus operator is ordinary list concatenation in the list monad.
9.2 Built-in Monads
Given the monadic operations and the laws that govern them, what can we build? We have already
examined the I/O monad in detail, so we start with the two other built-in monads. For lists,
monadic binding involves joining together a set of calculations for each value in the list. The
function mvLift2 below illustrates this; its behavior depends on the definition of >>= for lists:
mvLift2        :: (a -> b -> c) -> [a] -> [b] -> [c]
mvLift2 f x y  =  do x' <- x
                     y' <- y
                     return (f x' y')
This function turns an ordinary function of two arguments (f) into a function over multiple values (lists of
arguments), returning a value for each possible combination of the two input arguments. For
example,
mvLift2 (+) [1,3] [10,20,30]     ⇒  [11,21,31,13,23,33]
mvLift2 (\a b->[a,b]) "ab" "cd"  ⇒  ["ac","ad","bc","bd"]
mvLift2 (*) [1,2,4] []           ⇒  []
This function is a specialized version of the liftM2 function in the monad library. You can think
of it as transporting a function from outside the list monad, f, into the list monad in which
computations take on multiple values.
The monad defined for Maybe is similar to the list monad: the value Nothing serves as [] and
Just x as [x].
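For instance, binding in the Maybe monad threads possible failure through a chain of computations. A small illustration (safeDiv is our own example name, not from the tutorial):

```haskell
safeDiv      :: Integer -> Integer -> Maybe Integer
safeDiv _ 0  =  Nothing
safeDiv x y  =  Just (x `div` y)

ex1  =  do a <- safeDiv 10 2     -- Just 5 so far
           b <- safeDiv a 0      -- Nothing: failure propagates
           return (a + b)        -- never reached; ex1 is Nothing
```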
9.3 Using Monads
Explaining the monadic operators and their associated laws doesn't really show what monads are
good for. What they really provide is modularity: by defining an operation monadically, we can
hide underlying machinery in a way that allows new features to be incorporated into the monad
transparently. We will start with a monad taken directly from Wadler's paper [10], the state
monad, and then build a more complex monad with a similar definition.
Briefly, a state monad built around a state type S defines a new type, SM: a computation of
type SM t computes a value of type t while also interacting with (reading and
writing) the state of type S. The definition of SM is simple: it consists of functions that take a
state and produce two results: a returned value (of any type) and an updated state. We can't use
a type synonym here: we need a type name like SM that can be used in instance declarations. The
newtype declaration is often used here instead of data.
An instance declaration defines the "plumbing" of the monad: how to sequence two computations and the definition of an empty computation. Sequencing (the >>= operator) passes an initial
state into the first computation, passes the value it produces to the function yielding the second
computation, and threads the updated state through it; return leaves the state unchanged. While
>>= and return are the basic sequencing operations, we will also need some monadic primitives,
operations that reach inside the monad abstraction. For example, in the IO monad,
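The type and instance just described can be written out as follows (a reconstruction consistent with the surrounding discussion; S stands for some fixed state type):

```haskell
data SM a = SM (S -> (a,S))  -- The monadic type

instance Monad SM where
  -- defines state propagation
  SM c1 >>= fc2  =  SM (\s0 -> let (r,s1) = c1 s0
                                   SM c2  = fc2 r
                               in  c2 s1)
  return k       =  SM (\s -> (k,s))
```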
operators such as putChar are primitive since they deal with the inner workings of the IO monad.
Similarly, our state monad uses two primitives: readSM and updateSM. Note that these depend on
the inner structure of the monad: a change to the definition of the SM type would require a change
to these primitives.
The definitions of readSM and updateSM are simple: readSM brings the state out of the monad for observation, while updateSM allows the user to alter the state in the monad (we could also have used writeSM as a primitive, but update is often a more natural way of dealing with state).
Finally, we need a function that runs computations in the monad, runSM. This takes an initial
state and a computation and yields both the returned value of the computation and the nal state.
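Concretely, these primitives and the run function can be sketched like this (again a reconstruction following the text's names; S is the state type):

```haskell
readSM           :: SM S                   -- read the current state
readSM           =  SM (\s -> (s,s))

updateSM         :: (S -> S) -> SM ()     -- alter the state
updateSM f       =  SM (\s -> ((), f s))

runSM            :: S -> SM a -> (a,S)    -- run a computation from an initial state
runSM s0 (SM c)  =  c s0
```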
Looking at the bigger picture, what we are trying to do is define an overall computation as a
series of steps (functions with type SM a), sequenced using >>= and return. These steps may
interact with the state (via readSM or updateSM) or may ignore it; however, the use or non-use of
the state is hidden, so we do not invoke or sequence our computations differently depending
on whether or not they use S.
Rather than present any examples using this simple state monad, we proceed on to a more
complex example that includes the state monad. We define a small embedded language of resource-using calculations. That is, we build a special-purpose language implemented as a set of Haskell
types and functions. Such languages use the basic tools of Haskell, functions and types, to build a
library of operations and types specifically tailored to a domain of interest. In this example,
consider a computation that requires some sort of resource; if the computation is able to continue
it proceeds, and when the resource is exhausted it suspends. Each computation is a function from
available resources to remaining resources, coupled with either a result or a suspended computation
capturing the work done so far. The definition of the monad's >>= operator
reads as follows: to combine two `resourceful' computations, c1 and fc2 (a function producing c2),
pass the initial resources into c1. The result will be either
a value, v, and remaining resources, which are used to determine the next computation (the
call fc2 v), or
a suspended computation, pc1, and resources remaining at the point of suspension.
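The resource monad described here can be written out as follows (a reconstruction matching the names c1, fc2, v, and pc1 used above; Resource is the type of the resource being tracked, defined below):

```haskell
data R a = R (Resource -> (Resource, Either a (R a)))

instance Monad R where
  R c1 >>= fc2  =  R (\r -> case c1 r of
                       (r', Left v)    -> let R c2 = fc2 v in c2 r'
                       (r', Right pc1) -> (r', Right (pc1 >>= fc2)))
  return v      =  R (\r -> (r, Left v))
```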
The suspension must take the second computation into consideration: pc1 suspends only the first
computation, c1, so we must bind c2 to this to produce a suspension of the overall computation.
The definition of return leaves the resources unchanged while moving v into the monad.
This instance declaration defines the basic structure of the monad but does not determine how
resources are used. This monad could be used to control many types of resource or implement many
different types of resource usage policies. We will demonstrate a very simple definition of resources
as an example: we choose Resource to be an Integer, representing available computation steps:

type Resource  =  Integer
This function takes a step unless no steps are available:
step
step v
:: a -> R dene a sequence of \resourceful" computations (the monad) and
we can express a form of resource usage using step. Finally, we need to address how computations
in this monad are expressed.
Consider an increment function in our monad:
inc    :: R Integer -> R Integer
inc i  =  do iValue <- i
             step (iValue+1)
This defines increment as a single step of computation. The <- is necessary to pull the argument
value out of the monad; the type of iValue is Integer instead of R Integer.
This definition isn't particularly satisfying, though, compared to the standard definition of the
increment function. Can we instead "dress up" existing operations like + so that they work in our
monadic world? We'll start with a set of lifting functions. These bring existing functionality
into the monad. Consider the definition of lift1 (this is slightly different from the liftM1 found
in the Monad library):
lift1    :: (a -> b) -> (R a -> R b)
lift1 f  =  \ra1 -> do a1 <- ra1
                       step (f a1)
This takes a function of a single argument, f, and creates a function in R that executes the lifted
function in a single step. Using lift1, inc becomes
inc    :: R Integer -> R Integer
inc i  =  lift1 (+1) i
This is better but still not ideal. First, we add lift2:
lift2    :: (a -> b -> c) -> (R a -> R b -> R c)
lift2 f  =  \ra1 ra2 -> do a1 <- ra1
                           a2 <- ra2
                           step (f a1 a2)
Notice that this function explicitly sets the order of evaluation in the lifted function: the computation yielding a1 occurs before the computation for a2.
Using lift2, we can lift existing operators into R; for example, a Num instance for R a can define
(+) as lift2 (+) and fromInteger as return . fromInteger.
The fromInteger function is applied implicitly to all integer constants in a Haskell program (see
Section 10.3); this definition allows integer constants to have the type R Integer, so we can
finally write increment in a completely natural style: inc x = x + 1. The idea
of providing new definitions for existing operations like + or if is an essential part of creating an
embedded language in Haskell. To run a program, we supply a maximum number of computation
steps and use the Maybe type to deal with the possibility of the computation not finishing in the
allotted number of steps. We can now compute
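The pieces just summarized can be sketched as follows (a reconstruction; the lifted equality test ==* needs its own name since == cannot be lifted, and fact and run are the functions evaluated below; in Haskell 98 a full Num instance would also need signum and Eq and Show instances, omitted here):

```haskell
instance Num a => Num (R a) where
  (+)          =  lift2 (+)
  (-)          =  lift2 (-)
  negate       =  lift1 negate
  (*)          =  lift2 (*)
  abs          =  lift1 abs
  fromInteger  =  return . fromInteger

(==*)            :: Ord a => R a -> R a -> R Bool
(==*)            =  lift2 (==)

ifR              :: R Bool -> R a -> R a -> R a   -- if needs Bool, not R Bool
ifR tst thn els  =  do t <- tst
                       if t then thn else els

fact             :: R Integer -> R Integer
fact x           =  ifR (x ==* 0) 1 (x * fact (x-1))

run              :: Resource -> R a -> Maybe a    -- bound the number of steps
run s (R p)      =  case p s of
                      (_, Left v)  -> Just v
                      _            -> Nothing
```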
run 10 (fact 2)   ⇒  Just 2
run 10 (fact 20)  ⇒  Nothing
Finally, we can add some more interesting functionality to this monad. Consider the following
function:
(|||)
:: R a -> R a -> R a
This runs two computations in parallel, returning the value of the first one to complete. One
possible definition runs c1 one step at a time using a helper, oneStep. The definition
of oneStep is simple: it gives c1 a 1 as its resource argument. If a final value is reached, this is
returned, adjusting the returned step count (it is possible that a computation might return after
taking no steps so the returned resource count isn't necessarily 0). If the computation suspends, a
patched up resource count is passed to the final continuation.
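One way to write this interleaving, consistent with the description above (a reconstruction):

```haskell
c1 ||| c2  =  oneStep c1 (\c1' -> c2 ||| c1')
  where
    oneStep          :: R a -> (R a -> R a) -> R a
    oneStep (R c1) f =
        R (\r -> case c1 1 of              -- run c1 with just one step
             (r', Left v)    -> (r+r'-1, Left v)
             (r', Right c1') ->            -- r' must be 0 here
                 let R next = f c1' in next (r+r'-1))
```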
We can now evaluate expressions like run 100 (fact (-1) ||| (fact 3)) without looping,
since the two calculations are interleaved (our definition of fact loops for negative arguments).
This example illustrates how monads can be used in defining the basic semantics of a system. We also present
this example as a model of a small Domain Specific Language, something Haskell is particularly
good at defining. Many other DSLs have been developed in Haskell; see haskell.org for many
more examples. Of particular interest are Fran, a language of reactive animations, and Haskore, a
language of computer music.
10 Numbers
Haskell provides a rich collection of numeric types, based on those of Scheme [7], which in turn
are based on Common Lisp [8]. (Those languages, however, are dynamically typed.) The standard
types include fixed- and arbitrary-precision integers, ratios (rational numbers) formed from each
integer type, and single- and double-precision real and complex floating-point. We outline here the
basic characteristics of the numeric type class structure and refer the reader to §6.4 for details.
10.1 Numeric Class Structure
The numeric type classes (class Num and those that lie below it) account for many of the standard
Haskell classes. We also note that Num is a subclass of Eq, but not of Ord; this is because the order
predicates do not apply to complex numbers. The subclass Real of Num, however, is a subclass of
Ord as well.
The Num class provides several basic operations common to all numeric types; these include,
among others, addition, subtraction, negation, multiplication, and absolute value:
(+), (-), (*)  :: (Num a) => a -> a -> a
negate, abs    :: (Num a) => a -> a
[negate is the function applied by Haskell's only prefix operator, minus; we can't call it (-), because
that is the subtraction function, so this name is provided instead. For example, -x*y is equivalent
to negate (x*y). (Prefix minus has the same syntactic precedence as infix minus, which, of course,
is lower than that of multiplication.)]
Note that Num does not provide a division operator; two different kinds of division operators are
provided in two non-overlapping subclasses of Num:
The class Integral provides whole-number division and remainder operations. The standard
instances of Integral are Integer (unbounded or mathematical integers, also known as "bignums")
and Int (bounded, machine integers, with a range equivalent to at least 29-bit signed binary). A
particular Haskell implementation might provide other integral types in addition to these. Note
that Integral is a subclass of Real, rather than of Num directly; this means that there is no attempt
to provide Gaussian integers.
All other numeric types fall in the class Fractional, which provides the ordinary division
operator (/). The further subclass Floating contains trigonometric, logarithmic, and exponential
functions.
The RealFrac subclass of Fractional and Real provides a function properFraction, which
decomposes a number into its whole and fractional parts, and a collection of functions that round
to integral values by differing rules:

properFraction   :: (RealFrac a, Integral b) => a -> (b,a)
truncate, round,
floor, ceiling   :: (RealFrac a, Integral b) => a -> b
The RealFloat subclass of Floating and RealFrac provides some specialized functions for
efficient access to the components of a floating-point number, the exponent and significand. The
standard types Float and Double fall in class RealFloat.
10.2 Constructed Numbers
Of the standard numeric types, Int, Integer, Float, and Double are primitive. The others are
made from these by type constructors.
Complex (found in the library Complex) is a type constructor that makes a complex type in
class Floating from a RealFloat type:
data (RealFloat a) => Complex a = !a :+ !a deriving (Eq, Text)
The ! symbols are strictness flags; these were discussed in Section 6.3. Notice the context
RealFloat a, which restricts the argument type; thus, the standard complex types are Complex Float
and Complex Double. We can also see from the data declaration that a complex number is written
x :+ y ; the arguments are the cartesian real and imaginary parts, respectively. Since :+ is a data
constructor, we can use it in pattern matching:
conjugate         :: (RealFloat a) => Complex a -> Complex a
conjugate (x:+y)  =  x :+ (-y)
Similarly, the type constructor Ratio (found in the Rational library) makes a rational type in
class RealFrac from an instance of Integral. (Rational is a type synonym for Ratio Integer.)
Ratio, however, is an abstract type constructor. Instead of a data constructor like :+, rationals
use the `%' function to form a ratio from two integers. Instead of pattern matching, component
extraction functions are provided:
(%)                     :: (Integral a) => a -> a -> Ratio a
numerator, denominator  :: (Integral a) => Ratio a -> a
Why the difference? Complex numbers in cartesian form are unique: there are no nontrivial
identities involving :+. On the other hand, ratios are not unique, but have a canonical (reduced)
form that the implementation of the abstract data type must maintain; it is not necessarily the
case, for instance, that numerator (x%y) is equal to x, although the real part of x:+y is always x.
10.3 Numeric Coercions and Overloaded Literals
The Standard Prelude and libraries provide several overloaded functions that serve as explicit
coercions:
fromInteger   :: (Num a) => Integer -> a
fromRational  :: (Fractional a) => Rational -> a
toInteger     :: (Integral a) => a -> Integer
toRational    :: (RealFrac a) => a -> Rational
fromIntegral  :: (Integral a, Num b) => a -> b
fromRealFrac  :: (RealFrac a, Fractional b) => a -> b

fromIntegral  =  fromInteger . toInteger
fromRealFrac  =  fromRational . toRational
Two of these are implicitly used to provide overloaded numeric literals: An integer numeral
(without a decimal point) is actually equivalent to an application of fromInteger to the value of
the numeral as an Integer. Similarly, a oating numeral (with a decimal point) is regarded as
an application of fromRational to the value of the numeral as a Rational. Thus, 7 has the type
(Num a) => a, and 7.3 has the type (Fractional a) => a. This means that we can use numeric
literals in generic numeric functions, for example:
halve    :: (Fractional a) => a -> a
halve x  =  x * 0.5
This rather indirect way of overloading numerals has the additional advantage that the method of
interpreting a numeral as a number of a given type can be specified in an Integral or Fractional
instance declaration (since fromInteger and fromRational are operators of those classes, respectively). For example, the Num instance of (RealFloat a) => Complex a contains this method:
fromInteger x  =  fromInteger x :+ 0
This says that a Complex instance of fromInteger is defined to produce a complex number whose
real part is supplied by an appropriate RealFloat instance of fromInteger. In this manner, even
user-defined numeric types (say, quaternions) can make use of overloaded numerals.
As another example, recall our first definition of inc from Section 2:
inc    :: Integer -> Integer
inc n  =  n+1
Ignoring the type signature, the most general type of inc is (Num a) => a->a. The explicit type
signature is legal, however, since it is more specific than the principal type (a more general type
signature would cause a static error). The type signature has the effect of restricting inc's type,
and in this case would cause something like inc (1::Float) to be ill-typed.
10.4 Default Numeric Types
Consider the following function definition:
rms      :: (Floating a) => a -> a -> a
rms x y  =  sqrt ((x^2 + y^2) * 0.5)
The exponentiation function (^) (one of three different standard exponentiation operators with
different typings, see §6.8.5) has the type (Num a, Integral b) => a -> b -> a, and since 2 has
the type (Num a) => a, the type of x^2 is (Num a, Integral b) => a. This is a problem; there
is no way to resolve the overloading associated with the type variable b, since it is in the context,
but has otherwise vanished from the type expression. Essentially, the programmer has specified
that x should be squared, but has not specified whether it should be squared with an Int or an
Integer value of two. Of course, we can fix this:
rms x y  =  sqrt ((x ^ (2::Integer) + y ^ (2::Integer)) * 0.5)
It's obvious that this sort of thing will soon grow tiresome, however.
In fact, this kind of overloading ambiguity is not restricted to numbers:
show (read "xyz")
As what type is the string supposed to be read? This is more serious than the exponentiation
ambiguity, because there, any Integral instance will do, whereas here, very different behavior can
be expected depending on what instance of Text is used to resolve the ambiguity.
Because of the difference between the numeric and general cases of the overloading ambiguity
problem, Haskell provides a solution that is restricted to numbers: Each module may contain
a default declaration, consisting of the keyword default followed by a parenthesized, comma-separated list of numeric monotypes (types with no variables). When an ambiguous type variable
is discovered (such as b, above), if at least one of its classes is numeric and all of its classes are
standard, the default list is consulted, and the first type from the list that will satisfy the context
of the type variable is used. For example, if the default declaration default (Int, Float) is in
effect, the ambiguous exponent above will be resolved as type Int. (See §4.3.4 for more details.)
The "default default" is (Integer, Double), but (Integer, Rational, Double) may also be
appropriate. Very cautious programmers may prefer default (), which provides no defaults.
11 Modules
A Haskell program consists of a collection of modules. A module in Haskell serves the dual purpose
of controlling name-spaces and creating abstract data types.
The top level of a module contains any of the various declarations we have discussed: fixity
declarations, data and type declarations, class and instance declarations, type signatures, function
definitions, and pattern bindings. Except for the fact that import declarations (to be described
shortly) must appear first, the declarations may appear in any order (the top-level scope is mutually
recursive).
Haskell's module design is relatively conservative: the name-space of modules is completely flat,
and modules are in no way "first-class." Module names are alphanumeric and must begin with an
uppercase letter. There is no formal connection between a Haskell module and the file system that
would (typically) support it. In particular, there is no connection between module names and file
names, and more than one module could conceivably reside in a single file (one module may even
span several files). Of course, a particular implementation will most likely adopt conventions that
make the connection between modules and files more stringent.
Technically speaking, a module is really just one big declaration which begins with the keyword
module; here's an example for a module whose name is Tree:
module Tree ( Tree(Leaf,Branch), fringe ) where
data Tree a  =  Leaf a | Branch (Tree a) (Tree a)
fringe                      :: Tree a -> [a]
fringe (Leaf x)             =  [x]
fringe (Branch left right)  =  fringe left ++ fringe right
The type Tree and the function fringe should be familiar; they were given as examples in Section
2.2.1. [Because of the where keyword, layout is active at the top level of a module, and thus the
declarations must all line up in the same column (typically the first).] This module explicitly
exports Tree, Leaf, Branch, and fringe; if the export list following the module keyword were
omitted, all of the names bound at the top level of the module would be exported, which in this
example would have the same effect.
11.1 Qualified Names
There is an obvious problem with importing names directly into the namespace of a module. What if
two imported modules contain different entities with the same name? Haskell solves this problem
using qualified names. An import declaration may use the qualified keyword to cause the imported names to be prefixed by the name of the module imported. These prefixes are followed by
the `.' character without intervening whitespace. [Qualifiers are part of the lexical syntax. Thus,
A.x and A . x are quite different: the first is a qualified name and the second a use of the infix `.'
function.] For example, using the Tree module introduced above:
Some Haskell programmers prefer to use qualifiers for all imported entities, making the source
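The elided example can be reconstructed along these lines (the module Fringe and its contents are an illustration of qualified imports, not necessarily the original's exact code):

```haskell
module Fringe(fringe) where
import Tree(Tree(..))

fringe :: Tree a -> [a]    -- A different implementation of fringe
fringe (Leaf x)     = [x]
fringe (Branch l r) = fringe l ++ fringe r

module Main where
import Tree ( Tree(Leaf,Branch), fringe )
import qualified Fringe ( fringe )

main = do print (fringe (Branch (Leaf 1) (Leaf 2)))
          print (Fringe.fringe (Branch (Leaf 1) (Leaf 2)))
```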
of each name explicit with every use. Others prefer short names and only use qualiers when
absolutely necessary.
Qualifiers are used to resolve conflicts between different entities which have the same name. But
what if the same entity is imported from more than one module? Fortunately, such name clashes
are allowed: an entity can be imported by various routes without conflict. The compiler knows
whether entities from different modules are actually the same.
11.2 Abstract Data Types

Aside from controlling namespaces, modules provide the only way to build abstract data types
(ADTs) in Haskell. The characteristic feature of an ADT is that the representation type is hidden:
all operations on the ADT are done at an abstract level which does not depend on the
representation. For example, suppose we wish to define a simple tree ADT with the following
signature:

data Tree a             -- just the type name
leaf                    :: a -> Tree a
branch                  :: Tree a -> Tree a -> Tree a
cell                    :: Tree a -> a
left, right             :: Tree a -> Tree a
isLeaf                  :: Tree a -> Bool
A module supporting this is:
module TreeADT (Tree, leaf, branch, cell,
left, right, isLeaf) where
data Tree a         =  Leaf a | Branch (Tree a) (Tree a)

leaf                =  Leaf
branch              =  Branch
cell  (Leaf a)      =  a
left  (Branch l r)  =  l
right (Branch l r)  =  r
isLeaf (Leaf _)     =  True
isLeaf _            =  False
Note that in the export list the type name Tree appears alone (i.e. without its constructors); thus
Leaf and Branch are not exported, and the only way to build or take apart trees outside of the
module is by using the various (abstract) operations. The advantage of this information hiding is
that at a later time we could change the representation type without affecting users of
the type.
11.3 More Features
Here is a brief overview of some other aspects of the module system. See the report for more details.
An import declaration may selectively hide entities using a hiding clause in the import
declaration. This is useful for explicitly excluding names that are used for other purposes
without having to use qualifiers for other imported names from the module.
An import may contain an as clause to specify a different qualifier than the name of the
importing module. This can be used to shorten qualifiers from modules with long names or
to easily adapt to a change in module name without changing all qualifiers.
Programs implicitly import the Prelude module. An explicit import of the Prelude overrides
the implicit import of all Prelude names. Thus,
import Prelude hiding (length)

will not import length from the Standard Prelude, allowing the name length to be defined
differently.
Instance declarations are not explicitly named in import or export lists. Every module exports
all of its instance declarations and every import brings all instance declarations into scope.
Class methods may be named either in the manner of data constructors, in parentheses
following the class name, or as ordinary variables.
Although Haskell's module system is relatively conservative, there are many rules concerning the
import and export of values. Most of these are obvious; for instance, it is illegal to import two
different entities having the same name into the same scope. Other rules are not so obvious; for
example, for a given type and class, there cannot be more than one instance declaration for that
combination of type and class anywhere in the program. The reader should read the Report for
details (§5).
12 Typing Pitfalls

This short section gives an intuitive description of a few common problems that novices run into
using Haskell's type system.
12.1 Let-Bound Polymorphism
Any language using the Hindley-Milner type system has what is called let-bound polymorphism,
because identifiers not bound using a let or where clause (or at the top level of a module) are
limited with respect to their polymorphism. In particular, a lambda-bound function (i.e., one passed
as argument to another function) cannot be instantiated in two different ways. For example, this
program is illegal:
let f g = (g [], g 'a')
in f (\x->x)                     -- ill-typed expression
because g, bound to a lambda abstraction whose principal type is a->a, is used within f in two
different ways: once with type [a]->[a], and once with type Char->Char.
12.2 Numeric Overloading

It is easy to forget at times that numerals are overloaded and not implicitly coerced to the various
numeric types, as in many other languages. A common numeric typing error is something like
sum xs / length xs: the (/) operator requires fractional arguments, but length's result is an
Int. The type mismatch must be corrected with an explicit coercion:

average     :: (Fractional a) => [a] -> a
average xs  =  sum xs / fromIntegral (length xs)
12.3 The Monomorphism Restriction

Another common violation of the type system involves Haskell's monomorphism restriction; the
reader may refer to §4.5.5 of the Report for a detailed discussion. A simpler explanation follows:
The monomorphism restriction says that any identifier bound by a pattern binding (which includes bindings to a single identifier), and having no explicit type signature, must be monomorphic.
An identifier is monomorphic if it is either not overloaded, or is overloaded but is used in at most one
specific overloading and is not exported. Violations of this restriction result in a static type error;
the simplest way to avoid the problem is to provide an explicit type signature. A common
violation of the restriction happens with functions defined in a higher-order manner,
as in this definition of sum from the Standard Prelude:
sum  =  foldl (+) 0
As is, this would cause a static type error. We can fix the problem by adding the type signature:
sum  :: (Num a) => [a] -> a
Also note that this problem would not have arisen if we had written:
sum xs  =  foldl (+) 0 xs
because the restriction only applies to pattern bindings.
13 Arrays
Ideally, arrays in a functional language would be regarded simply as functions from indices to values,
but pragmatically, in order to assure efficient access to array elements, we need to be sure we can
take advantage of the special properties of the domains of these functions, which are isomorphic
to finite contiguous subsets of the integers. Haskell, therefore, does not treat arrays as general
functions with an application operation, but as abstract data types with a subscript operation.
Two main approaches to functional arrays may be discerned: incremental and monolithic definition. In the incremental case, we have a function that produces an empty array of a given size
and another that takes an array, an index, and a value, producing a new array that differs from
the old one only at the given index. Obviously, a naive implementation of such an array semantics
would be intolerably inefficient, either requiring a new copy of an array for each incremental
redefinition or taking linear time for array lookup. Serious attempts at using this approach therefore
employ sophisticated analyses to avoid excessive copying. The monolithic approach, on the other
hand, constructs an array all at once, without reference to intermediate array values; although
Haskell has an incremental array update operator, the main thrust of the array facility is monolithic.
13.1 Index types
The Ix library defines a type class of array indices:

class (Ord a) => Ix a where
    range    :: (a,a) -> [a]
    index    :: (a,a) -> a -> Int
    inRange  :: (a,a) -> a -> Bool
Instance declarations are provided for Int, Integer, Char, Bool, and tuples of Ix types up to length
5; in addition, instances may be automatically derived for enumerated and tuple types. We regard
the primitive types as vector indices, and tuples as indices of multidimensional rectangular arrays.
Note that the first argument of each of the operations of class Ix is a pair of indices; these are
typically the bounds (first and last indices) of an array. For example, the bounds of a 10-element,
zero-origin vector with Int indices would be (0,9), while a 100 by 100 1-origin matrix might have
the bounds ((1,1),(100,100)). (In many other languages, such bounds would be written in a
form like 1:100, 1:100, but the present form fits the type system better, since each bound is of
the same type as a general index.)
The range operation takes a bounds pair and produces the list of indices lying between those
bounds, in index order. For example,
range (0,4)          ⇒  [0,1,2,3,4]
range ((0,0),(1,2))  ⇒  [(0,0),(0,1),(0,2),(1,0),(1,1),(1,2)]
13.2 Array Creation
Haskell's monolithic array creation function forms an array from a pair of bounds and a list of
index-value pairs (an association list):
array  :: (Ix a) => (a,a) -> [(a,b)] -> Array a b
Here, for example, is a definition of an array of the squares of numbers from 1 to 100:
squares  =  array (1,100) [(i, i*i) | i <- [1..100]]
This array expression is typical in using a list comprehension for the association list; in fact, this
usage results in array expressions much like the array comprehensions of the language Id [6].
Array subscripting is performed with the infix operator !, and the bounds of an array can be
extracted with the function bounds:

squares!7       ⇒  49
bounds squares  ⇒  (1,100)
We might generalize this example by parameterizing the bounds and the function to be applied to
each index:
mkArray         :: (Ix a) => (a -> b) -> (a,a) -> Array a b
mkArray f bnds  =  array bnds [(i, f i) | i <- range bnds]
Thus, we could define squares as mkArray (\i -> i * i) (1,100).
Many arrays are defined recursively; that is, with the values of some elements depending on the
values of others. Here, for example, is a function returning an array of Fibonacci numbers:

fibs    :: Int -> Array Int Int
fibs n  =  a  where a = array (0,n) ([(0, 1), (1, 1)] ++
                                     [(i, a!(i-2) + a!(i-1)) | i <- [2..n]])
Another example of such a recurrence is the n by n wavefront matrix, in which elements of the
first row and first column all have the value 1 and other elements are sums of their neighbors to
the west, northwest, and north:

wavefront    :: Int -> Array (Int,Int) Int
wavefront n  =  a  where
                  a = array ((1,1),(n,n))
                       ([((1,j), 1) | j <- [1..n]] ++
                        [((i,1), 1) | i <- [2..n]] ++
                        [((i,j), a!(i,j-1) + a!(i-1,j-1) + a!(i-1,j))
                                   | i <- [2..n], j <- [2..n]])

The wavefront matrix is so called because in a parallel implementation the recurrences generated
by this definition can be computed in the first row and column in parallel and proceed as a wedge-shaped
wave, traveling from northwest to southeast. It is important to note, however, that no order of
computation is specified by the association list.
In each of our examples so far, we have given a unique association for each index of the array
and only for the indices within the bounds of the array, and indeed, we must do this in general
for an array to be fully defined. An association with an out-of-bounds index results in an error; if an
index is missing or appears more than once, however, there is no immediate error, but the value of
the array at that index is then undefined, so that subscripting the array with such an index yields
an error.
13.3 Accumulation
We can relax the restriction that an index appear at most once in the association list by specifying
how to combine multiple values associated with a single index; the result is called an accumulated
array:
accumArray  :: (Ix a) => (b -> c -> b) -> b -> (a,a) -> [(a,c)] -> Array a b
The first argument of accumArray is the accumulating function, the second is an initial value (the
same for each element of the array), and the remaining arguments are bounds and an association
list, as with the array function. Typically, the accumulating function is (+), and the initial value,
zero; for example, this function takes a pair of bounds and a list of values (of an index type) and
yields a histogram; that is, a table of the number of occurrences of each value within the bounds:
hist          :: (Ix a, Integral b) => (a,a) -> [a] -> Array a b
hist bnds is  =  accumArray (+) 0 bnds [(i, 1) | i <- is, inRange bnds i]
Suppose we have a collection of measurements on the interval [a, b), and we want to divide the
interval into decades and count the number of measurements within each:

decades      :: (RealFrac a) => a -> a -> [a] -> Array Int Int
decades a b  =  hist (0,9) . map decade
                where decade x  =  floor ((x - a) * s)
                      s         =  10 / (b - a)
13.4 Incremental updates
In addition to the monolithic array creation functions, Haskell also has an incremental array update
function, written as the infix operator //; the simplest case, an array a with element i updated to
v, is written a // [(i, v)]. The reason for the square brackets is that the left argument of (//)
is an association list, usually containing a proper subset of the indices of the array:
(//)
:: (Ix a) => Array a b -> [(a,b)] -> Array a b
As with the array function, the indices in the association list must be unique for the values to be
defined. For example, here is a function to interchange two rows of a matrix:
swapRows :: (Ix a, Ix b, Enum b) => a -> a -> Array (a,b) c -> Array (a,b) c
swapRows i i' a = a // ([((i ,j), a!(i',j)) | j <- [jLo..jHi]] ++
[((i',j), a!(i ,j)) | j <- [jLo..jHi]])
where ((iLo,jLo),(iHi,jHi)) = bounds a
The concatenation here of two separate list comprehensions over the same list of j indices is,
however, a slight inefficiency; it's like writing two loops where one will do in an imperative language.
Never fear, we can perform the equivalent of a loop fusion optimization in Haskell:
swapRows i i' a = a // [assoc | j <- [jLo..jHi],
assoc <- [((i ,j), a!(i',j)),
((i',j), a!(i, j))] ]
where ((iLo,jLo),(iHi,jHi)) = bounds a
13.5 An example: Matrix Multiplication

We complete our introduction to Haskell arrays with the familiar example of matrix multiplication,
taking advantage of overloading to define a fairly general function. Since only multiplication and
addition on the element type of the matrices is involved, we get a function that multiplies matrices
of any numeric type. A higher-order variant, genMatMult, replaces sum and (*) by functional
parameters; APL fans will recognize the usefulness of instances such as genMatMult maximum (-)
and genMatMult and (==). With the first of these, the arguments are numeric matrices, and the
(i,j)-th element of the result is the maximum difference between corresponding elements of the
i-th row of the first argument and the j-th column of the second; with the second, the result is a
Boolean matrix in which element (i,j) is True if and only if the i-th row of the first argument and
j-th column of the second are equal as vectors.
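A sketch of the two functions discussed (a reconstruction; the bounds checking follows the conformability requirement described in the text):

```haskell
matMult      :: (Ix a, Ix b, Ix c, Num d) =>
                Array (a,b) d -> Array (b,c) d -> Array (a,c) d
matMult x y  =  array resultBounds
                      [((i,j), sum [x!(i,k) * y!(k,j) | k <- range (lj,uj)])
                                    | i <- range (li,ui), j <- range (lj',uj')]
  where ((li,lj),(ui,uj))     = bounds x
        ((li',lj'),(ui',uj')) = bounds y
        resultBounds
          | (lj,uj)==(li',ui') = ((li,lj'),(ui,uj'))
          | otherwise          = error "matMult: incompatible bounds"

-- Generalized version: sum and (*) become the parameters sum' and star.
genMatMult  :: (Ix a, Ix b, Ix c) =>
               ([f] -> g) -> (d -> e -> f) ->
               Array (a,b) d -> Array (b,c) e -> Array (a,c) g
genMatMult sum' star x y =
    array resultBounds
          [((i,j), sum' [x!(i,k) `star` y!(k,j) | k <- range (lj,uj)])
                        | i <- range (li,ui), j <- range (lj',uj')]
  where ((li,lj),(ui,uj))     = bounds x
        ((li',lj'),(ui',uj')) = bounds y
        resultBounds
          | (lj,uj)==(li',ui') = ((li,lj'),(ui,uj'))
          | otherwise          = error "genMatMult: incompatible bounds"
```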
Notice that the element types of genMatMult need not be the same, but merely appropriate
for the function parameter star. We could generalize still further by dropping the requirement
that the first column index and second row index types be the same; clearly, two matrices could
be considered conformable as long as the lengths of the columns of the first and the rows of the
second are equal. The reader may wish to derive this still more general version. (Hint: Use the
index operation to determine the lengths.)
14 The Next Stage
A large collection of Haskell resources is available on the web at haskell.org. Here you will find
compilers, demos, papers, and much valuable information about Haskell and functional programming. Haskell compilers or interpreters run on almost all hardware and operating systems. The
Hugs system is both small and portable; it is an excellent vehicle for learning Haskell.
15
Acknowledgements.
References
[1] R. Bird. Introduction to Functional Programming using Haskell. Prentice Hall, New York, 1998.
[2] A.Davie. Introduction to Functional Programming System Using Haskell. Cambridge University
Press, 1992.
[3] P. Hudak. Conception, evolution, and application of functional programming languages. ACM
Computing Surveys, 21(3):359{411, 1989.
[4] Simon Peyton Jones (editor). Report on the Programming Language Haskell 98, A Non-strict
Purely Functional Language. Yale University, Department of Computer Science Tech Report
YALEU/DCS/RR-1106, Feb 1999.
[5] Simon Peyton Jones (editor) The Haskell 98 Library Report. Yale University, Department of
Computer Science Tech Report YALEU/DCS/RR-1105, Feb 1999.
[6] R.S. Nikhil. Id (version 90.0) reference manual. Technical report, Massachusetts Institute of
Technology, Laboratory for Computer Science, September 1990.
[7] J. Rees and W. Clinger (eds.). The revised3 report on the algorithmic language Scheme. SIGPLAN Notices, 21(12):37{79, December 1986.
[8] G.L. Steele Jr. Common Lisp: The Language. Digital Press, Burlington, Mass., 1984.
[9] P. Wadler. How to replace failure by a list of successes. In Proceedings of Conference on
Functional Programming Languages and Computer Architecture, LNCS Vol. 201, pages 113{
128. Springer Verlag, 1985.
[10] P. Wadler. Monads for Functional Programming In Advanced Functional Programming ,
Springer Verlag, LNCS 925, 1995. | https://pt.scribd.com/doc/67153439/haskell-98-tutorial | CC-MAIN-2017-09 | refinedweb | 16,058 | 58.92 |
So you have build a great web application using laravel. Now you wan to create a Dynamic XML Sitemap for better SEO, which is great. Google says that Using a sitemap doesn't guarantee that all the items in our sitemap will be crawled and indexed, as Google processes rely on complex algorithms to schedule crawling. However, in most cases, our site will benefit from having a sitemap, and we will not be penalized for having one. So why not create a sitemap using the controllers, views and routes in our awesome laravel app and make Google, Bing and other search engine robots happy.
According to google, Here are some of the reasons why you should use sitemap:
- Your website is really large where google web crawlers might overlook crawling some of your recently updated pages.
- Your website has really large archive where pages are not well linked to each others.
- Your website is new and has few external links as a result Google might not discover your pages if no other sites link to them.
- Your site uses rich media content, is shown in Google News, or uses other sitemaps-compatible annotations.
You can read more about it in google webmaster support page.
What is sitemap made of?
It is basically made of individual
<url> for each pages of your site. Inside each
<loc> it points to a location of the file that it includes the
<url> of. Also there are optional fields such as last modification date
<lastmod>, change frequency
<changefreq>, priority
<priority>etc. You can read more about sitemap protocal in their official page.
Have a look at the example sitemap below:
<?xml version="1.0" encoding="UTF-8"?> <urlset xmlns=""> <url> <loc></loc> <lastmod>2005-01-01</lastmod> <changefreq>monthly</changefreq> <priority>0.8</priority> </url> </urlset>
The benefits of having more than one sitemap
Let's say we have database tables for Articles, Categories, Questions and Tags. We can create seperate xml sitemaps for each of them which will be easily managable and also be very clear to read for both humans and search engine robots. Then we will include all those 4 sitemaps into one index file and submit to google, bing or wherever we may please.
Getting Started
We will be creating 4 different sitemaps for 4 different database tables and include all of them in one main sitemap index. It is a requirment to have a main sitemap index if we are going to have multiple sitemaps. Each sitemap can hold around 50000 urls.
Creating the sitemap controller
Let's create a new controller for our sitemaps.
php artisan make:controller SitemapController
Creating the sitemap index
Now your newly created sitemap controller must look something like this:
<?php namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Http\Requests; class SitemapController extends Controller { // }
Let's create a index method inside
SitemapController class that will generate all the xml sitemaps we need. We will create 4 sitemapes for 4 of our database tables which are Articles, Categories, Questions and Tags. All of them will be included in one single sitemap index.
public function index() { $articles = Article::all()->first(); $categories = Category::all()->first(); $questions = Question::all()->first(); $tags = Tag::all()->first(); return response()->view('sitemap.index', [ 'articles' => $articles, 'categories' => $categories, 'questions' => $questions, 'tags' => $tags, ])->header('Content-Type', 'text/xml'); }
Here we returned a response object to the view, and set the
text/xml header to make sure the header is available to the view, we have includeed the response first.
Make sure you have called the models on top of your class like this:
use App\Article; use App\Category; use App\Question; use App\Tag;
Creating the sitemap view
Go ahead and create sitemap folder in your laravel application resources/views/sitemap and create a file
index.blade.php. We are going to wrap 4 of our
<loc>. This is how our index page must look like:
Replace
project.app:8000with your website name such as
mywebsite.com
<?php echo '<>
Creating the dynamic url's for the sitemap
Head over to
SitemapController class and create more methods for each of the Database Tables that we want to generate url's to include in the sitemap. Here I have created methods for articles, categories, questions and tags.
public function articles() { $articles = Article::latest()->get(); return response()->view('sitemap.articles', [ 'articles' => $articles, ])->header('Content-Type', 'text/xml'); } public function categories() { $categories = Category::all(); return response()->view('sitemap.categories', [ 'categories' => $categories, ])->header('Content-Type', 'text/xml'); } public function questions() { $questions = Question::latest()->get(); return response()->view('sitemap.questions', [ 'questions' => $questions, ])->header('Content-Type', 'text/xml'); } public function tags() { $tags = Tag::all(); return response()->view('sitemap.tags', [ 'tags' => $tags, ])->header('Content-Type', 'text/xml'); }
Now that we have done pretty good job with controllers, lets head over to resources/views/sitemap and create views for each articles, categories, questions and tags.
Below is how my
articles.blade.php looks like. I have put my
changfreq as weekly which can be changed to daily, monthly or yearlytoo. You can also set the
priority which ranges from 0.1 to 1.0. you can read more about it in sitemaps protocal page.
Please be aware that setting the priority or change frequency high or low is done from your side only. It is upto search engine robots to do the honour.
<?php echo '<?xml version="1.0" encoding="UTF-8"?>'; ?> <urlset xmlns=""> @foreach ($articles as $article) <url> <loc>{{ $article->slug }}</loc> <lastmod>{{ $article->created_at->tz('UTC')->toAtomString() }}</lastmod> <changefreq>weekly</changefreq> <priority>0.9</priority> </url> @endforeach </urlset>
Now repeat the same process for categories, questions and tags.
Please keep in mind that
<lastmod>,
<changefreq>and
<priority>tags are optional. So feel free to avoid them if you like.
Creating routes for the sitemap
Open up your
routes.php file and add the new routes for sitemaps.
Route::get('/sitemap.xml', '[email protected]'); Route::get('/sitemap.xml/articles', '[email protected]'); Route::get('/sitemap.xml/categories', '[email protected]'); Route::get('/sitemap.xml/questions', '[email protected]'); Route::get('/sitemap.xml/tags', '[email protected]');
Quick note: If you encounter any error loading your page view, make sure to place these sitemaps routes to the very end of all your routes list at
routes.phpor vice versa. Sometimes they conflict with each other.
Now we have controllers and views ready for dynamic sitemap. The reason it is dynamic is because everytime you create a new article, categories, questions or tags. We don't need to add any url's to sitemap manuallly, It will be automatically included in sitemap because we are using controllers to do this excellent job.
Now I can go to to see the list of sitemaps that I created. If you go to your
website.com/sitemap.xml, You should also see the list of sitemaps you created like the screenshot below.
You need to submit only one url
website.com/sitemap.xml to google. You can also go to individual sitemaps and see for yourself the list of articles links that is generated dynamically.
Articles sitemap screenshot
Conclusion
In this article, We learned how to use controllers, views and routes and do things the way we usually do in laravel to create a dynamic xml sitemap. The best part of it is that we have created individual sitemaps for each of our database tables and inform search engines about our url's that will synchronize dynamically.
All this without using any third party packages. This is great! I hope this article was helpful for you. Leave your comments below for further discussion. | https://kaloraat.com/articles/create-a-dynamic-xml-sitemap-in-laravel | CC-MAIN-2020-05 | refinedweb | 1,265 | 56.66 |
This is something I've been playing with over the last few days. When I wrote my blog engine, I wasn't crazy about database optimization yet [1]. I had a look recently at the queries generated to render the home page and I noticed that I had a too many extra queries for each articles.
The Problem
Tagging
In the same way, when you register a model with django-tagging, the tags are fetched only when they are accessed. For example:
{% with entry.tags as tags %} {% if tags %}Tags: {% for tag in tags %} <a href="{% url blog:tag tag %}">{{ tag }}</a> {% endfor %} {% endif %} {% endwith %}
Again, this code executed for every listed entry will result in O(N) queries. Even worse, if I remove the {% with %} statement, the tags are fetched twice: once for the {% if tags %} check, and once for the {% for %} loop.
The solution
Tags and comments share the particularity not to change often (at least on my blog). And they're only attached to blog entries. So the idea is to add two columns and use them to cache the tags and the comment count in a persistent way. The key to keep the cached values in sync with your real data is to use Django signals.
Let's add some fields to our “Blog Entry” model. The tags will be stored as a string and separated by a comma, so we need a method to get the list of tags as a proper list.
class Entry(models.Model): # Your fields here... comment_count = models.PositiveIntegerField(_('Comment count'), default=0) cached_tags = models.CharField(_('Tags'), max_length=1023, blank=True) def get_cached_tags(self): if self.cached_tags: return self.cached_tags.split(',') return None
Then we need a function to update our comment count. This is a receiver function which we will connect to django signals.
def update_comment_count(sender, instance, created, **kwargs): count = Comment.objects.filter(site=settings.SITE_ID, object_pk=instance.object_pk, content_type=instance.content_type, is_public=True, is_removed=False).count() Entry.objects.filter(pk=instance.object_pk).update(comment_count=count)
Another function to update the cached tags:
def update_tags(sender, instance, created, **kwargs): ctype = ContentType.objects.get(app_label='blog', model='entry') tags = tagging.models.TaggedItem.objects.filter(content_type=ctype, object_id=instance.object.id).select_related().values_list('tag__name', flat=True) instance.object.cached_tags = ','.join(tags) instance.object.save()
To keep the cached attributes in sync with our data, we need to register the functions to update the values each time a comment is posted and each time an object is tagged. This is done by connecting the receiver functions to a post_save signal:
models.signals.post_save.connect(update_comment_count, sender=Comment) models.signals.post_save.connect(update_tags, sender=TaggedItem)
Now, you can add tags and comments to your blog entries and the cached values will be automatically updated. Note however that if you need the values to be updated when a comment or a tag is deleted, you need to register the receiver functions to the post_delete signal (and get rid of the created argument). Also, a QuerySet.update() query doesn't send any signal at all so you may want to update the cache manually after doing such a query.
That's it! Down from 30 to 3 queries, from O(N) to O(1).
I use contrib.comments and its set of template tags to generate the forms, render the comments and display the comment count for each blog entry on the homepage. For example:
This code issues a COUNT query. It's no big deal when there is a single entry but the homepage loops over a list of entries and the result is O(N) queries. | http://bruno.im/2010/jun/27/django-signals-consistent-caching/ | CC-MAIN-2019-18 | refinedweb | 605 | 56.86 |
Data Science interview questions and answers for 2021 on topics ranging from probability, statistics, data science – to help crack data science job interviews.
Hone yourself to be the ideal candidate at your next data scientist job interview with these frequently asked data science interview questions. Data Scientist interview questions asked at a job interview can fall into one of the following categories -
Data Science Technical Interview Questions based on data science programming languages like Python, R, etc.
Data Science Technical Interview Questions based on statistics, probability, math, machine learning, etc.
Practical experience or Role-based data scientist interview questions based on the projects you have worked on and how they turned out.
Apart from interview questions, we have also put together a collection of 100+ ready-to-use Data Science solved code examples. Each code example solves a specific use case for your project. These can be of great help in answering interview questions and also a handy guide when working on data science projects.
In collaboration with data scientists, industry experts, and top counsellors, we have put together a list of general data science interview questions and answers to help you prepare for applying for data science jobs. This first part of a series of data science interview questions and answers articles focuses only on common topics like data, probability, statistics, and other data science concepts. This blog also includes a list of open-ended questions that interviewers ask to get a rough idea of how often and quickly you can think on your feet. Some data analyst interview questions in this blog can also be asked in a data science interview. These kinds of analytics interview questions are asked to measure if you were successful in applying data science techniques to real-life problems.
1. What is Machine Learning?
Machine Learning comprises two words, machine and learning, which hint towards its definition: a subdomain of computer science that deals with applying mathematical algorithms to identify trends or patterns in a dataset.
The simplest example is the usage of linear regression (y=mt+c) to predict the output of a variable y as a function of time. The machine learning model learns the trends in the dataset by fitting the equation on the dataset and evaluating the best set of values for m and c. One can then use these equations to predict future values.
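As a minimal sketch of that idea, the best values of m and c for y = m*t + c can be found in closed form by ordinary least squares (the data points below are made up for illustration):

```python
# Illustrative sketch: fitting y = m*t + c by ordinary least squares,
# using the closed-form solution for the slope and intercept.

def fit_line(ts, ys):
    """Return (m, c) minimising the squared error of y = m*t + c."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    # m = covariance(t, y) / variance(t)
    num = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys))
    den = sum((t - mean_t) ** 2 for t in ts)
    m = num / den
    c = mean_y - m * mean_t
    return m, c

ts = [0, 1, 2, 3, 4]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]   # roughly y = 2t + 1
m, c = fit_line(ts, ys)
predicted_next = m * 5 + c        # extrapolate the trend to t = 5
```

Once m and c are learned from the observed data, the same equation predicts future values, which is exactly the "learning the trend" step described above.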
2. Quickly differentiate between Machine Learning, Data Science, and AI.
Artificial Intelligence (AI) is the broad discipline of building systems that exhibit intelligent behavior. Machine Learning is a subset of AI in which models learn patterns from data rather than following explicitly programmed rules. Data Science is the wider practice of extracting insights from data; it uses machine learning as one of its tools alongside statistics, data engineering, and visualization.
3. Out of Python and R, which is your preference for performing text analysis?
Python is likely to be everyone's choice for text analysis, as it has libraries like Natural Language Toolkit (NLTK), Gensim, CoreNLP, SpaCy, and TextBlob that are useful for text analysis.
4. What are Recommender Systems?
Understanding consumer behavior is often the primary goal of many businesses. For example, consider the case of Amazon. If a user searches for a product category on its website, the major challenge for Amazon’s backend algorithms is to come up with suggestions that are likely to motivate the users to make a purchase. And such algorithms are the heart of recommendation systems or recommender systems. These systems aim at analyzing customer behavior and evaluating their fondness for different products. Apart from Amazon, recommender systems are also used by Netflix, Youtube, Flipkart, etc.
5. Why does data cleaning play a vital role in analysis?
It is cumbersome to clean data from multiple sources to transform it into a format that data analysts or scientists can work with. As the number of data sources increases, the time required to clean the data grows exponentially with both the number of sources and the volume of data they generate. Cleaning can take up to 80% of the total time, making it a critical part of the analysis task.
6. Define Collaborative filtering.
Collaborative filtering is the technique most recommender systems use to identify patterns or information by combining viewpoints, multiple data sources, and multiple agents, in effect predicting a user's preferences from the behavior of similar users.
7. What is an Eigenvalue and Eigenvector?
Eigenvectors are used for understanding linear transformations. They are the directions along which a particular linear transformation acts by flipping, compressing, or stretching. Eigenvalues can be referred to as the strength of the transformation in the direction of the eigenvector or the factor by which the compression occurs. We usually calculate the eigenvectors for a correlation or covariance matrix in data analysis.
8. What is Gradient Descent?
Gradient descent is an iterative procedure that minimizes the cost function parametrized by model parameters. It is an optimization method based on convex functions that trims the parameters iteratively to help the given function attain its local minimum. The gradient measures the change in error with respect to a change in the parameters. Imagine a blindfolded person on top of a hill who wants to reach a lower altitude. A simple technique is to feel the ground in every direction and take a step in the direction where the ground descends fastest. Here we need the learning rate, which determines the size of the step taken towards the minimum. The learning rate should be chosen neither too high nor too low: if it is too high, the updates bounce back and forth across the minimum of the convex cost function; if it is too low, convergence to the minimum is very slow.
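The hill analogy can be sketched in a few lines; the toy cost function f(x) = (x - 3)^2 and its gradient below are chosen purely for illustration:

```python
# Minimal gradient descent sketch: minimise f(x) = (x - 3)**2,
# whose gradient is f'(x) = 2*(x - 3). The minimum is at x = 3.

def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)   # step downhill along the gradient
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With learning_rate=0.1 each step shrinks the distance to the minimum by a constant factor; setting it much larger would make the iterate overshoot and oscillate, which is the "bouncing back and forth" behaviour described above.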
9. Differentiate between a multi-label classification problem and a multi-class classification problem.
In a multi-class classification problem, each sample belongs to exactly one of three or more mutually exclusive classes (e.g., labeling a fruit as an apple, banana, or orange). In a multi-label classification problem, each sample can carry several labels at once (e.g., a news article tagged with both politics and economy).
10. What are the various steps involved in an analytics project?
Understand the business problem and convert it into a data analytics problem.
Use exploratory data analysis techniques to understand the given dataset.
With the help of feature selection and feature engineering methods, prepare the training and testing dataset.
Explore machine learning/deep learning algorithms and use one to build a training model.
Feed training dataset to the model and improve the model’s performance by analyzing various statistical parameters.
Test the performance of the model using the testing dataset.
Deploy the model, if needed, and monitor the model performance.
11. What is the difference between feature selection and feature engineering methods?
Feature selection picks the subset of existing features that is most relevant to the target variable (e.g., via F-tests, forward/backward elimination, or regularisation), whereas feature engineering creates new features from the raw data through transformations, aggregations, and encodings. Selection reduces dimensionality; engineering enriches the representation the model learns from.
12. What do you know about MLOps tools? Have you ever used them in a machine learning project?
MLOps tools are the tools that are used to produce and monitor the enterprise-grade deployment of machine learning models. Examples of such tools are MLflow, Pachyderm, Kubeflow, etc.
In case you haven’t worked on an MLOps project, try this MLOps project by Goku Mohandas on Github or this MLOps Project on GCP using Kubeflow for Model Deployment by ProjectPro.
13. What do you understand by logistic regression? Explain one of its use-cases.
Logistic regression is one of the most popular machine learning models used for solving a binary classification problem, that is, a problem where the output can take only one of two possible values. Its equation is given by

Y = 1 / (1 + e^-(a + bX))

where X represents the feature variable, a and b are the coefficients, and Y is the predicted probability for the target variable. Usually, if the value of Y is greater than some threshold value, the input is labeled with class A; otherwise, it is labeled with class B. A classic use-case is spam detection: given features of an email, the model outputs the probability that it is spam, and the email is flagged when that probability crosses the threshold.
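A minimal sketch of how a fitted logistic regression turns a feature value into a class label; the coefficient values a and b here are made up for illustration:

```python
import math

# Sketch: a fitted logistic regression maps a feature value to a probability
# via the sigmoid, then thresholds it into a class label.
# The coefficients a and b are made-up illustration values.

def predict(x, a=-4.0, b=1.0, threshold=0.5):
    p = 1.0 / (1.0 + math.exp(-(a + b * x)))  # sigmoid maps scores to (0, 1)
    return ("A" if p > threshold else "B"), p

label, p = predict(6.0)   # score a + b*x = 2, sigmoid(2) is about 0.88
```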
14. How are univariate, bivariate, and multivariate analyses different from each other?
Univariate analysis describes one variable at a time (e.g., histograms, summary statistics). Bivariate analysis studies the relationship between two variables (e.g., scatter plots, correlation). Multivariate analysis examines three or more variables simultaneously (e.g., multiple regression, principal component analysis).
15. What is K-means?
K-means clustering algorithm is an unsupervised machine learning algorithm that classifies a dataset with n observations into k clusters. Each observation is labeled to the cluster with the nearest mean.
16. How will you find the right K for K-means?
To find the optimal value for k, one can use the elbow method or the silhouette method.
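As a toy illustration of the elbow method (the data points and cluster counts below are made up), one can compute the within-cluster sum of squares for several values of k with a tiny hand-rolled one-dimensional k-means:

```python
import random

# Toy illustration of the elbow method on 1-D data: run k-means for
# several values of k and watch the within-cluster sum of squares (WCSS).
# The "elbow", where the WCSS stops dropping sharply, suggests the right k.

def kmeans_1d(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        # recompute each center as its cluster mean (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    wcss = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, wcss

# Two clearly separated groups around 0 and 10, so the elbow is at k = 2.
points = [0.1, 0.2, 0.0, 9.9, 10.1, 10.0]
wcss_by_k = {k: kmeans_1d(points, k)[1] for k in (1, 2, 3)}
# Expect a big drop from k=1 to k=2, and only a tiny gain after that.
```

In practice one would plot wcss_by_k and pick the k at the bend of the curve; scikit-learn's KMeans exposes the same quantity as `inertia_`.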
17. What do you understand by long and wide data formats?
In wide data format, you will find a column for each variable in the dataset. On the other hand, in a long format, the dataset has a column for specific variable types & a column for the values of those variables.
For example, a wide-format table might keep one row per country with a separate column for each year's value, while the long-format version of the same data has just three columns (country, year, and value) with one row per country-year pair.
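A plain-Python sketch of the wide-to-long reshape (pandas performs the same operation with pandas.melt; the column names and values here are invented for illustration):

```python
# Sketch of reshaping wide-format records into long format in plain Python.
# Wide: one row per country, one column per year.
# Long: one row per (country, year) pair.

wide = [
    {"country": "A", "1990": 20, "2000": 25},
    {"country": "B", "1990": 15, "2000": 18},
]

def wide_to_long(rows, id_col, value_name="value", var_name="year"):
    long_rows = []
    for row in rows:
        for key, value in row.items():
            if key == id_col:
                continue  # keep the identifier, melt everything else
            long_rows.append({id_col: row[id_col], var_name: key,
                              value_name: value})
    return long_rows

long_fmt = wide_to_long(wide, id_col="country")
# One row per (country, year) pair instead of one column per year.
```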
18. What do you understand by feature vectors?
Feature vectors are the set of variables containing values describing each observation’s characteristics in a dataset. These vectors serve as input vectors to a machine learning model.
19. How does the use of dropout work as a regulariser for deep neural networks?
Dropout is a regularisation method for deep neural networks that effectively trains an ensemble of different network architectures on a given dataset. During each training step, a randomly chosen fraction of the units (neurons) in a layer is temporarily dropped from the network. This introduces noise by compelling the remaining nodes within a layer to probabilistically take on more or less responsibility for the input values, which prevents units from co-adapting with units in prior layers and makes the model more robust.
20. How beneficial is dropout regularisation in deep learning models? Does it speed up or slow down the training process, and why?
The dropout regularisation method mostly proves beneficial for cases where the dataset is small, and a deep neural network is likely to overfit during training. The computational factor has to be considered for large datasets, which may outweigh the benefit of dropout regularisation.
The dropout method randomly deactivates individual units, not entire layers, during each training step. Because fewer units are active in a given pass, each update can be slightly cheaper to compute, but the added noise makes each update less informative, so the network typically needs more epochs to converge and overall training is often slowed rather than sped up.
21. How will you explain logistic regression to an economist, physician-scientist, and biologist?
Logistic regression is one of the simplest machine learning algorithms. It is used to predict the relationship between a categorical dependent variable and one or more independent variables. The mathematical formula is given by

Y = 1 / (1 + e^-(a + bX))

where X is the independent variable, a and b are the coefficients, and Y is the probability that the dependent variable takes a particular categorical value. The core explanation stays the same for an economist, a physician-scientist, or a biologist; only the example changes, e.g., predicting loan default from income, disease presence from a biomarker level, or species occurrence from habitat variables.
22. What is the benefit of batch normalization?
The model is less sensitive to hyperparameter tuning.
High learning rates become acceptable, which results in faster training of the model.
Weight initialization becomes an easy task.
Using different non-linear activation functions becomes feasible.
Deep neural networks are simplified because of batch normalization.
It introduces mild regularisation in the network.
23. What is multicollinearity, and how can you overcome it?
A single dependent variable depends on several independent variables in a multiple regression model. When these independent variables turn out to be highly correlated with each other, the model is said to exhibit multicollinearity.
One can overcome multicollinearity in their model by removing a few highly correlated variables from the regression equation.
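One common diagnostic for deciding which variables to drop is the variance inflation factor (VIF). Below is a rough NumPy sketch (statsmodels ships this as `variance_inflation_factor`; the dataset here is synthetic):

```python
import numpy as np

# Sketch: detecting multicollinearity with the variance inflation factor.
# For each feature i, regress it on the remaining features and compute
# VIF_i = 1 / (1 - R_i^2). Values above roughly 5-10 are commonly read
# as a sign of problematic multicollinearity.

def vif(X):
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for i in range(p):
        y = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(n), others])       # add intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2) if r2 < 1 else float("inf"))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
x3 = x1 + 0.01 * rng.normal(size=100)   # nearly a copy of x1
vifs = vif(np.column_stack([x1, x2, x3]))
# x1 and x3 get very large VIFs; x2 stays near 1.
```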
24. What do you understand by the trade-off between bias and variance in Machine Learning? What is its significance?
The expected test MSE (Mean Square Error) at a given point x0 can always be decomposed into the sum of three fundamental quantities: the variance of f̂(x0), the squared bias of f̂(x0), and the variance of the error term ε. That is,

E[(y0 − f̂(x0))^2] = Var(f̂(x0)) + [Bias(f̂(x0))]^2 + Var(ε)

Here E[(y0 − f̂(x0))^2] denotes the expected test MSE: the average test MSE one would obtain by repeatedly estimating f on a large number of training sets and testing each fit at x0. f̂(x0) is the output of the fitted ML model for the input x0, and ε is the irreducible error, the deviation of y0 from the true function value at x0.

The equation above suggests that to minimize the expected test error, we need to select a statistical learning method that simultaneously achieves low variance and low bias; good test-set performance requires both. This is referred to as a trade-off because it is easy to obtain a method with extremely low bias but high variance (for instance, by drawing a curve that passes through every single training observation) or a method with very low variance but high bias (by fitting a horizontal line to the data). The challenge lies in finding a method for which both the variance and the squared bias are low.
25. What do you understand by interpolating and extrapolating the given data?
Interpolating the data means one is estimating the values in between two known values of a variable from the dataset. On the other hand, extrapolating the data means one is estimating the values that lie outside the range of a variable.
26. Do gradient descent methods always converge to the same point?
No, gradient descent methods do not always converge to the same point; in some cases they converge to a local minimum or a saddle point instead of the global optimum. The outcome depends heavily on the data, the initial parameter values, and the learning rate.
27. What is the difference between Supervised Learning and Unsupervised Learning?
In supervised learning, the model is trained on labeled data where every input has a known target output; classification and regression are typical tasks. In unsupervised learning, the model works on unlabeled data and must discover structure on its own; clustering and dimensionality reduction are typical tasks.
28. What is Regularization and what kind of problems does regularization solve?
Regularization is a technique that pushes or encourages the coefficients of a machine learning model towards zero in order to reduce overfitting. The general idea is to penalize complicated models by adding an extra penalty term to the loss function, so that more complex models incur a larger loss. In this way, the model is discouraged from learning too many details and generalizes better.
There are two ways of assigning the additional penalty term to the loss function giving rise to two types of regularization techniques. They are
L2 Regularization
L1 Regularization
In L2 Regularization, the penalty term is the sum of squares of the magnitude of the model coefficients while in L1 Regularization, it is the sum of absolute values of the model coefficients.
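A minimal sketch of L2 (ridge) regularization using its closed-form solution w = (XᵀX + λI)⁻¹Xᵀy; the dataset is synthetic and the λ values are arbitrary illustration choices:

```python
import numpy as np

# Sketch of L2 (ridge) regularisation via its closed-form solution:
#   w = (X^T X + lam * I)^{-1} X^T y
# As lam grows, the coefficients are shrunk towards zero, trading a
# little bias for lower variance (less overfitting).

def ridge_fit(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=50)

w_ols = ridge_fit(X, y, lam=0.0)      # ordinary least squares (no penalty)
w_ridge = ridge_fit(X, y, lam=100.0)  # heavily regularised
# The ridge coefficients have a smaller norm than the OLS ones.
```

The L1 (lasso) penalty has no closed form and instead drives some coefficients exactly to zero; scikit-learn's `Ridge` and `Lasso` implement both in practice.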
29. How can you overcome Overfitting?
We can overcome overfitting using one or more of the following techniques
1. Simplifying the model: We can reduce overfitting by reducing the complexity of the model. We can either remove layers or reduce the number of neurons in the case of a deep learning model, or prefer a lower-order polynomial model in the case of regression.
2. Use Regularization: Regularization is the common technique used to remove the complexity of the model by adding a penalty to the loss function. There are two regularization techniques namely L1 and L2. L1 penalizes the sum of absolute values of weight whereas L2 penalizes the sum of square values of weight. When data is too complex to be modeled, the L2 technique is preferred and L1 is better if the data to be modeled is quite simple. However, L2 is more commonly preferred.
3. Data Augmentation: Data augmentation is nothing but creating more data samples using the existing set of data. For example, in the case of a convolutional neural network, producing new images by flipping, rotation, scaling, changing brightness of the existing set of images helps in increasing the dataset size and reducing overfitting.
4. Early Stopping: Early stopping is a regularization technique that identifies the point at which the model's generalization error begins to rise and it starts to overfit the training data. The algorithm stops training the model at that point.
5. Feature reduction: If we have a small number of data samples with a large number of features, we can prevent overfitting by selecting only the most important features. We can use various techniques for this such as F-test, Forward elimination, and Backward elimination.
6. Dropouts: In the case of neural networks, we can also randomly deactivate a proportion of neurons in each layer. This technique is called dropout and it is a form of regularization. However, when we use the dropout technique, we have to train the data for more epochs.
30. Differentiate between Batch Gradient Descent, Mini-Batch Gradient Descent, and Stochastic Gradient Descent.
Gradient descent is one of the most popular machine learning and deep learning optimization algorithms used to update a learning model's parameters. There are 3 variants of gradient descent.
Batch Gradient Descent: Computation is carried out on the entire dataset in batch gradient descent.
Stochastic Gradient Descent: Computation is carried over only one training sample in stochastic gradient descent.
Mini Batch Gradient Descent: A small number/batch of training samples is used for computation in mini-batch gradient descent.
For example, if a dataset has 1000 data points, then batch GD, will train on all the 1000 data points, Stochastic GD will train on only a single sample and the mini-batch GD will consider a batch size of say100 data points and update the parameters.
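The three variants can be contrasted on a toy least-squares problem where the only difference between them is how many samples feed each parameter update (the data and hyperparameters below are illustrative):

```python
import random

# Contrast batch, mini-batch, and stochastic gradient descent on a tiny
# least-squares problem: fitting y = w*x, whose true slope is w = 2.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

def grad(w, batch):
    # gradient of mean squared error over one batch
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train(batch_size, lr=0.05, epochs=200, seed=0):
    rng, w = random.Random(seed), 0.0
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch_size):
            w -= lr * grad(w, data[i:i + batch_size])
    return w

w_batch = train(batch_size=len(xs))  # batch GD: all samples per update
w_mini = train(batch_size=2)         # mini-batch GD: 2 samples per update
w_sgd = train(batch_size=1)          # stochastic GD: 1 sample per update
```

All three recover w = 2 here; the trade-off in practice is that smaller batches give noisier but cheaper updates, while full batches give stable but expensive ones.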
31. How can you make data normal using Box-Cox transformation?
The Box-Cox transformation is a method of normalizing data, named after the two statisticians who introduced it, George Box and David Cox. Each data point X is transformed with a power function: (X^λ − 1) / λ for λ ≠ 0, and log(X) for λ = 0. The procedure searches λ over a range, typically −5 to +5, and selects the value that best normalizes the data.
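A sketch of the transform itself (scipy.stats.boxcox both applies it and searches for the best λ by maximum likelihood; the data below is illustrative):

```python
import math

# Sketch of the Box-Cox power transform:
#   x -> (x**lam - 1) / lam   for lam != 0
#   x -> log(x)               for lam == 0   (requires x > 0)

def boxcox(x, lam):
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1) / lam

data = [1.0, 2.0, 4.0, 8.0, 16.0]          # strongly right-skewed
transformed = [boxcox(x, 0) for x in data]  # lam = 0 is the log transform
# Approximately [0.0, 0.69, 1.39, 2.08, 2.77]: evenly spaced,
# i.e. the right skew has been removed.
```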
32. What does P-value signify about the statistical data?
In statistics, the p-value is used to test the significance of a null hypothesis. A p-value lower than 0.05 suggests that, if the null hypothesis were true, there would be less than a 5% chance of observing results at least as extreme as those obtained, so the null hypothesis is rejected. A higher p-value, say 0.8, means the observed data are entirely consistent with the null hypothesis, so it cannot be rejected.
33. Why do we use A/B Testing?
A/B testing is a technique for understanding user experience. It involves serving two different versions of a product to separate groups of users and analyzing which version outperforms the other. The testing is also used to understand user preferences.
34. What is the standard normal distribution?
The standard normal distribution is a special kind of normal distribution in statistics with a mean of zero and a standard deviation of one. Its graph looks like the famous bell curve centered at zero. The distribution is symmetric around the origin and asymptotic to the horizontal axis.
35. What is the difference between squared error and absolute error?
In data science, mean squared error is more popular for understanding the deviation of the inferred values from the actual values as it gives relatively more weight to the highly deviated points and gives a continuous derivative which is useful for analysis.
36. What is the difference between skewed and uniform distribution?
A skewed distribution is a distribution where the values in the dataset are not normalized and the distribution curve is inclined towards one side. A uniform distribution on the other hand is a symmetric distribution where the probability of occurrence of each point is same for a given range of values in the dataset.
37. What do you understand by Recall and Precision?
For explaining Recall and Precision, it is best to consider an example of a confusion matrix.
Assume that the confusion matrix mentioned above represents the results of the classification problem of cancer detection. It is easy to conclude the following:
True Positives, No. of patients actually having cancer = 30
True Negatives, No. of patients that do have cancer = 28
False Positives, No. of patients that do not have cancer but the model predicted otherwise = 12
False Negatives, No. of patients that have cancer but the model predicted otherwise = 10
For such problem,
Recall = True Positives / (True Positives + False Negatives) = 30/40 = 0.75
The formula for recall clearly suggests that it estimates the ability of a model to correctly identify true positives, that is, the patients who are infected with cancer. To understand it better, take a careful look at the denominator which is nothing but the total number of people possessing cancerous cells. Thus, a recall value of 0.75 suggests that the model was able to correctly identify 75% of the patients that have cancer.On the other hand, Precision = True Positives / (True Positives + False Positives) = 30/42 = 0.71
The formula for Precision suggests that it reflects how many times the model is successful in deducing True positives wrt the false positives. Thus, the number 0.71 suggests that whenever the model predicts a patient has cancer, the chances of making a correct prediction are 71%.
38. What is the curse of dimensionality?
High dimensional data refers to data that has a large number of features. The dimension of data is the number of features or attributes in the data. The problems arising while working with high dimensional data are referred to as the curse of dimensionality. It basically means that error increases as the number of features increases in data. Theoretically, more information can be stored in high-dimensional data, but practically, it does not help as it can have higher noise and redundancy. It is hard to design algorithms for high-dimensional data. Also, the running time increases exponentially with the dimension of data.
39. What is the use of the R-squared value?
The r-squared value compares the variation of a fitted curve to a set of data points with the variation of those points wrt the line that passes through the average value. It can be understood with the help of the formula
R2 = [Var(mean) - Var(model)] / Var(mean)
It is obvious that the model is likely to fit better than the average line. So, the variation for the model is likely to be less than the variation for the line. Thus, if the r-square has a value of 0.92, it suggests that the model fits the data points better than the line as there is 92% less variation. It also shows that there is a strong correlation between the feature and target value. However, if the r-squared value is less, it suggests that the correlation is weak and the two variables are quite independent of each other.
40. What do you understand by Hypothesis in the content of Machine Learning?
In machine learning, a hypothesis represents a mathematical function that an algorithm uses to represent the relationship between the target variable and features.
41. How will you tackle an exploding gradient problem?
By sticking to a small learning rate, scaled target variables, a standard loss function, one can carefully configure the network of a model and avoid exploding gradients. Another approach for tackling exploding gradients is using gradient scaling or gradient clipping to change the error before it is propagated back through the network. This change in error allows rescaling of weights.
42. Is Naïve Bayes bad? If yes, under what aspects.
Naïve Bayes is a machine learning algorithm based on the Bayes Theorem. This is used for solving classification problems. It is based on two assumptions, first, each feature/attribute present in the dataset is independent of another, and second, each feature carries equal importance. But this assumption of Naïve Bayes turns out to be disadvantageous. As it assumes that the features are independent of each other, but in real-life scenarios, this assumption cannot be true as there is always some dependence present in the given set of features. Another disadvantage of this algorithm is the ‘zero-frequency problem’ where the model assigns value zero for those features in the test dataset that were not present in the training dataset.
43. How would you develop a model to identify plagiarism?
Follow the steps below for developing a model that identifies plagiarism:
Tokenise the document.
Use the NLTK library in Python for the removal of stopwords from data.
Create LDA or SDA of the document and then use the GenSim library to identify the most relevant words, line by line.
Use Google Search API to search for those words.
44. Explain the central limit theorem.
The central limit theorem says that if someone collects a large number of samples of a population, the distribution spread of their mean values will obey the curve of a normal distribution curve irrespective of the distribution each sample obeys.
45. What is the relevance of the central limit theorem to a class of freshmen in the social sciences who hardly have any knowledge about statistics?
The most important consequence of the central limit theorem is that it reveals how nature likes to obey the normal distribution curve. It allows experts from various fields like statistics, physics, mathematics, computer sciences, etc. to assume that the data they are looking at obeys the famous bell curve.
46. Given a dataset, show me how Euclidean Distance works in three dimensions.
The formula for evaluating euclidean distance in three dimensions between two points defined by coordinates (x1,y1,z1) and (x2,y2,z2) is simply given by
Distance = _/ (x1-x2)2 + (y1-y2)2 + (z1-z2)2
It simply represents the length of a line that connects the two points in a three-dimensional space.
47. In experimental design, is it necessary to do randomization? If yes, why?
Yes, it is necessary to use randomization while designing experiments. By randomization, we try to eliminate the bias as much as possible. The main purpose of randomization is it automatically controls for all lurking variables. Experiments with randomization establish a clearer causal relationship between explanatory variables and response variables by having control over explanatory variables.
48. What will be the output of the following R programming code?
var2<- c("I","Love,"ProjectPro")
var2
It will give an error.
49. Find the First Unique Character in a String.
def frstuniquechar(strng: str) -> int:
# Lowercase
strng = strng.lower()
# Here is a dictionary that will contain each unique letter and its counts
c = {}
#Iterating over every letter in the string
for letter in strng:
# If can’t find the letter in dictionary, add it and set the count to 1
if letter not in c:
c[letter] = 1
# If can’t find the letter in dictionary, add 1 to the count
else:
c[letter] += 1
#Iterating the range of string length
for i in range(len(strng)):
# If there's only one letter
if c[strng[i]] == 1:
# Return the index position
return i
# No first unique character
return -1
# Test cases
for s in ['Hello', 'Hello ProjectPro!', 'Thank you for visiting.']:
print(f"Index: {frstuniquechar(strng=s)}")
50. Write the code to calculate the Factorial of a number using Recursion.
def fact(num):
# Extreme cases
if num< 0: return -1
if num == 0: return 1
# Exit condition - num = 1
if num == 1:
return num
else:
# Recursion Used
return num * factorial(num - 1)
# Test cases
for num in [1, 3, 5, 6, 8, -10]:
print(f"{num}! = {fact(num=num)}")
Out of L1 and L2 regularizations, which one causes parameter sparsity and why?
List the differences between Bayesian Estimate and Maximum Likelihood Estimation (MLE).
Differentiate between Cluster and Systematic Sampling?
How will you prevent overfitting when creating a statistical model?
Explain the range function.
How can you freeze an already built machine learning model for later use? What command you would use?
Differentiate between func and func().
Write the command to import a decision tree classification algorithm using sklearn library.
What do you understand by pickling in Python?
How can you ensure that you don’t analyze something that ends up producing meaningless results?
Understanding whether the model chosen is correct or not. Start understanding from the point where you did Univariate or Bivariate analysis, analyzed determine whether the resultant models are similar and are performing well.
By looking at the p-value, by looking at r square values, by looking at the fit of the function, and analyzing as to how the treatment of missing value could have affected- data scientists can analyze if something will produce meaningless results.
- Gaganpreet Singh, Data Scientist
So, there you have over 120 data science interview questions and answers for most of them too. These are some of the more common interview questions for data scientists around data, statistics, and data science that can be asked in the interviews. We will come up with more questions – specific to language, Python/ R, in the subsequent articles, and fulfill our goal of providing 120 data science interview questions PDF with answers to our readers., Spark or any other big data technology ensure that you can back this up but if you are not strong in a particular area do not mention it unless asked about it. The above list of data scientist job interview questions is not an exhaustive one. Every company has a different approach to interviewing data scientists. However, we do hope that the above data science technical interview questions elucidate the data science interview process and provide an understanding of the type of data scientist job interview questions asked when companies are hiring data people.
We request industry experts and data scientists to chime in their suggestions in comments for open-ended data science interview questions to help students understand the best way to approach the interviewer and help them nail the interview.
Related Posts
Python Data Science Interview Questions
Data Science Interview Questions for R
Data Scientist Interview Questions asked at Top Tech Companies
Data Analyst Interview Questions
Prepare for your Next Machine Learning Job Interview with Commonly Asked NLP Interview Questions and Answers
Prepare for Your Next Big Data Job Interview with Kafka Interview Questions and Answers
Access Job Recommendation System Project with Source Code | https://www.projectpro.io/article/100-data-science-interview-questions-and-answers-for-2021/184 | CC-MAIN-2022-40 | refinedweb | 4,978 | 53.92 |
Background
One of the biggest pieces of feedback we received from the N-Tier Improvements for Entity Framework post, as well as from other sources, was: “low level APIs are great, but where is the end-to-end architecture for N-tier and the Entity Framework?”. This post outlines some of the additional feedback we’ve received and describes the self-tracking entities architecture that will ship alongside Visual Studio 2010 and .NET Framework 4.0.
Self-tracking entities know how to do their own change tracking regardless of which tier those changes are made on. As an architecture, self-tracking entities falls between DTOs and DataSets and includes some of the benefits of each.
Drawing from DataSet
DataSet on the client tier is very easy to use because there is no need to track changes separately or maintain any extra data structures that include change tracking information. DataSet takes care of serializing state information for each row of data. On the mid tier, applying the changes stored within a DataSet is straightforward. DataSets have also gained popularity because of the number of tools that work with DataSets, and because they are easily bound to many UI/presentation controls. Since they work in so many scenarios, for many applications there is never a need to transform data outside of a DataSet, allowing a single paradigm to be used up and down the stack.
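The kind of DataSet-returning service method being discussed might look like the following sketch. Only real ADO.NET APIs are used, but the method name, connection string, and SQL text are placeholders, not anything from the shipped bits:

```csharp
using System.Data;
using System.Data.SqlClient;

public class CustomerDataService
{
    private readonly string connectionString; // assumed to be configured elsewhere

    // Illustrative only: a service method whose contract is just "a DataSet".
    public DataSet GetCustomerData(string customerID)
    {
        var data = new DataSet();
        using (var connection = new SqlConnection(connectionString))
        {
            var adapter = new SqlDataAdapter(
                "SELECT * FROM Customers WHERE CustomerID = @id", connection);
            adapter.SelectCommand.Parameters.AddWithValue("@id", customerID);
            adapter.Fill(data, "Customers");
            // Nothing in the contract expresses whether Orders and OrderDetails
            // tables should also be filled and related; that intent stays implicit.
        }
        return data;
    }
}
```

Note that the return type alone tells the caller nothing about which tables, rows, or relationships the DataSet will contain.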
It is extra work to ensure that the DataSet that is returned contains only rows of data that pertain to “customers”. It is also more difficult to specify a service contract that says the DataSet is also supposed to return data for each customer’s orders and their order details.
Drawing from DTOs
DTOs and SOA are used to give the developer more control over the service contract and payload for tier-to-tier communication. DTOs themselves do not have behavior, so they are typically very simple classes designed just to provide the needed information to perform a specific service operation. Not only does this provide the opportunity to optimize the wire format (some believe a DTO is all about the wire format), but it makes it possible and easy to capture the intent of each service method. The data contract used with DTOs is typically interoperable, which makes it easy to use services that run on different platforms. DTOs also provide a way to separate messaging contracts from the presentation layer, the business logic, and the persistence layer, which in many cases creates a maintainable architecture.
With the .NET Framework 3.5 SP1, this sort of solution was very hard to implement using the Entity Framework because change tracking was always done by a centralized ObjectContext, which contains an ObjectStateManager. In particular, “reattaching” to report changes back to the ObjectStateManager was all but impossible without completely shredding your entity graph and applying changes one entity and one relationship at a time. The new API changes being added to the Entity Framework in .NET Framework 4.0 are enablers for an easier experience of reporting changes back to the ObjectStateManager, making self-tracking entities (as well as other architectures) easier to build.
Using self-tracking entities on the mid-tier is about working with entity graphs and the Entity Framework. The service contract that is used in the following examples contains two simple methods for retrieving a Customer entity graph and applying updates to that entity graph:
interface ICustomerService
{
Customer GetCustomer(string customerID);
bool UpdateCustomer(Customer customer);
}
The implementation of the GetCustomer service method can be done using an Entity Framework ObjectContext and using LINQ or query builder methods to retrieve the entity you want. In the example below, a query is issued for a particular customer entity and all of the Orders and OrderDetails are included.
public Customer GetCustomer(string customerID)
{
using (NorthwindEFContext context = new NorthwindEFContext())
{
var result = context.Customers.
Include("Orders.OrderDetails").
Single(c => c.CustomerID == customerID);
return result;
}
}
UpdateCustomer is an example of how to save the changes that are made to a graph of self-tracking entities. Similar to DataSet, applying these changes to the persistence layer and saving them should be simple. In the example below, a new Entity Framework API, “ApplyChanges”, is used, which understands how to interpret the change tracking information that is stored by each entity and how to tell the ObjectContext’s ObjectStateManager about those changes.
public bool UpdateCustomer(Customer customer)
{
    using (NorthwindEFContext context = new NorthwindEFContext())
    {
        context.Customers.ApplyChanges(customer);
        context.SaveChanges();
        return true;
    }
}
Client Experience
The client experience when working with self-tracking entities is similar to how you would manipulate any object graph. You can make changes to scalar or complex properties, add or remove references, and add or remove from collections of related entities. The key part of the experience is that tracking changes is hidden from the client because it is done internally on each entity. There is no ObjectContext and there is no extra state that has to be maintained or passed from client to the tier that does persistence.
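A minimal client-side sketch of these operations follows. The entity and property names come from the Northwind model used above; the WCF proxy type name (`CustomerServiceClient`) is an assumption:

```csharp
using System.Linq;

// Retrieve the entity graph from the service.
var proxy = new CustomerServiceClient();
Customer customer = proxy.GetCustomer("ALFKI");

// Change a scalar property; the entity flips itself to the Modified state.
customer.ContactName = "Maria Anders";

// Remove a related entity; the removed relationship is recorded in the graph.
Order oldest = customer.Orders.First();
customer.Orders.Remove(oldest);

// Round-trip the whole graph; no ObjectContext exists on the client.
proxy.UpdateCustomer(customer);
```

Every mutation above is captured inside the entities themselves, so the client carries no separate change-tracking state.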
The next example shows how to add a new Order with two OrderDetails to an existing customer. By default, the constructor of a self-tracking entity puts the entity in the “Added” state. There are convenience methods on each self-tracking entity to change this state if needed (Delete() and SetUnchanged()).
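The scenario just described can be sketched as follows. Again, the entity shapes follow the Northwind model used above, and the proxy type name and OrderDetail property names are assumptions:

```csharp
using System;

var proxy = new CustomerServiceClient();
Customer customer = proxy.GetCustomer("ALFKI");

// New self-tracking entities start in the Added state by default.
Order order = new Order { OrderDate = DateTime.Now };
order.OrderDetails.Add(new OrderDetail { ProductID = 1, Quantity = 2 });
order.OrderDetails.Add(new OrderDetail { ProductID = 2, Quantity = 5 });

// Adding the order to the customer records an added relationship as well.
customer.Orders.Add(order);

// On the service tier, ApplyChanges will see one added Order, two added
// OrderDetails, and the new relationships between them.
proxy.UpdateCustomer(customer);
```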
Generating Self-Tracking Entities
Design Notes
Inside a Self-Tracking Entity
There have not been final decisions about what exactly will be included inside of a self-tracking entity, but it is important to track:
- The state of the entity, Added, Deleted, Modified, or Unchanged
- The original value from reference relationship properties
- Adds and removes from collection relationship properties
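To make the list above concrete, here is a hypothetical sketch of the tracking state a generated self-tracking entity might carry. The member names are assumptions for illustration, not the shipped code-generation template:

```csharp
using System.Collections.Generic;

// Possible states of a self-tracking entity.
public enum ObjectState { Unchanged, Added, Modified, Deleted }

public partial class Customer
{
    // New self-tracking entities start in the Added state.
    private ObjectState _state = ObjectState.Added;

    // Original values of reference navigation properties, keyed by name,
    // so the service tier can detect changed references.
    private readonly Dictionary<string, object> _originalReferences =
        new Dictionary<string, object>();

    // Entities added to or removed from collection navigation properties.
    private readonly List<object> _addedRelationships = new List<object>();
    private readonly List<object> _removedRelationships = new List<object>();

    // Convenience methods mentioned earlier in the post.
    public void Delete() { _state = ObjectState.Deleted; }

    public void SetUnchanged()
    {
        _state = ObjectState.Unchanged;
        _originalReferences.Clear();
        _addedRelationships.Clear();
        _removedRelationships.Clear();
    }
}
```

Property setters in the generated class would flip `_state` to Modified and record relationship changes automatically, which is what allows ApplyChanges on the service tier to replay them into the ObjectStateManager.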
This post is part of the transparent design exercise in the Entity Framework Team. To understand how it works and how your feedback will be used please look at this post.
Sounds great! Any plans for a beta/ctp?
Sounds great to me too. This is the most important feature to me since working with the entityBag or any other solution led to a big BIG mess…
although i didnt understand how the "self-track-entities" will integrate with poco 🙁
@onemenny , @jaroslaw how about making "self-tracking" an inheritable base class (for poco).
e.g.:
[Entity]
public class MyEntity : ChangeTrackedEntity
{
[Member("ID")]
public int id { get;set;}
}
Kristofer – i think it better be an interface of some kind or a wrapper class (injection), since it can mess with your domain model…
Fantastic news!
It seems to me that this would be best implemented in an interface. In that way it could be used more generally and even by other ORMs.
It could be really handy for generic change logging?
These DTOs with self-tracking are not inherited from a special base class? These DTOs for the client should know nothing about the framework they will be persisted with.
What about a kind of client-side tracking collection? A kind of EntityBag? This collection knows all entities and tracks all changes client side. For updating the entities, this collection calls several service functions for updating, saving and deleting.
Also see.
Thank you,
Marco
Jeff, the scenario-based overview sounds great but I think a bit of elaboration is needed on how this would work with POCO.
Based on what I have seen for the vNext release I would like to
1) poco entities that are self-tracking
2) shared library of these poco objects across tiers
3) use the new "agile" method of creating an object context using attributes instead of .edmx
How would self-tracking entities work in this scenario?
This is exactly what we need. Looking forward to seeing this.
Excellent!
However we also need Validation capabilities
And the ability to chain specific commands to Save.
I.e. I have an invoice
I need to validate the invoice, the invoice line items.
This should not fail with an exception, but report the error and allow all data-bound objects to automatically report the errors.
Further, once the invoice is saved, I need to check the state of the invoice. If it’s set to post, I then need to post the G/L entries for the invoice to the database. This has to be done only on a successful save, nowhere else.
There should be events for all of these things in this model:
Saving (CancelableEventArgs)
Saved(EventArgs)
If I had this, along with the ability to easily inherit from my own base class, I would be set (mostly)
@James Hancock
James, I think a "ChangeInterceptor" similar to what is in ado.net data services would work for this.
@onemenny, @kristofera
We want self-tracking entities to be POCO entities in that they have no special dependency on the Entity Framework or any of its base classes. In .NET 3.5 SP1 the only options you had were to derive your entities from our EntityObject class or to implement interfaces that are specific to the Entity Framework (IEntityWithRelationships, for example). That is decidedly not POCO because you are forced to have a dependency between your domain classes and the EF, which depending on your scenario may not be desirable. So having self-tracking entities require a base class or an interface that is part of the Entity Framework assembly would fall into the “not very POCO” category, and we want to avoid that.
It is likely that we will have an interface for self-tracking entities, but the interface would exist as part of your code-generated template for self-tracking entities so the interface would be specific to the assembly containing your domain classes, not the EF assembly. However, that may not be POCO enough so we are exploring other options as well, including using some dynamic way of “discovering” certain properties on your entity that contain change tracking information. We could also look at different dependency injection techniques (such as dynamically generated proxies), but those can be very hard to deal with in serialization scenarios. I will follow-up with a post on what we can expect the entities to look like, but for now the important point is that we will not require an interface or base class that is part of the Entity Framework to use self-tracking entities.
@lynn
Our hope is that you’ll be able to accomplish those three things. Self-tracking entities do not have a dependency on the Entity Framework, so they are POCO in that way. However, there still needs to be logic in that class that can do the self-tracking bit. Our goal would be that this is simple enough to write and could be used with agile approaches, even ones where you may not have a model (or database) yet. If you already have a model, then you can use the T4 template that will generate self-tracking entities to get started.
@James
In the next release, the ObjectContext’s SaveChanges method will be virtual so you’ll be able to do this kind of validation in your ObjectContext class.
@Jeff
Jeff, looking forward to your follow on.
For my two cents, anything you can make happen with the use of attributes is a plus.
@lynn,
One dilemma with using attributes is that the attribute classes need to be defined somewhere which causes a dependency issue…
– Danny
I’d like to second James’ request for entity-specific partial methods or events when saving an entity. Having "OnSaving" and "OnSaved" hooks to tie in logic when a single specific type of entity is saved and you want to take some additional action (like saving an audit record or recalculating a cached value) would be really useful. Having the ability to hook in to this type of event for each entity class could be preferable to overriding the ObjectContext’s catch-all SaveChanges method to implement this type of entity-specific logic.
We’ve used the OnValidate method in Linq2Sql quite a bit and wish we had something similar for OnSaving and OnSaved to help complete the validation picture.
Speaking of which, it could also be handy to be able to somehow get access to the current ObjectContext from within an entity. So for example:
OnValidate(EventArgs e) {
// validate that the email isn’t already in use
var existingCustomer = this.CurrentObjectContext.Customers.SingleOrDefault(o => o.Email == this.Email);
if (existingCustomer != null) throw new ValidationException("A customer with that email already exists!");
}
OR
OnSaving(EventArgs e) {
// add a log of this transaction
e.ObjectContext.AuditLogs.Add(new blah);
}
Perhaps if it isn’t possible from anywhere within an entity, it could at least be made accessible from the event/method arguments for certain things like validation.
This would allow you full access to the database when an entity is validated or saved, which would be extremely useful in certain architectures.
I need to do more reading to understand the new EF.
Thanks MS for making the process more transparent.
But as a LINQ 2 SQL guy, things like this scare me
Include("Orders.OrderDetails")
Why are there strings in Data Tier like this? this seems like a C#3 fail to me. Surely you can use lambda expressions to capture these includes and memoize them for speed.
@ Danny
Understood about attributes.
Since this is a major release release might it be possible to get a set of generic enough attributes for your purposes somewhere in the system dll, perhaps under System.ComponentModel or sub namespace?
Vijay, we agree that lambda expressions could provide a nicer syntax for Include (although for multi-level paths, the lambda-based syntax becomes more complicated than the string version). In any case, at this point it doesn’t look like this improvement will make it into .NET 4.0. We appreciate your feedback.
@Diego
That’s too bad, it would be really nice to have lambda-based syntax for Include. I thought getting rid of "magic strings" and having compile-time checking of queries was one of the main purposes behind LINQ. 🙁
I’m definitely another person who would love to see strongly typed Include statements make it in sooner rather than later. It has always seemed out of place using strings there.
A change tracking mechanism would be very nice in Entity Framework but I venture to doubt that the solution of self tracking entities can solve the problem.
Tracking properties on entities would be a good start, but who is responsible for setting these properties?
Self-tracking entities must have behavior to track their own change state.
Can you tell me how this can work without a shared library of these POCO/self-tracking entity objects across tiers when using a modern technology like web services or WCF over metadata contracts?
And what about interoperability? How can a Java client consuming a web service track the change state of the entities?
I think you should change the name of the Delete method to MarkAsDeleted.
Based on my experience with EF on a real project, it’s a very good approach. One of the problems we worked hard to solve was dealing with long-lived entities that have to survive across a few HTTP requests. It’s not a good solution to keep an EF context alive that long, so we found workarounds. Now it’s not a problem anymore.
However, there is one question. If such an entity is formally "detached" from the context, how will the lazy loading (implicit or explicit) work? Based on your example with Include("Orders.OrderDetails") it looks like we will need to create the whole graph we are going to work with from the database. If that's incorrect, I can see 2 solutions:
1) Implicit creation of the EF context and filling the navigation properties (then what about POCO?)
2) Explicit passing the EF context to some sort of lazy loading method and using it
Which way (or probably the third one) would you use to implement this?
Thank you.
The ADO.NET team continues its experiment in publicly designing the next version of the Entity Framework
Will there be support for filtering entities while traversing relationships? (E.g. Ability to add lambda-based filtering for the Load method).
Consider this Relationship:
Customers -> Orders -> OrderDetails -> Products
Use Case: Get all related entities for the following conditions: All customers in a particular zipcode; select their orders that were created within the last 6 months; For those orders, select only those order details with amount > $100; for these order details, select the corresponding products. (This filtering could be done in memory, but the data set size may be too prohibitive)
In EF V1.0, there are a few ways to accomplish this, but they seem to either result in multiple trips to the database, or involve multiple steps (fetch more data than needed and then remove it), or involve repeating all the conditions for each of the predecessors in the children’s LINQ queries. (In one sense, the problem could be defined as trying to fetch related entities for objects already in the entity context: applying additional filters and getting related entities in one trip to the database per related entity type.)
The main explicit methods available in 1.0 for getting related entities are Include and Load. Both of them don’t seem to support additional filtering. I would prefer to get this filtering support added to either the Load method or to the child’s LINQ query. (the include method is less preferrable because of the use of strings).
Interesting –
I have a large project implemented using self-tracking entities. These entities travel between tiers using web services. While it’s a viable solution, one pitfall that always comes up is that after doing an update, a new object comes down from the server. All references then need to be updated. This becomes really challenging with data binding and events. Events have to be non-serialized, and you have to have compensating code to reconfigure them.
I am intrigued by the RIA Services solution to some of these problems. I am sure there are some new pitfalls in that solution, but have you guys considered looking at it for ideas? Especially since it’s an MSFT technology?
@Roger
For solutions using self-tracking entities, the entity definition itself contains information on what changes have been made. So the self-tracking entity definition must exist on both the client and the service. The entity definition does not contain any reference to a particular persistence framework; the change tracking information is something like a List<object> AddedRelationships, where “object” is an Order entity, or a Product entity, or some other entity that can be related. There is logic inside the self-tracking entity that records these changes automatically whenever properties change. The client can make changes to a property any way they want: directly, through data binding, etc., and each change triggers the self tracking entity to change state, or record a new or removed relationship.
Self-tracking entities are also interoperable because there is a very clean WSDL description of each entity containing the current entity property values plus the change tracking information which is likely to consist of a state value (Add, Modified, etc.), as well as the state of various relationships to other entities (AddedRelationships: this with reference to O1). Java clients could produce a similar payload to send to services that accept self-tracking entities as part of their data contract. As with any interoperable solution, the non-.NET client would need to actually mimic that payload. One of our goals is to make the payload as clear as possible so that this can be done easily.
@Eugene
The problem you describe is something that is not baked into the self-tracking entities pattern directly. The issue boils down to being able to effectively do identity resolution on your client for round tripping. This is needed for many operations and is especially important when “bulk refreshing” your data binding is not possible (or, as you said, hooking up events). There are some other solutions we are thinking about, such as an evolution of EntityBag, that give you this sort of capability. If you are interested I can give you more details here. We are also working with the RIA Services teams to make sure that entities work well with their client-side change tracker.
@Merle
Lazy loading requires a way to inject knowledge of an active context into an entity. On the service tier this is possible because this is logically where you’d want to create contexts, query for entities, etc. When you define your self-tracking entities, our cue to turn on lazy loading is to make the navigation properties virtual (references and collections to related entities). You are correct that using lazy loading in a detached setting is difficult. This wouldn’t work on the client because in many cases you do not to create active contexts on this tier that provide a direct line of sight to your database. However, once you return the entity to the service tier, you can attach it to a context (context.Customers.Attach(…)) and perform lazy load. Are you asking about doing lazy load on the client?
@Jeff
> Are you asking about doing lazy load on the client?
Not actually. It’s clear that it’s impossible to get the lazy loading work on the client when you don’t have the ObjectContext at all (except additional query to the server). I was asking about the server-side solution. I thought when you create the self-tracking entity it is automatically detached from the ObjectContext it was created with to avoid any side-effects and be a POCO object. Is it correct assumption? If it’s correct then it might be difficult to work with this entity using entity.Collection.Load() if you haven’t load it’s navigation collections when you get this entity from the context. That’s why I asked about the possibility to inject the different ObjectContext to the entity so it can be used to retrieve this data afterwords or some other solution you came up for this situation. Could you please tell me whether my assumption is correct and what would you suggest for such scenario? Thank you and keep up a great work!
@Jeff –
Thanks for the reply. If you are willing to share more info on Entity Bags I would be interested in learning more.
As for you analysis of my problem, I agree, its larger than just self tracking entities. However, from the reading/playing that I have done, I think the RIA services team is delivering more than self tracking entities as well.
They are using code generation to create "client side" objects and then have some base classes that move data back and forth. This means that the actual objects no longer need to travel back and forth, which makes the event hookup / data binding / reference updating easier.
These client side objects do track their state though, and there can still be shared code between the client and the server to do validation and business rules.
I guess it just seems to me like you guys are both implementing pieces of an ntier solution. I think the 2 can plug in and play.
Eugene.
Today I was looking at a post in the forums where someone asked a very natural and common question about
One quick question, while I’m wrapping my mind around this post…
What is the distinction between "ApplyChanges" and "SaveChanges"? Is it the difference between persisting to the model and persisting to the database?
@Remi,
Yes, the distinction between ApplyChanges and SaveChanges is something like that. The context has an ObjectStateManager which keeps track of changes that need persisted to the database and provides that information to the rest of the EF. Since the context and state manager don’t serialize with the entities, we need a way to push changes from the self-tracking entities into the context so that when you call SaveChanges the database will be updated.
– Danny
I agree that it looks like an other approach to ntiers solution than the one of RIA services team. I hope it’s not going to be like EntityFramework/LINQToSql competition !
Questions :
Is the RejectChanges() scenario will be supported by your SelfTrackableEntities ?
Is the validation messages will be also supported. TrackableEntities implementing IDataErrorInfo that read messages in the inherited tracking info datamember (filled by the server that has the business logic to apply the validation rules ) ?
By the way, we really need a IDataErrorInfoEx that includes the notion of severity of the message (error/warning/info)
For the include statement that should be in strongly typed Include statements instead of "string" we use the following solution :
O último post do time do Entity Framework é bem interessante no contexto dos assuntos discutidos neste
I’d love a mix of DTOs and tracking. It can be one more option for the next version.
DTO’s can protect us against errors at the client side, like updating a field that should be not reacheable in that context. That’s what OO is all about.
regards
Building applications that work across tiers is a core part of many application architectures. In .NET
I received a mission to explore entity framework as a DAL for a new project, this project is intended
This framework sounds very similar to the CSLA.NET. Is this true?
While I can see the usefulness of self-tracking entities in a pure Microsoft stack, what options to we have for managing change tracking in a true SOA environment with a hybrid of different technologies, including .NET, Java, AJAX calls from web clients, etc?
Self-tracking entities are great when your always working with .NET on both sides of the wire…but more and more, the client tends to be something else…Java or Javascript/AJAX most frequently in my experience.
Is our only option to attach the entity as fully changed, and save the whole thing? Or is there a more efficient option?
@Jon,
While self-tracking entities are aimed at making especially easy scenarios where .Net is available on the client, we have also been giving some thought to interop. The intention is that the serialization XML for the entities (including the tracking information) will be very clear and easy to work with in other platforms. You will have to write client code which does tracking and supplies the tracking info ina compatible format, but that should be possible.
We have been debating about just how much change tracking information to include. Certainly we will include information about the state of each entity and about the relationships between them, but we originally thought we would avoid tracking exactly which properties of an entity were modified and instead just plan to send the whole entity to the database each time (very reasonable in many situations but not for some)..
– Danny
This post runs through a simple demo of the new N-Tier focused APIs in the in ADO.Net Entity Framework bits (EF4) that are included in Beta 1 of Visual Studio 2010 / .Net Framework 4.0. For this d …
This post runs through a simple demo of the new N-Tier focused APIs in the in ADO.Net Entity Framework bits (EF4) that are included in Beta 1 of Visual Studio 2010 / .Net Framework 4.0. For this d …
@Dan Simmons,
Thanks for the reply Dan. Good to know your trying to keep change tracking open. While I personally prefer to work in an entirely .NET stack, other individuals at a higher pay grade often take that decision out of my hands. I’ve been both lucky (expanding my horizons), and unlucky (having to deal with cross-platform implementation issues and complexity), enough to be working in a hybrid .NET/Java environment these days. So an open, simple format is good news.
In response to something you mentioned in your reply, I am curious if you could expand upon it a bit more. If the intention with self-tracking is to provide a simple and open format that any client could use to transfer state-change details back to a service with…then the following statement confuses me a little:
. <–"
When it comes to…disconnected change tracking, is the concept of "original value" really useful, or is it just a waste of space and bandwidth? We have a full entity, and it will be tracking its own changes. I don’t think that knowing the original value as well as the current value matters as much as knowing that the value changed. Differentiating between the current value and a value that is being set, and not flipping that "changed" bit in the event the two values are equal, should be the responsibility of the self-tracking entity its elf (regardless of platform). Correct?
We can bloat the self-tracking entity message schema to track original values, but it seems more logical to simply track a map of which values have changed. The end goal is to overwrite whatever value is in the database anyway, so the original value has no meaning. A change conflict could be detected by a database-level version (i.e. SQL Server rowversion/timestamp), thus preventing last-in-wins, correct? Is there something I am missing that would make original values useful in the self-tracking context?
To expound a bit more on why I think tracking original values could be wasteful (because I’m sure some will say it wouldn’t be that much space/bandwidth.) I am thinking in the context of strings mostly. For example, if I have a Blog entity, which contains an author id, date posted, a list of tags (string, but generally small), and the contents of the blog post itself. Tracking original values for the author id, date, and tags really isn’t a huge deal. However, tracking the original value for the contents means I am likely doubling the size of my self-tracking entity message, when all I really need is to know that the contents did, indeed, change. A simple boolean along the lines of: <changes><property name="Body"/><changes> would do the job, and keep things compact. This goes for pretty much any large string value, which occur a lot in things like forum posts, product descriptions, email messages, blogs, digital newspapers (pretty much all text), etc.
Sorry for not clarifying in my previous post.
@Jon,
Well, you are right that sometimes keeping around the original value is not nearly so interesting as just knowing that the column has changed.
It turns out that this is a rather tricky topic. Sometimes we just want to know a column has changed (like your scenarios above where the original value isn’t interesting and the values are large so keeping the original value is wasteful), while other times we really do need to know the original value, and still other times we need the original value, but it’s very likely that anyone would change the value since it’s a server-generated rowversion or something so keeping the original value is a waste and we can just treat the current value as the original value.
The tricky ones are the middle case (where we really need the original value), and they come up in two scenarios. The first one is when you have a concurrency value that is determined on the client rather than the server. In this case you need the original value to check for concurrency conflicts, but you also need the current value to set the property to if there isn’t a conflict. This is unusual because it’s hard to do this correctly, usually you want the concurrency value to be automatically generated by the server. The second case is a bit more obscure, but it is possible to create certain mappings where the value of a property can change how the entity is persisted to the database, and in these cases we need the original values so we know how to undo the old persistence and then do the new one–imagine for instance that depending on a value an entity might be persisted in part to one table or to another table. If the value changed, then we would need to remove a row from the old location and add a row to the new one.
It is possible for us to determine most if not all of this information at code generation time from the metadata, so we could generate classes that are more precise in the way that they handle original values to sometimes store them and other times just store a bit indicating that something has changed, but then we start getting into cases where interoperability becomes harder because the wire format becomes more complex. Until now we have favored either going the route of never storing original values which makes for a simple, efficient wire format and simple client classes but rules out some mapping scenarious and can produce less efficient database updates, or always toring original values which still keeps the wire format and client classes relatively simple, and allows for efficient database updates, but makes the wire format less efficient.
Neither of the options seems perfect, but it may be that one or the other is best in light of trying to keep the wire format simple. It’s always possible to take our self-tracking template and modify it for more specific needs or to avoid code gen altogether and write DTO classes by hand to handle n-tier for those cases where you need the most flexibility and efficiency for a particular sceanrio.
– Danny
Building applications that work across tiers is a core part of many application architectures
@Dan,
Thanks for the clarification. It definitely sounds like a more complex issue that it appears at first glance. I think message efficiency is a very critical factor, especially considering some of the scenarios I mentioned (which, if the "always use original values" route is taken, could effectively halve the overall message throughput in those scenarios.)
I think flexibility is key here…and if there is anything we have all learned from EF v1.0, its that the community needs a great breadth of flexibility to accommodate existing needs. It may make the wire format a bit more complex, but I think neither option 1 (never use original values) nor option 2 (always use original values) is going to be sufficient to meet the needs of everyone involved (including the EF team, who needs to support as many mapping scenarios as possible.)
While I love to use simple formats whenever possible, if a little added complexity will give me the ability to tune my efficiency, I’ll take that complexity every time. Limiting my options and not giving me the ability to tweak and tune (i.e. go from just using simple flags to a hybrid of flags and original values where required) is a sure-fire way to make me abandon the rigid, inflexible framework in favor of a more flexible one, or build something in-house that meets my needs.
Code generation has been mentioned several times. I’m wondering if it is still expected that you are generating your data model from an existing database, or, is model first supported? i.e. I want to define my database by defining classes and then generating the database from that, a la Hibernate. Is this something that can be done in a pain free way, or, will I be going against the grain and having to implement a lot of methods needed by EF manually?
@Jon: I think the following entry from this blog will answer your questions:
Introduction My blog has been feeling very neglected after my move to the US and I thought with the recent
Hi!
Is there any release date for the self tracking entities T4 template, or any preview of it?
Thanks
@Ben,
We don’t yet have an official release date, but I can help you narrow it down a bit. It’s getting close, but it won’t be for a few weeks yet.
– Danny
Entity Framework 4.0 Beta 1(又称EF V2)与 .NET 4.0 Beta 1 一起发布,包含了一系列的重要改进:自定义代码生成、延迟加载、N层支持、POCO支持
This looks like it should solve one of the issues that I have had to handle in my code, but there are a couple of points that I would like to clarify.
As these are self-tracking entities, am I right in thinking that the query in GetCustomer should be a MergeOption.NoTracking query or alternatively that the entity graph needs to be detached manually ?
If I recall correctly (I don’t use NoTracking myself), NoTracking queries do not perform identity resolution (and hence there is a possibility of duplicate entities) and also they do not populate EntityReference EntityKey values that are not Included.
Also, am I right that Detaching an entity detaches only that entity and drops all the navigation properties ?
Graham,
Your recollection of the behavior of NoTracking queries and Detach is correct.
Self Tracking Entities, in fact, do not make use of either, at least in mainline scenarios. Instead, it is the work of serialization to create a snapshot of the full graph of tracked entities.
Hope this answers your question,
Diego
Thanks Diego,
just to confirm, are you saying that entities can be both self-tracking and attached to an object context ?
awesome, I’m just in the middle of the artcile but so excited 🙂 so I have just one question…
why woudn’t I want to use self-tracking entities, when it is so coool?
no more UOW problems and referencing it via many tiers with problems where to store it (session) for entities to be reatached later and many other problems than other mappers face…
here, I can code as Conan the Barbarian style, "opening the connection", e.g. "using" Context just when I want to store final entity, and that’s it.
I don’t need to have it opened all the time, e.g. without "using ctx = …" I can’t do any work right now.
Unless I am missing something this sounds too easy to be real 🙂 must be some drawbacks
Storing the original values also allows you to handle concurrency issues. So if two users update the same Contact record, the last should fail with something like a RecordChangedByAnotherUser exception.
It would be nice if EF allowed you to specify the WHERE clause for the update. So for example, typically
UPDATE contact SET name=:name WHERE id=:id
what is really needed for concurrency is:
UPDATE contact SET name=:name WHERE id=:id AND lastModified=:lastModified
The Borland ClientDataSet had an option for this:
– UpdateWhereKey – only use the Pk
– UpdateWhereAll – put all the old values in the where
– UpdateWhereChanged – only put changed fields in the where. I don’t particularly like this, because two simultaneous updates could occur producing a result that actually violates business rules.
– UpdateSpecify – specify the set of fields to include in the where.
I am with cowgaR. Am I missing something? Also, what is wrong with IEditableObject as the interface?
I agree with Vijay Santhanam and ManojR. Support for filtering when doing Load or Include is one of the things we really missed in EF v1.
How do you recommend one solves such filtering problems?
Thanks for writing this article.
I must admit, overall I am quite disappointed with Microsoft and in particular the EF group within Microsoft for not releasing more documentation that demonstrates how to use the EF in common ntier development models. I have been searching Google and Bing for months now, since the release of VS2010, and almost all of the articles I find are for old versions of the entity framework.
Why are no articles being written? Are too few people adopting the EF? I stick with ADO.NET for my projects simply because it is well documented and proven. EF?? Even the team behind it doesn't seem to do a good job promoting it! What kind of a future can it have??
All of my excitement over the EF has been turned into disappointment.
please fix broken links in this rather fundamental blog (e.g. 3rd and 4th link)…
cheers
How to handle IEditable Interace for Reference data type
@acesdesign I think the following entry from this blog will answer your questions
Nice tutorial on Self-Tracking Entities. It help me to understand it how it works. Thanks a lot for useful info.
Waqas
webdesignpluscode.blogspot.com
Great article, you can also check this 2 parts article on SQL Server Change tracking , Part 1 – sqlturbo.com/practical-intro-sql-server-table-change-tracking
and Part 2 sqlturbo.com/practical-intro-sql-server-table-change-tracking-part-2-column-tracking | https://blogs.msdn.microsoft.com/efdesign/2009/03/23/self-tracking-entities-in-the-entity-framework/ | CC-MAIN-2016-36 | refinedweb | 6,884 | 59.43 |
Last will be entered using the rules on timing outlined in “What is Morse Code?“
- An acceptable tolerance will be built in to timing, since it’s difficult to keep an exact rhythm.
- Blink a blue LED continuously, to the rhythm of the “base time”, to help with timing.
- Interpret dots and dashes using International Morse Code (IMC) (see “What is Morse Code?“)
- The message separator prosign AR ·-·-· will indicate the end of the message, after which the script will display its interpretation.
- When a dot or dash is timed correctly, blink a green LED 3x rapidly; otherwise, blink red 3x
- When a dot/dash sequence (indicated by a gap equal to 3 dots) is interpreted - as a valid letter/number, blink a green LED 3x rapidly,
- as invalid/unrecognized, blink a red LED 3x rapidly, then discard the sequence.
That might not be everything, but it does give us a general direction to run.
Designing the Circuit
Now let’s decide what we need in a circuit, based on the rules we just laid out.
- A button to “transmit” dots and dashes.
- A line from 3.3v through a 220Ω resistor, to one side of the button (let’s call it side 1).
- A line from the other side of the button (side 2) to pin 31 (GPIO 6).
- A 10kΩ pulldown resistor from pin 31 to ground. (Read about pullup / pulldown resistors.)
- A yellow LED and 220Ω resistor from side 2 of the button, to ground.
- A red LED and 220Ω resistor connecting pin 36 (GPIO 16) to ground.
- A green LED and 220Ω resistor connecting pin 32 (GPIO 12) to ground.
- A blue LED and 220Ω resistor connecting pin 11 (GPIO 17) to ground.
Here’s the kind of layout I planned out.
In retrospect, that resistor connecting the cathode side of the yellow LED to ground won’t hurt, but it’s unnecessary, since there’s already a resistor connecting 3.3v to the button.
And here are a few pictures of the actual board after I wired it up:
Writing the Script
Here’s the script, which is also available on GitHub:
import datetime import threading import time import RPi.GPIO as GPIO import InternationalMorseCode as ICM BASE_TIME_SECONDS = 1.0 TOLERANCE = BASE_TIME_SECONDS / 2.0 # Initialize GPIO settings def initialize_gpio(): GPIO.setmode(GPIO.BOARD) GPIO.setup([11, 32, 36], GPIO.OUT) # LEDs: Blue (metronome), Green (ok), Red (error) GPIO.setup(31, GPIO.IN) GPIO.output([32, 36], GPIO.LOW) GPIO.add_event_detect(31, GPIO.BOTH, callback=intercept_morse_code) # Blink a blue LED on/off (one full cycle per BASE_TIME_SECONDS) def metronome(): while True: GPIO.output(11, not GPIO.input(11)) time.sleep(BASE_TIME_SECONDS / 2.0) def initialize_metronome(): t = threading.Thread(target=metronome) t.daemon = True t.start() # Blink an LED on and off a few times rapidly, to signal success or failure def signal_to_user(channel): for num in range(1, 3): GPIO.output(channel, GPIO.HIGH) time.sleep(0.1) GPIO.output(channel, GPIO.LOW) time.sleep(0.1) def initialize_signal(channel): threading.Thread(target=signal_to_user, args=(channel,)).start() last_edge = GPIO.LOW press = datetime.datetime.now() release = datetime.datetime.now() # Intercept a rise or fall on pin 31 (button press/release) def intercept_morse_code(channel): global last_edge, press, release # Button pressed - determine if start of new letter/word if GPIO.input(channel) == GPIO.HIGH and last_edge == GPIO.LOW: last_edge = GPIO.HIGH press = datetime.datetime.now() detect_termination() # Button released - determine what the input is elif GPIO.input(channel) == GPIO.LOW and last_edge == GPIO.HIGH: last_edge = GPIO.LOW release = datetime.datetime.now() interpret_input() sequence = "" letters = [] words = [] # Detect whether most recent button press is start of new letter or word def detect_termination(): global sequence if sequence == "": return delta = calc_delta_in_sec(release, press) # Check for start of new letter (gap equal to 3 dots) if (delta >= ((BASE_TIME_SECONDS * 3) - TOLERANCE)) and (delta <= ((BASE_TIME_SECONDS * 4) + 
TOLERANCE)): process_letter() # Check for start of new word (gap equal to 7 dots - but assume anything > 7 dots is valid too) elif delta >= ((BASE_TIME_SECONDS * 7) - TOLERANCE): process_word() # If it's not a new letter or word, and it's a gap greater than a single dot, tell the user elif delta > (BASE_TIME_SECONDS + TOLERANCE): print("") # Process letter def process_letter(): global sequence character = ICM.symbols.get(sequence, '') if character != '': print("Interpreted sequence " + sequence + " as the letter: " + character) letters.append(character) sequence = "" initialize_signal(32) return True else: print('Invalid sequence: ' + sequence + " (deleting current sequence)") sequence = "" initialize_signal(36) return False # Process word def process_word(): if process_letter(): word = ''.join(letters) letters[:] = [] if word == "AR": print("End of transmission. Here's your message: " + ' '.join(words)) print('\nClearing previous transmission. Start a new one now...\n') words[:] = [] else: words.append(word) # Interpret button click (press/release) as dot, dash or unrecognized def interpret_input(): global sequence delta = calc_delta_in_sec(press, release) if (delta >= (BASE_TIME_SECONDS - TOLERANCE)) and (delta <= (BASE_TIME_SECONDS + TOLERANCE)): sequence += '.' print(str(delta) + " : Added dot to sequence: " + sequence) initialize_signal(32) elif (delta >= ((BASE_TIME_SECONDS * 3) - TOLERANCE)) and (delta <= ((BASE_TIME_SECONDS * 3) + TOLERANCE)): sequence += '-' print(str(delta) + " : Added dash to sequence: " + sequence) initialize_signal(32) else: print(str(delta) + " : Unrecognized input!") initialize_signal(36) def calc_delta_in_sec(time1, time2): delta = time2 - time1 return delta.seconds + (delta.microseconds / 1000000.0) try: initialize_gpio() initialize_metronome() message = raw_input("\nPress any key to exit.\n") finally: GPIO.cleanup() print("Goodbye!")
Hopefully most of this is self-explanatory, maybe with a little bit of studying the code. I’ll address a few points though. If you have questions about the rest of it, leave a comment and I’ll try to clarify it.
Metronome
If you’re unfamiliar with a metronome, it’s just a device that makes a regular beat or sound, to mark rhythm. I added an LED that blinks one cycle (on and off) per “base time”. All the code does is look at the current state of the LED (on or off), and flips it.
GPIO.output(11, not GPIO.input(11))
The
t.daemon piece causes it stop running the thread when the program ends. Otherwise, the light just keeps on blinking!
Success or Failure
The
signal_to_user method simply takes a pin and turns it on and off a few times, rapidly. That gives us the flashing green and red LEDs.
Note that both of these methods run in a separate thread, so as not to freeze up the main thread that our program is running on. You can read more about threading in Python here.
Detecting Button Clicks
The only thing we’re interested in is when a button was pressed and is then released, or vice versa. It’s possible, even with a pulldown resistor, to occasionally detect two presses or two releases in a row. Some of that is even due to the button itself… I’ve seen the duplicate events more often when I don’t push the button as forcefully, probably causing something inside to float between connected and disconnected a few times really quickly.
You can apply a “bouncetime” when you setup the pin, which tells it to ignore duplicate button presses that are really close together. But I preferred to just detect it and correct it myself, which is what I’m doing in
intercept_morse_code with the
last_edge stuff.
More Resources
Courses
If you’re interested, Coursera has a series of courses on Python, two of which are called Getting Started with Python and Python Data Structures. There’s another course I’ve started working through too, called Interfacing with the Raspberry Pi.
Reference Card
Here’s a reference card from toptechboy.com that shows what the pins do on the Pi. If you don’t have a cobbler that plugs into your breadboard, and you have to wire up individual GPIO pins on the Pi to your breadboard, you’ll want to keep this handy.
.
<img src="" width=300">
Helpful Links
- Simple Guide to the RPi GPIO Header and Pins
- International Morse Code – Sounds
- International Morse Code – Dots and Dashes
- An Introduction to Python Lists
- Manage Concurrent Threads (Python)
Final Thoughts
This project led me down the path of detecting the pin edges (whether the button is pressed or not, 1 or 0, on or off, high or low), and other pin-related concepts like bounce time. I wrote more about what I learned.
Quick note about buttons. When you use one on your breadboard for the first time, it might feel like it only goes so far. Be sure to give it a good firm push so it’s flush with the breadboard, otherwise it won’t come into contact like it should. Mine looked like it was in at first, but wasn’t registering clicks very well. | https://grantwinney.com/generating-morse-code-on-the-raspberry-pi-using-a-button-on-a-breadboard/ | CC-MAIN-2019-18 | refinedweb | 1,430 | 65.42 |
From: Emil Dotchevski (emildotchevski_at_[hidden])
Date: 2006-05-04 20:44:00
Hello,
I am using boost-build v2 to build my own code, and as a side bonus I can
refer to boost libs from my jamfiles, and they get built and linked
automatically.
But I have to admit I am very confused about projects. Can someone help me
understand this: if I have a jamfile which defines a project and a lib
target, what is the difference between specifying usage requirements through
the usage-requirements attribute of the project vs. specifying them in the
lib target?
Going through the boost-build documentation didn't help so I decided to look
at some jamfile.v2 files in boost. I'm puzzled by the content of jamfile.v2
for boost.thread:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
import os ;
if [ os.name ] = NT
{
reqts = <link>shared:<define>BOOST_THREAD_BUILD_DLL=1 ;
}
else
{
# Declare the uses system library
lib pthread : : <name>pthread ;
usage = <library>pthread ;
}
project boost/thread
: source-location ../src
: usage-requirements $(usage)
: requirements $(reqts) <threading>multi
: default-build <threading>multi
;
CPP_SOURCES = condition mutex recursive_mutex thread xtime once
exceptions barrier tss tss_hooks tss_dll tss_pe ;
lib boost_thread
: $(CPP_SOURCES).cpp
;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ideally I want any of my projects that ends up linking boost_thread to have
been built with <threading>multi without me having to specify it explicityl
in my requirements. But it doesn't seem to work.
In fact I am so confused by projects that I don't understand how am I
supposed to refer to boost.thread in my jamfiles! Do I use /boost/thread?
Or, do I have to refer to the boost_thread target explicitly like so:
/boost/thread//boost_thread. It seems that either way boost.thread is built
and linked, but the referring projects don't get built with
<threading>multi -- unless I explicitly put it in their requirements. And
even this is confusing: do I have to add <threading>multi to the project's
requirements, or to the lib target found in the same jamfile?
Please help, my head hurts from trying to understand this stuff!
Thanks,
Emil
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/boost-build/2006/05/13739.php | CC-MAIN-2019-47 | refinedweb | 370 | 67.35 |
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.11) Gecko/20071222 Remi/2.0.0.11-1 Firefox/2.0.0.11
Build Identifier: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.11) Gecko/20071222 Remi/2.0.0.11-1 Firefox/2.0.0.11
JavaScript event handlers registered with addEventListener are not disabled when either javascript.enabled or docShell.allowJavascript is toggled off. Other types of event handlers, such as those registered via the DOM event handler attributes, are properly canceled.
Reproducible: Always
Steps to Reproduce:
1. open about:config
2. Visit
3. Click on the page
4. Toggle javascript.enabled to false
5. Click on the Firefox bug page again. Observe that some event handlers have been disabled, while others have not
Actual Results:
While some types of event listeners are blocked, other event listeners still fire.
Expected Results:
No client-side javascript code should execute at all.
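Here is a minimal stand-alone sketch of the mechanism (this is a simplified model, not Gecko code; FakeElement and the jsEnabled flag are hypothetical): attribute handlers live in a single on* slot while addEventListener registrations live in a separate listener list, so a disable check that only gates the attribute path misses the listener list entirely.

```javascript
// Simplified model of the divergence between the two registration paths.
class FakeElement {
  constructor() {
    this.onclick = null;   // DOM attribute handler slot
    this.listeners = [];   // addEventListener registrations
  }
  addEventListener(type, fn) {
    if (type === "click") this.listeners.push(fn);
  }
  click(jsEnabled) {
    // Buggy gating: only the attribute slot consults the pref.
    if (jsEnabled && this.onclick) this.onclick();
    for (const fn of this.listeners) fn(); // fires regardless -- the bug
  }
}

const el = new FakeElement();
const fired = [];
el.onclick = () => fired.push("attribute");
el.addEventListener("click", () => fired.push("listener"));

el.click(true);   // both paths fire
el.click(false);  // only the listener path fires, even though JS is "off"
console.log(fired.join(","));
```

With correct behavior, the second click would fire nothing; in this model (as in the bug) the listener-list entry still runs.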
This bug is critical to Tor security. The Torbutton extension () relies on the ability to prevent pages from executing Javascript and other dynamic content once Tor has been disabled, to prevent pages from performing network activity after the user has changed Tor state.
Confirming bug.
Have you considered adapting the "session restore" functionality and simply closing existing pages, then resurrecting the user's state when they turn off the Tor button? Even if you kill off the remaining event listeners for existing pages I'd still be nervous about letting Tor users interact with them.
What we could probably do is make AutoScriptEvaluate::StartEvaluating return an error if script can't be executed on that cx with the relevant principal. That somewhat relies on the right cx being used, but we certainly do that with the cx pusher for event listeners.
Unfortunately, other callbacks into JS may not be so lucky (XMLHttpRequest readystate handlers come to mind, though there could also be issues in any DOM API that allows passing in page-implemented callbacks and might call them async... Or even that doesn't return for a long time (e.g. one of the callbacks does sync XMLHttpRequests or some such). With this last, for example, treewalker's nodefilters could be attacked. Or the namespace resolver from the proposed DOM API for Selectors. Or any of a number of other callbacks. I guess in the long-running situation the calling cx will still be on the stack, so things might be ok and we only need to worry about propagating the right cx into async callbacks...
That would only affect the docshell setting and per-site settings, though. If we fail to get the right cx but are really async, we'll get the safe context and that's affected by the general "turn off JS" pref. So at least as a first approximation doing the checks in StartEvaluating would work.
That said, nsXPCWrappedJSClass::CallQueryInterfaceOnJSObject does an OBJ_GET_PROPERTY before calling StartEvaluating. That doesn't seem safe... brendan, am I missing something there?
Requesting blocking, but I'm not completely sure it should, to be honest...
Not blocking as this is how we've always behaved and it's late in the release cycle. But wanted as it would be nice to have if someone came up with a safe fix.
Would like to point out that many people do depend on this functionality outside of Tor users. Users of Javascript toggling extensions such as QuickJava come to mind as well.
BZ or Jonas, do you have anyone in mind who could work on this?
Smaug might be a good candidate.
Or jst maybe. Jst: is there any way we can disable this on a javascript level? So that no matter which entry points we miss (what about timers) it'll still be blocked?
Any progress?
any news during the last few years?
It's a pretty low priority. I suspect that in the end someone who actually cares about dynamically enabling/disabling JS while a site is live will need to step up and fix this. I'm happy to point to relevant code as needed.
docShell.allowJavascript = false;
doesn't prevent JavaScript in the onload="" attribute of images either. This doesn't just affect websites, but XUL <editor>s as well. (ScribeFire, in particular.)
I've filed a separate bug for this, but it might be better included in this bug. I hope someone will fix it, because it greatly impacts major features of the <editor> element, including image drag and drop.
Yeah, that's different from this bug, since comment 0 explicitly says that attribute on* handlers _are_ prevented in the case this bug is filed about.
It seems that jsdIStackFrame.executionContext.scriptsEnabled may also be a victim.
We added this code to Firebug 1.5:
context.eventSuppressor = context.window.getInterface(Ci.nsIDOMWindowUtils);
if (context.eventSuppressor)
context.eventSuppressor.suppressEventHandling(true);
in addition to
executionContext.scriptsEnabled = false;
when we stop on a breakpoint.
This immediately solved one bug and we've not seen bad side effects. The only funny business was puzzling: if we call context.window.getInterface() after returning from the debugger's nested event loop, it fails. That is why we save the interface ref in context.eventSuppressor so we can use that to free the window after we return.
(In reply to comment #15)
> We added this code to Firebug 1.5:
> context.eventSuppressor = context.window.getInterface(Ci.nsIDOMWindowUtils);
> if (context.eventSuppressor)
> context.eventSuppressor.suppressEventHandling(true);
John, I've tried doing this call on the contentWindow attribute of the relevant tabbrowser.docShell attribute, but it is disabling all hotkeys in the entire browser window. For example, control-t will not cause that browser window to open a new tab if the tab itself is the currently selected widget (as opposed to the URL bar, for example).
Did you notice this side effect as well? If not, which window are you actually calling suppressEventHandling on?
(In reply to comment #16)
> ...but it is disabling all hotkeys in the entire
> browser window.
...
> Did you notice this side effect as well?
(Not that this is related to this bug, but) preventing all hotkeys if the focused windows has events suppressed is by design, not a side effect.
Mike, mozilla.dev.platform is a great place to ask such questions. And I'll confirm what Olli said as what we also see.
My apologies. I was just trying to keep some historical record for other people who hit this bug and try the workaround, since it is going on 3 years old, and I've long since lost any hope for an actual fix :)
I will try to investigate more and if needed continue workaround discussion somewhere else...
Status needs to be changed to Confirmed.
Not blocking 1.9.3 on this bug.
The Tor Project / Electronic Frontier Foundation is paying to have this bug fixed.
"If you know C++ and/or Firefox internals, we should be able to pay you for your time to address these issues and shepherd the relevant patches through Mozilla's review process."
Source:
The original test case no longer works. But if it did we could see if Firebug 1.7 can block the events. I guess yes, we use nsIDOMWindowUtils suppressEventHandling(true).
Created attachment 651688 [details]
Minimal testcase
Since the original test case has disappeared, I took the liberty of making a new one.
Note that this isn't just a privacy issue, it's also a performance issue: sometimes you need to disable Javascript to prevent the browser from slowing to a crawl. The current bug means that it's never possible for the user to completely disable Javascript when it was previously enabled, unless he reloads ALL open pages.
*** Bug 812056 has been marked as a duplicate of this bug. ***
*** Bug 652049 has been marked as a duplicate of this bug. ***
Fixed by bug 862627. | https://bugzilla.mozilla.org/show_bug.cgi?id=409737 | CC-MAIN-2017-04 | refinedweb | 1,314 | 67.04 |
Marble/Runners/Search
Search
Searching for cities, addresses, points of interest
Marble uses so-called runners to calculate routes, do reverse geocoding, parse files and search for placemarks (cities, addresses, points of interest, ...). This tutorial shows how to use the MarbleRunnerManager class to search for an arbitrary string (Karlsruhe in the example below, see Userbase for more information on search terms).
#include <QtGui/QApplication>
#include <QtCore/QDebug>

#include <marble/MarbleWidget.h>
#include <marble/MarbleModel.h>
#include <marble/SearchRunnerManager.h>
#include <marble/GeoDataPlacemark.h>

using namespace Marble;

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    MarbleModel *model = new MarbleModel;
    SearchRunnerManager* manager = new SearchRunnerManager( model );

    QVector<GeoDataPlacemark*> searchResult = manager->searchPlacemarks( "Karlsruhe" );
    foreach( GeoDataPlacemark* placemark, searchResult ) {
        qDebug() << "Found " << placemark->name() << "at" << placemark->coordinate().toString();
    }
}

Running this program prints output such as:
Found "Karlsruhe, Germany" at " 8° 33' 48.7"E, 49° 05' 38.4"N" Found "Karlsruhe, McLean" at "100° 36' 58.4"W, 48° 05' 27.7"N" Found "Karlsruhe, Karlsruhe, Stadt" at " 8° 24' 16.0"E, 49° 00' 50.6"N" Found "Karlsruhe, Remscheid" at " 7° 17' 35.3"E, 51° 09' 09.4"N" Found "Karlsruhe, Austria" at " 15° 19' 53.6"E, 47° 21' 33.4"N" Found "Karlsruhe, McLean" at "100° 37' 13.5"W, 48° 05' 24.0"N" Found "Karlsruhe, Sohland a.d. Spree" at " 14° 27' 36.5"E, 51° 02' 28.5"N" Found "Parkstraße, Bad Elster" at " 12° 14' 08.0"E, 50° 16' 57.9"N" Found "Karlsruhe (Bruchsal)" at " 8° 33' 48.7"E, 49° 05' 38.4"N" Found "Karlsruhe (Innenstadt-West)" at " 8° 24' 16.0"E, 49° 00' 50.6"N"
The latest source code of this example can be found here.
bzr branch lp:pyopengl-demo
then add the pyopengl/OpenGL directory to your PYTHONPATH. You can also
download the source distributions from:
easy_install PyOpenGL-Demo
You can then run the scripts in the PyOpenGL-Demo package.
Enjoy yourselves,
Mike
--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
wen heping wrote:
> Hi,
>
> In file OpenGL/__init__.py there is:
> PlatformPlugin( 'posix ', 'OpenGL.platform.glx.GLXPlatform' )
> Should it be:
> PlatformPlugin( 'posix', 'OpenGL.platform.glx.GLXPlatform' )
Yup, it should have been. I've updated it locally. I'm guessing no-one
with a non-linux posix machine has used PyOpenGL 3.x
Thanks,
Mike
--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
I didn't know that would be a problem. That completely fixes it
though--thanks!
Ian Mallett wrote:
Wouldn't this have created a double, rather than a float data-set? It does
on my machine:
>>> n = numpy.dstack( numpy.mgrid[0:255,0:255,0:1])/float(254)
>>> n.dtype
dtype('float64')
HTH,
Mike
--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
        threedimensionalgrid = threedimensionalgrid.reshape(self.size_squared, 3)
        self.vertex_vbo = vbo.VBO(threedimensionalgrid)

    def draw(self):
        glEnableClientState(GL_VERTEX_ARRAY)
        self.vertex_vbo.bind()
        glVertexPointerf(self.vertex_vbo)
        glDrawArrays(GL_POINTS, 0, self.size_squared)
        glBindBuffer(GL_ARRAY_BUFFER, 0)
        glDisableClientState(GL_VERTEX_ARRAY)

    def __del__(self):
        self.vertex_vbo.delete()
When it is drawn, I get an incorrect result--the vertices are all over the
place in semi-ordered, but incomprehensible patterns. I really need it to
be a grid. Pointers?
Thanks,
Ian
Hi,
In file OpenGL/__init__.py there is:
PlatformPlugin( 'posix ', 'OpenGL.platform.glx.GLXPlatform' )
Should it be:
PlatformPlugin( 'posix', 'OpenGL.platform.glx.GLXPlatform' )
More detailed description please visit:
Regards,
wen
DSA_SIGN(3) OpenSSL DSA_SIGN(3)
DSA_sign, DSA_sign_setup, DSA_verify - DSA signatures
 #include <openssl/dsa.h>

 int DSA_sign(int type, const unsigned char *dgst, int len,
              unsigned char *sigret, unsigned int *siglen, DSA *dsa);

 int DSA_sign_setup(DSA *dsa, BN_CTX *ctx, BIGNUM **kinvp, BIGNUM **rp);

 int DSA_verify(int type, const unsigned char *dgst, int len,
                unsigned char *sigbuf, int siglen, DSA *dsa);
DSA_sign() computes a digital signature on the len byte message digest dgst using the private key dsa and places its ASN.1 DER encoding at sigret. The length of the signature is placed in *siglen. sigret must point to DSA_size(dsa) bytes of memory.

DSA_sign_setup() may be used to precompute part of the signing operation in case signature generation is time-critical. It expects dsa to contain DSA parameters. It places the precomputed values in newly allocated BIGNUMs at *kinvp and *rp, after freeing the old ones unless *kinvp and *rp are NULL. These values may be passed to DSA_sign() in dsa->kinv and dsa->r. ctx is a pre-allocated BN_CTX or NULL.

DSA_verify() verifies that the signature sigbuf of size siglen matches a given message digest dgst of size len. dsa is the signer's public key.

The type parameter is ignored.

The PRNG must be seeded before DSA_sign() (or DSA_sign_setup()) is called.
DSA_sign() and DSA_sign_setup() return 1 on success, 0 on error. DSA_verify() returns 1 for a valid signature, 0 for an incorrect signature and -1 on error. The error codes can be obtained by ERR_get_error(3).
US Federal Information Processing Standard FIPS 186 (Digital Signature Standard, DSS), ANSI X9.30
dsa(3), ERR_get_error(3), rand(3), DSA_do_sign(3)
DSA_sign() and DSA_verify() are available in all versions of SSLeay. DSA_sign_setup() was added in SSLeay 0.8.
#include <wx/bmpbuttn.h>
A bitmap button is a control that contains a bitmap.
Notice that since wxWidgets 2.9.1 bitmap display is supported by the base wxButton class itself and the only tiny advantage of using this class is that it allows specifying the bitmap in its constructor, unlike wxButton. Please see the base class documentation for more information about images support in wxButton.
This class supports the following styles:
Note that the wxBU_EXACTFIT style supported by wxButton is not used by this class as bitmap buttons don't have any minimal standard size by default.
The following event handler macros redirect the events to member function handlers 'func' with prototypes like: void handlerFuncName(wxCommandEvent& event)
Event macros for events emitted by this class:
EVT_BUTTON(id, func): Process a wxEVT_BUTTON event, when the button is clicked.
Default ctor.
Constructor, creating and showing a button.
Button creation function for two-step creation.
For more details, see wxBitmapButton().
Helper function creating a standard-looking "Close" button.
To get the best results, platform-specific code may need to be used to create a small, title bar-like "Close" button. This function is provided to avoid the need to test for the current platform and creates the button with as native look as possible. | https://docs.wxwidgets.org/trunk/classwx_bitmap_button.html | CC-MAIN-2019-47 | refinedweb | 202 | 55.13 |
Welcome to Scala series. In this tutorial, you will learn about Scala Curried or currying functions in detail.
Usually, a method or a function can have any number of arguments.
Example
def add(a: Int, b: Int): Int = { a + b } add(1,3) Result : 4
The above function/method has two arguments a and b.
In Scala, a method or function can also have multiple parameter lists; such functions are called curried functions, and the technique is called currying.
def add(x: Int) = (y: Int) => { x + y }
The above function/method accepts two list of arguments x and y. The method should be invoked as below.
add(1)(3) Result : 4
def add2(x: Int) = (y: Int) => (z: Int) => (a: Int) => { x + y + z + a } add2(1)(2)(3)(4) Result : 10
def add3(x: Int) = (y: Int, z: Int) => (a: Int) => (b: Int) => { x + y + z + a + b } add3(1)(2,3)(4)(5) Result : 15
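For comparison, the same shape can be sketched outside Scala with plain closures. The rough Python analogue below (names chosen for illustration) also shows the usual payoff of currying: fixing the first argument once and reusing the partially applied function.

```python
def add(x):
    # add(1) returns a closure capturing x; calling it with y completes the sum
    def add_y(y):
        return x + y
    return add_y

add_one = add(1)            # partially applied: x is fixed to 1
result = add_one(3)         # 4, same as add(1)(3)

# A chain of four one-argument lists, like add2 in the Scala example:
def add2(x):
    return lambda y: lambda z: lambda a: x + y + z + a

total = add2(1)(2)(3)(4)    # 10
```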
References : Scala Curried or currying functions documentation
I hope you liked this mini tutorial and were able to understand how curried functions work.

Thanks for reading, and please give us a thumbs up and comment below!
Vue Timeago: timeago component for Vue.js
vue-timeago
Time to check out a simple and small component for, you guessed it, time. Use the timeago component in Vue.js projects, to monitor elapsed time since a certain date, with i18n support. For all supported languages, see /locales; it's easy to add support for a new language.
If you want to play around with it, check the Demo page. The author has disabled Vue-devtools usage on this page but you can try it in your own environment.
Example
To make use of this plugin, start by installing it in your project.
Install
yarn add vue-timeago
Import it into your project and choose a
locale as shown
import VueTimeago from 'vue-timeago' Vue.use(VueTimeago, { name: 'timeago', // component name, `timeago` by default locale: 'en-US', locales: { // you will need json-loader in webpack 1 'en-US': require('vue-timeago/locales/en-US.json') } })
The example provided is displaying 3 different simple usage instances
<!-- time is a dateString that can be parsed by Date.parse() --> <timeago :</timeago> <!-- Auto-update time every 60 seconds & time before this will not be converted = 60 --> <timeago :</timeago> <!-- custom locale --> <!-- use a different locale instead of the global config --> <timeago :</timeago>
The
The since prop is a string value representing a date. The string should be in a format recognized by the Date.parse() method.
So you can pass your own dates in a few different ways.
created () {
  // use the current time
  this.time = Date.now()
},
data () {
  return {
    // or set a standard time
    time: 'Jul 9, 2017'
  }
}
More options are available at the API section.
If you want to check out the source code of this plugin or submit a request, head to its repository available here. Created by @egoist. | https://vuejsfeed.com/blog/vue-timeago-timeago-component-for-vue-js | CC-MAIN-2019-35 | refinedweb | 294 | 66.84 |
Issue
This is a long one, so let me explain. I’m trying to write a discord bot in python using oauth2 in flask. Here is what I am trying to achieve in pseudocode: 1: user sends command in channel, 2: the bot then sends an embed with a link to the user that contains the oauth2 authorization, 3: the user clicks on the oauth2 and authorizes which gives the program their email address linked to their discord, 4: that data is then saved as a variable to be used later, and a dm is sent to the user containing their email address. Sounds simple.
Because discord.py is at 2.0 (so I can use views, buttons and the like), I'm not using cogs, as they were unreliable and finicky, so this is all one big file. I do have Flask and the discord bot running on separate threads (the discord bot on thread 1, and Flask on thread 2).
#imports
import discord
from discord.ext import commands
from decouple import config
import sqlite3
from flask import Flask, request, redirect, render_template
import requests
from waitress import serve
import threading

#discord oauth things
# ... (CLIENT_ID, CLIENT_SECRET, REDIRECT_URI, SCOPE, DISCORD_LOGIN,
#      DISCORD_TOKEN, DISCORD_API, CLIENTTOKEN and an intents object
#      are defined here) ...

class Bot(commands.Bot):
    def __init__(self):
        super().__init__(command_prefix=commands.when_mentioned_or('!'), intents=intents)

    async def on_ready(self):
        print(f'Logged in as {self.user} (ID: {self.user.id})')
        print('------')

client = Bot()

#all flask
app = Flask(__name__)

@app.route("/", methods = ["get"])
def index():
    return render_template('index.html')

@app.route("/login", methods = ["get"])
def login():
    return redirect(DISCORD_LOGIN)

@app.route("/success", methods = ["get"])
def success():
    code = request.args.get("code")
    useraccesstoken = getaccesstoken(code)
    useremail = getuseremail(useraccesstoken)
    return render_template('success.html'), useremail

def getaccesstoken(code):
    payload = {
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "scope": SCOPE
    }
    headers = {
        "Content-Type": "application/x-www-form-urlencoded"
    }
    accesstoken = requests.post(url = DISCORD_TOKEN, data = payload, headers = headers)
    json = accesstoken.json()
    return json.get("access_token")

def getuseremail(useraccesstoken):
    url = DISCORD_API + "/users/@me"
    headers = {
        "Authorization": f"Bearer {useraccesstoken}"
    }
    userdata = requests.get(url = url, headers = headers)
    userjson = userdata.json()
    return userjson.get("email")

def web():
    serve(app, host="127.0.0.1", port=5000)

#command
@client.command()
async def getemail(ctx):
    firstmessageembed = discord.Embed(
        title = "Link your Plex and your Discord",
        color = discord.Color.from_rgb(160, 131, 196),
        description = "🔗 Please click [HERE]() to get started.")
    firstmessageembed.set_author(name = ctx.message.author, icon_url = ctx.author.avatar.url)
    firstmessageembed.set_footer(text = f'You have 30 seconds to authorize.')
    await ctx.send(embed = firstmessageembed)
    await client.wait_for(????????????????)
threading.Thread(target=web, daemon=True).start()
client.run(CLIENTTOKEN)
As you can see I have no idea what I am waiting for and I have no idea how to know when the user has submitted the oauth2.
Solution
Okay so I did a lot of experimenting. Basically you will need to thread the discord bot and then use QUART to use await async to get the data from it. Because there is really no way to send data between, you need to use a database, store the data that you get from Quart into the database, then just access it from the discord bot when you need to.
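The hand-off through a database that this answer describes can be sketched concretely. The snippet below is only an illustration (table and function names are made up, not from the original code): the web callback stores the email it obtained from OAuth, and the bot side reads it back later.

```python
import sqlite3

# Hypothetical hand-off table shared between the web callback and the bot.
def init_db(path):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS oauth_emails "
                 "(discord_id TEXT PRIMARY KEY, email TEXT)")
    conn.commit()
    return conn

# Called from the Quart/Flask /success route after the OAuth exchange.
def save_email(conn, discord_id, email):
    conn.execute("INSERT OR REPLACE INTO oauth_emails VALUES (?, ?)",
                 (discord_id, email))
    conn.commit()

# Called from the bot side, e.g. polled inside the command until it
# returns a value or the 30-second window expires.
def fetch_email(conn, discord_id):
    row = conn.execute("SELECT email FROM oauth_emails WHERE discord_id = ?",
                       (discord_id,)).fetchone()
    return row[0] if row else None
```

In the bot command, `fetch_email` would be polled in a loop instead of `client.wait_for(...)`, since there is no Discord event to wait for when the browser finishes the OAuth flow.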
This answer, collected from Stack Overflow, is licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
Individual Tax Accounting – 2011 edition
Chapter 4 Homework Solutions

14. Tom must include in his gross income his share of the partnership's income, regardless of whether the profits are actually distributed. Therefore, Tom must recognize as gross income from the partnership $120,000 ($300,000 × 40%) in 2010 and $320,000 ($800,000 × 40%) in 2011. pp. 4-18 and 4-19

23. Betty's tax return for 2010 will reflect a loss on the annuity contract in 2010 because she collected on the policy for only 12 years when she was expected to collect for 20 years. The deductible loss is computed as follows:

Cost of the policy: $72,000
Annual payment: $6,000
Expected number of payments (Table 4.1): × 20
Total expected return: $120,000
Cost/Expected return ($72,000/$120,000) = Exclusion ratio percentage: 60%
Total payments received (12 × $6,000): $72,000
Basis recovery (Exclusion ratio × 12 payments received): ($43,200)
Unrecovered cost = loss allowed: $28,800

pp. 4-28 to 4-31

54. a. Alice's gross income from the excess coverage is computed as follows:
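The exclusion-ratio arithmetic in problem 23 can be checked with a short script using the figures given in the solution:

```python
cost = 72_000             # cost of the annuity contract
annual_payment = 6_000
expected_payments = 20    # expected number of payments (Table 4.1)

expected_return = annual_payment * expected_payments   # total expected return
exclusion_ratio = cost / expected_return               # 72,000 / 120,000 = 60%

payments_received = 12
total_received = annual_payment * payments_received    # amount actually collected
basis_recovered = exclusion_ratio * total_received     # basis recovered tax-free
unrecovered_cost = cost - basis_recovered              # deductible loss

print(expected_return, exclusion_ratio, basis_recovered, unrecovered_cost)
```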
------------------------------------------------------------------------------
-- |
-- Module     : Control.Concurrent.Chan.Split.Implementation
-- Copyright  : (c) 2012 Leon P Smith
-- License    : MIT
--
-- Maintainer : leon@melding-monads.com
--
------------------------------------------------------------------------------

module Control.Concurrent.Chan.Split.Implementation where

import Control.Concurrent.MVar
import Control.Exception (mask_)
import System.IO.Unsafe (unsafeInterleaveIO)

type List a = MVar (Item a)

data Item a = Item a !(List a)

-- | @SendPorts@ represent one end of the channel.  There is only one
-- @SendPort@ per channel, though it can be used from multiple threads.
-- Messages can be sent to the channel using 'send'.
newtype SendPort a = SendPort (MVar (List a))

-- |
newtype ReceivePort a = ReceivePort (MVar (List a))

-- | Create a new @SendPort@.  Messages written to this channel before a
-- reader is @'listen'ing@ will be eligible for garbage collection.
newSendPort :: IO (SendPort a)
newSendPort = SendPort `fmap` (newMVar =<< newEmptyMVar)

-- | Create a new @ReceivePort@ attached the same channel as a given
-- @SendPort@.  This @ReceivePort@ starts out empty, and remains so
-- until more elements are written to the @SendPort@.
listen :: SendPort a -> IO (ReceivePort a)
listen (SendPort a) = ReceivePort `fmap` withMVar a newMVar

-- | Create a new @ReceivePort@ attached to the same channel as another
-- @ReceivePort@.  These two ports will receive the same messages.
-- Any messages in the channel that have not been consumed by the
-- existing port will also appear in the new port.
duplicate :: ReceivePort a -> IO (ReceivePort a)
duplicate (ReceivePort a) = ReceivePort `fmap` (newMVar =<< readMVar a)

-- | Send a message to a channel.  This is asynchronous and does not block.
send :: SendPort a -> a -> IO ()
send (SendPort s) a = do
    new_hole <- newEmptyMVar
    mask_ $ do
      old_hole <- takeMVar s
      putMVar old_hole (Item a new_hole)
      putMVar s new_hole

-- | A right fold over a receiver, return (f a b) ...
-- |
sendMany :: SendPort a -> [a] -> IO ()
sendMany _ [] = return ()
sendMany s (a:as) = do
    new_hole <- newEmptyMVar
    loop s (Item a new_hole) new_hole as
  where
    loop s msgs hole (a:as) = do
        new_hole <- newEmptyMVar
        putMVar hole (Item a new_hole)
        loop s msgs new_hole as
    loop (SendPort s) msgs new_hole [] = mask_ $ do
        hole <- takeMVar s
        putMVar hole msgs
        putMVar s new_hole
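The data structure in this module is a linked list of "holes": each send fills the current empty cell with a value plus a fresh empty cell, and every receive port just remembers its own position in the list, which is what makes duplicate a multicast operation. A toy Python model of that idea (not the library itself, and without the per-port locking the MVars provide) might look like this:

```python
import threading

class _Hole:
    """An as-yet-unfilled cell in the message list (the MVar 'hole')."""
    __slots__ = ("value", "next", "filled")
    def __init__(self):
        self.filled = threading.Event()
        self.value = None
        self.next = None

class SendPort:
    def __init__(self):
        self._hole = _Hole()
    def send(self, value):
        # Fill the current hole with (value, new_hole), like Item a new_hole.
        new_hole = _Hole()
        hole = self._hole
        hole.value = value
        hole.next = new_hole
        self._hole = new_hole
        hole.filled.set()
    def listen(self):
        # A new receiver starts at the current hole: it sees only
        # messages sent from now on.
        return ReceivePort(self._hole)

class ReceivePort:
    def __init__(self, hole):
        self._hole = hole
    def receive(self):
        hole = self._hole
        hole.filled.wait()        # block until a sender fills this cell
        self._hole = hole.next
        return hole.value
    def duplicate(self):
        # Both ports now point at the same cell, so both see the
        # same remaining messages.
        return ReceivePort(self._hole)
```

Unconsumed cells stay reachable from each receiver independently, so duplicating a port replays everything that port has not yet consumed, matching the documented behaviour of duplicate above.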
The name of a class or namespace member or enumerator can be referred to after the :: scope resolution operator ([expr.prim]) applied to a nested-name-specifier that denotes its class, namespace, or enumeration. [ Example:
class A {
public:
  static int n;
};

int main() {
  int A;
  A::n = 42;        // OK
  A b;              // ill-formed: A does not name a type
}
— end example ]
[ Note: Multiply qualified names, such as N1::N2::N3::n, can be used to refer to members of nested classes ([class.nest]) or members of nested namespaces. — end note ]

In a declaration in which the declarator-id is a qualified-id, names used before the qualified-id being declared are looked up in the defining namespace scope; names following the qualified-id are looked up in the scope of the member's class or namespace. [ Example:

class X { };
class C {
public:
  class X { };
  static const int number = 50;
  static X arr[number];
};
X C::arr[number];   // ill-formed:
                    // equivalent to ::X C::arr[C::number];
                    // and not to C::X C::arr[C::number];
— end example ]

A name prefixed by the unary scope operator :: ([expr.prim]) is looked up in global scope, in the translation unit where it is used. The name shall be declared in global namespace scope or shall be a name whose declaration is visible in global scope because of a using-directive ([namespace.qual]). The use of :: allows a global name to be referred to even if its identifier has been hidden.
A name prefixed by a nested-name-specifier that nominates an enumeration type shall represent an enumerator of that enumeration.
If a pseudo-destructor-name ([expr.pseudo]) contains a nested-name-specifier, the type-names are looked up as types in the scope designated by the nested-name-specifier. Similarly, in a qualified-id of the form:
nested-name-specifieropt class-name :: ~ class-name
the second class-name is looked up in the same scope as the first. [ Example:

struct C {
  typedef int I;
};
typedef int I1, I2;
extern int *p;
extern int *q;
p->C::I::~I();      // I is looked up in the scope of C
q->I1::~I2();       // I2 is looked up in the scope of the
                    // postfix-expression

struct A {
  ~A();
};
typedef A AB;
int main() {
  AB* p;
  p->AB::~AB();     // explicitly calls the destructor for A
}
— end example ] [ Note: [basic.lookup.classref] describes how name lookup proceeds after the . and -> operators. — end note ]
If the nested-name-specifier of a qualified-id nominates a class, the name specified after the nested-name-specifier is looked up in the scope of the class ([class.member.lookup]), except for the cases listed below.
If the nested-name-specifier of a qualified-id nominates a namespace (including the case where the nested-name-specifier is ::, i.e., nominating the global namespace), the name specified after the nested-name-specifier is looked up in the scope of the namespace. The names in a template-argument of a template-id are looked up in the context in which the entire postfix-expression occurs.
For a namespace X and name m, the namespace-qualified lookup set S(X,m) is defined as follows: Let S′(X,m) be the set of all declarations of m in X and the inline namespace set of X. If S′(X,m) is not empty, S(X,m) is S′(X,m); otherwise, S(X,m) is the union of S(Ni,m) for all namespaces Ni nominated by using-directives in X and its inline namespace set.
Given X::m (where X is a user-declared namespace), or given ::m (where X is the global namespace), if S(X,m) is the empty set, the program is ill-formed. Otherwise, if S(X,m) has exactly one member, or if the context of the reference is a using-declaration, S(X,m) is the required set of declarations of m. Otherwise if the use of m is not one that allows a unique declaration to be chosen from S(X,m), the program is ill-formed.
[ Note: The same declaration found more than once is not an ambiguity (because it is still a unique declaration). — end note ]
[ Example: Because each referenced namespace is searched at most once, the following is well-formed:

namespace A {
  int a;
}
namespace B {
  using namespace A;
}
namespace C {
  using namespace A;
}
namespace BC {
  using namespace B;
  using namespace C;
}

void f() {
  BC::a++;          // OK: S(BC,a) = S(B,a) ∪ S(C,a) = { A::a }
}

— end example ]
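The recursive definition of S(X,m) above lends itself to a direct toy model. The sketch below (which ignores inline namespace sets and the type/non-type hiding rule) shows that when namespaces B and C both nominate A via using-directives, a qualified lookup through both still yields the single declaration A::a:

```python
# Toy model of the namespace-qualified lookup set S(X, m).
class Namespace:
    def __init__(self, name):
        self.name = name
        self.decls = {}   # member name -> declaration (here: a string id)
        self.using = []   # namespaces nominated by using-directives

def lookup_set(X, m, visited=None):
    """Compute S(X, m): the declarations of m in X if any (S'(X, m)),
    otherwise the union of S(Ni, m) over the namespaces Ni nominated
    by X's using-directives.  Each namespace is searched at most once."""
    if visited is None:
        visited = set()
    if id(X) in visited:
        return set()
    visited.add(id(X))
    if m in X.decls:                 # S'(X, m) is not empty: stop here
        return {X.decls[m]}
    out = set()
    for N in X.using:                # otherwise, union over using-directives
        out |= lookup_set(N, m, visited)
    return out

# Namespaces B and C both nominate A; BC nominates B and C.
A, B, C, BC = (Namespace(n) for n in "A B C BC".split())
A.decls["a"] = "A::a"
B.using.append(A)
C.using.append(A)
BC.using += [B, C]
```

Calling `lookup_set(BC, "a")` returns the one-element set `{"A::a"}`, so the reference is unambiguous even though A is reachable along two paths, matching the note that the same declaration found more than once is not an ambiguity.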
During the lookup of a qualified namespace member name, if the lookup finds more than one declaration of the member, and if one declaration introduces a class name or enumeration name and the other declarations either introduce the same variable, the same enumerator, or a set of functions, the non-type name hides the class or enumeration name if and only if the declarations are from the same namespace; otherwise (the declarations are from different namespaces), the program is ill-formed.
HANA Daily Monitoring Template
This blog is intended to help HANA admins by providing a sample template that they can use for monitoring their HANA environment on a proactive basis.
You are all invited to add your comments if you feel we should include more steps in the template. Its intention is monitoring, not resolution; each issue reported can be resolved separately. You can also use this template as a pointer for what should be monitored if you are monitoring your SAP HANA landscape via SAP Solution Manager.
1. Check first if all the services are running fine :-
2. Run the uniqueChecker program (you can also schedule it in your crontab, so as to get updates automatically in your mailbox).
This program helps you to find duplicate entries in tables. Reach out to SAP to get the program, or refer to the relevant SAP note, if you do not have it.
3. Check for CRASH dumps, if any:-
Check it in the admin console –> Performance –> to see the dumps (OOM dumps as well); give the search text as "dump".
If you find any crash dump –> analyze whether it is caused by a particular query –> notify the query owner to optimize it in case it is causing dumps.
4. check SLT – if any table has error status
No error so all is good.
5. Check LTR as well :
Also check the traditional T-codes ST22 and SM21 , it should not have any critical dumps .
6. clean up the garbage memory:-
frequency could be everyday or once in 3 days you can decide after seeing the pattern :
execute mm gc -f
It triggers the garbage collector and without unloading the tables it free up memory .
Remark – to execute mm gc -f you need to log in HANA server –> HDBAdmin.sh–>Services–>console –>select the node –> execute the command.
7. Validate Backup – Successful backup taken on **/**/** . Next Back Up on **/**/**.
Analyze if the backup failed and take action accordingly .
Hope this template helps you to keep you HANA environment healthy and running 🙂 . Happy Monitoring .
Please add any step you feel should be part of daily monitoring task .
Hi there!
Nice blog and thanks for sharing.
Few comments:
Point 2 (uniquechecker) and 6 (garbage cleanup) are definitively not required nor recommended to run on a daily basis.
Also, HDBAdmin is not supported for any use outside SAP HANA development.
Concerning the validate backup: full agreement, the backups need to be checked.
But simply checking if they ran without error doesn't cut it.
To actually validate backups a recovery on a separate instance is necessary - otherwise you never know if you could actually perform the recovery.
- Lars
hi Lards,
thanks for the input.
i agree with you these two take lot of toll and time, so it should not be ran everyday.
But as said (in my case also ) it depends upon your requirement especially for unique checker . If you see frequent table corruption , so give filter on that particular table/s and run for it only rather than running for all table.
regards,
vinaysingh
Singh,
when you see frequent table corruption, you don't need the uniqueChecker. You need new hardware (or upgrade to a newer revision...)
- Lars
Hi Lars,
Thanks for the valuable input,we are in process of doing the same ( as recommended by AGS to us) .
will update how it does after the Hardware upgrade.
regards,
vinay singh
Lars - to my other point - how can you really observe table corruption?
Regards,
Justin
Hi Justin
Corruption in data structures can be experienced in all sorts of ways:
For most cases SAP HANA should be able to figure out a corruption by itself and also "repair" the corrupted data by re-reading the last saved state from disk and re-applying all changes performed since (apply redo log).
- Lars
I'ver added hdbbackupcheck to my backup script. No substitute for actually restoring on another instance, but verifies consistency.
1869119 - Checking backups using hdbbackupcheck
Hi Jake
Can we include this command to the end of the backup script ?
Or we have to check backup for each backup files generated ?
Thanks
Rabi
Hi Lars/Vinay, I am just curious as to the garbage collector function. I searched all help documents and SCN and can't really find any documents that describe this in more detail. Does this "garbage" affect HANA globally or only specific tables? In what cases is "garbage" created?
Reason I am interested is that I was involved in a scenario last week where the performance on a few select tables degraded horrendously, whereas all other tables in the system were performing optimally. Even a SELECT COUNT(*) on the affected tables was taking upwards of 1 minute on 250 million records, where the same query on a 1.5 billion row table took 250ms. On checking the merge, column optimization, system load and all the "normal" methods to see what may be affecting performance, I came up empty-handed with no explanation. Miraculously, at some later point, the specific tables started performing normally again - no action or explanation.
I am wondering if garbage collection could be an explanation or if there are other underlying "corruption" indicators to check for on specific tables.
Regards,
Justin
Hi Justin
sorry, too few details here to even make an educated guess.
"Garbage" data is all data we don't need any more. This includes old versions of data that once were current as well as temporary data and so forth.
The garbage collection works allocator and virtual file wise. E.g. LOB columns have their own separate memory handling.
All this happens automatically and typically no user interaction is required.
- Lars
Thanks Lars. I guess the main thing I wanted to know is whether the so-called garbage (if left to pile up) would affect performance globally or on an object-by-object basis.
Regards,
Justin
nice article
Thanks
Nice article. Thanks a lot. I will follow the script as stated, until I can learn better.
This is good as a base line to start.
Thanks again.
Thanks Randy,
for reading and liking it.
regards,
vinaysingh
nice Article
Dear Vinay,
Very nice info... Thank you... Kindly keep writing more blogs.
Regards,
V Srinivasan
Hello Vinay,
Very nice article. Thanks.
Need your help on one issue,
Following point 2, I have scheduled the unique checker in crontab on the HANA server with the sidadm user, but the script fails with the below error:
Traceback (most recent call last):
File "uniqueChecker.py", line 8, in <module>
from hdbcli import dbapi
File "/HANA/sapmnt/<SID>/exe/linuxx86_64/HDB_1.00.53.375657_1048054/python_support/hdbcli/dbapi.py", line 15, in <module>
import pyhdbcli
ImportError: No module named pyhdbcli.
However, I am able to run Unique checker manually on the server.
thanks Vinay .....very helpful info
Hi Vinay,
there's a useful OSS Note which complements this blog and subject:
1977584 - Technical Consistency Checks for SAP HANA Databases
Best regards,
Andy.
Thanks Vinay, it is a very nice doc.
Is there a way to schedule the garbage collection?
Garbage collection is triggered after a transaction is committed and also periodically (every hour by default). A transaction that is currently committing can be identified in the Threads tab (see System Performance Analysis). The Thread type will be “SqlExecutor” and the Thread method “commit”.
The periodic garbage collection can be identified by Thread Type “MVCCGarbageCollector”.
Note that the periodic garbage collection interval can be configured in the indexserver.ini file transaction section with the parameter mvcc_aged_checker_timeout.
See SAP Note 2169283 for details about SAP HANA garbage collection including ways to trigger certain types of garbage collections. | https://blogs.sap.com/2014/04/05/hana-daily-monitoring-template/ | CC-MAIN-2022-05 | refinedweb | 1,268 | 64.1 |
I'm making a firefighter game where the goal is to extinguish all the fires in a building. I have managed to shoot water, walk around and so on. But how do I make it so that when I shoot the particle (which I attach to an invisible cube) for 3 seconds, it deletes and the fire stops?
Thanks
I think what you are describing and looking for an answer to is the core of your game. How much of Unity have you studied? Are you familiar with scripts? Can you write and understand scripts yourself?
Not so much. Some friends and I need to make it for a school project.
@artsdcs: I am now working on a similar project... so could you please help me complete it?
Answer by ptpaterson · Jun 02, 2012 at 08:13 PM
The easiest way would be to use the legacy particle system, so you can add collision detection to the particles.
Set up your particle emitters that will represent the fire hose object emitting water particles and all of the flammable objects emitting fire particles. The legacy system requires you to add a particle emitter, a particle animator, and a particle renderer to the objects (See the menu Components->Effects->Legacy Particles). It doesn't use a separate GameObject particle system.
Add the World Particle Collider component to the fire hose object and check the "Send Collision Message" box in the inspector. This will let you use the collision in a script.
Next you will need to add a script to the flammable objects you created. I know C# better than javascript so this is in C#. Add the OnParticleCollision function to the script. Inside here you can create a counter that tracks all of the particles that have hit, and if it reaches a certain number then you can turn the fire emitter off. In the Update function you can decrease the counter. Think of it like the fire's health bar.
public class Flamable : MonoBehaviour {
public float FireHealth = 50;
public float MaxFireHealth = 50;
public float HealthRegen = 5;
public bool IsOnFire = true;
void OnParticleCollision(GameObject other) {
if(IsOnFire) {
FireHealth -= 1.0f;
if (FireHealth <= 0) {
IsOnFire = false;
transform.GetComponent<ParticleEmitter>().emit = false;
// other things to do when fire goes out
}
}
}
void Update() {
if (IsOnFire) {
FireHealth += Time.deltaTime * HealthRegen;
if (FireHealth > MaxFireHealth) {
FireHealth = MaxFireHealth;
}
}
}
}
Take a look at this question for some additional details on the particle system.
Thanks for the help, but I get an error that says:
Assets/Flamable.cs(1,25): error CS0246: The type or namespace name `MonoBehaviour' could not be found. Are you missing a using directive or an assembly reference?
I'm probably doing something wrong.
The particles from the fire on a cube are just like you said, but the water is a separate particle emitter attached to a fire hose that is attached to the camera.
I added your script to the flaming cube and when I press play I get this error.
Hey man, it works great. I just added using UnityEngine; and it worked.
But could it also be possible to remove the object itself together with the fire, so that you can't go through (because of the collider) until you have extinguished the fire?
And by the way, thank you for your time.
Sorry I didn't get back in time to help, but I'm glad you figured it out. This helped me learn a lot.
One of the most significant code smells is having duplicate code. There are primarily 2 forms of code duplication:
- Explicit – These are the ones where there is blatant copy-paste of the code. Methods are repeated within classes, and it is easy for the CPD tool of PMD to figure out that lines are copied, thus leaving us red-faced.
- Subtle – This is the more dangerous form of duplication in which, though differences appear at a syntactical level, the structure and the processing steps are quite the same.
One such form of subtle duplication is checking for null. You would have surely encountered a case like this, which we had in our application
[sourcecode language="java"]
public void upgradeTheDepartment(UserDepartment department){
    User headOfDepartment = getHeadOfDepartment(department);
    if (headOfDepartment != null)
    {
        SalaryStructure structure = headOfDepartment.getSalaryStructure();
    }
    …
    …
}
[/sourcecode]
If you notice, in this case, the UserDepartment has a 1 to 0..1 relationship with the User.
In this simplistic code you can see that we check for the user being not null and then, on the basis of that, do some useful work. Having this null check in one or two places is not an issue. However, it becomes an issue when you need this check in several places. This checking for null is a subtle form of code duplication which you should avoid. Such duplication works against simple changes, such as fixes or improvements, and is in violation of the DRY principle.
How do we avoid this?
Wouldn't it be easy if the code could read like this:
[sourcecode language="java"]
public void upgradeTheDepartment(UserDepartment department){
    User headOfDepartment = getHeadOfDepartment(department);
    SalaryStructure structure = headOfDepartment.getSalaryStructure();
    …
    …
}
[/sourcecode]
Well, at least it looks easy to read without the null check every time. But wouldn't this code throw an NPE when the headOfDepartment does not exist?
Introduce the Null Object.
Provide something for nothing: A class that conforms to the interface required of the object reference, implementing all of its methods to do nothing or to return suitable default values. So in our case, we would create a Null User for head of department and always assign it to the department. Hence, whenever the department is created it would have code like this
[sourcecode language="java"]
public class UserDepartment
{
    public UserDepartment()
    {
        this.setHeadOfDepartment(new NullUser());
    }
    // ...
}
[/sourcecode]
Introducing a NULL OBJECT simplifies the client’s code by eliminating superfluous and repeated conditionals that are not part of a method’s core logic. Selection and variation are expressed through polymorphism and inheritance rather than procedural condition testing.
Ideally, you would like to introduce the Null Object as an implementation of the interface so that the variation can be easily expressed through polymorphism. In our example with the NullUser headOfDepartment, a call to the salary structure would return a blank salary structure.
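To make this concrete, here is a minimal, self-contained sketch of such an interface-based Null Object. The names follow this article's example (User, NullUser, SalaryStructure), but the exact methods are assumptions for illustration, not code from our actual application.

```java
// Minimal sketch of a Null Object behind an interface.
// NullUser conforms to User but returns a harmless default,
// so callers never need a null check.
interface User {
    SalaryStructure getSalaryStructure();
}

class SalaryStructure {
    private final double baseSalary;

    SalaryStructure(double baseSalary) {
        this.baseSalary = baseSalary;
    }

    double getBaseSalary() {
        return baseSalary;
    }
}

class NullUser implements User {
    @Override
    public SalaryStructure getSalaryStructure() {
        // A "blank" salary structure instead of null.
        return new SalaryStructure(0.0);
    }
}

public class Main {
    public static void main(String[] args) {
        User headOfDepartment = new NullUser();
        // No null check needed; the call simply yields a default.
        System.out.println(headOfDepartment.getSalaryStructure().getBaseSalary());
    }
}
```

With this in place, the client code shown earlier can call getSalaryStructure() unconditionally; the conditional logic has moved into polymorphism.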
The object relationship moves from being optional to mandatory, making the use of the relationship more uniform. If you notice, with this implementation there now exists a mandatory relationship between the UserDepartment and User objects.
Hence remember,
IF an object reference could potentially be null and this reference must be checked before every use and the result of a null check is to do nothing or assign a suitable default value THEN it is better to provide a class derived from the object reference’s type and implement all its methods to do nothing or provide default results and use an instance of this class whenever the object reference would have been null. This would not only help in removing the superfluous Null Checks but would also keep you away from the ever-increasing NPEs.
We write clean code. Even though we might be focussed on complex Enterprise Java and Cloud Computing, Inphina truly believes that great software is simple and clean.
8 thoughts on “Duplicate Code? Introduce Null Object”
neat tip, thanks!
Interesting thought. However, I see a potential problem when using an ORM. Mostly, you'll want to save the UserDepartment with a potential NullUser.
Don’t have to do something like
if (userDepartment.getUser() instanceof NullUser)
remove the null user
?
Hi Andries, you bring an interesting point to the table. But if we have code like -> if (userDepartment.getUser() instanceof NullUser) remove the null user, and if this check happens to be in more than a couple of places, then we fall back into the same duplicate-code trap again. I believe that storing the NullUser with the UserDepartment is not an issue. This would also be helpful when, say, you would like to retrieve the user department and get the headOfDepartment to do some processing. Now, since you would always have the NullUser returned with do-nothing processing, it would not blow up the code.
Also, if you noticed, we have made the mapping of UserDepartment and User 1:1 instead of 1:0..1; in that case too it would be prudent to save the NullUser with the UserDepartment. So I would store the NullUser with the UserDepartment with the ORM mapping too.
The implications of introducing stub entities into your database might be trickier than you first think.
When displaying a list of all users, with the headOfDepartment next to it, will again need an “instance of NullUser” check since you’d want to display nothing when it’s a NullUser.
About the ORM: I've thought about it, and if you are using Hibernate, it would be possible to write a UserType that returns NullUser instead of null when no user is found.
Have you finished a project with this approach?
> When displaying a list of all users, with the headOfDepartment next to it, will again need an “instance of NullUser” check since you’d want to display nothing when it’s a NullUser.
No, actually not; you would call the methods on the NullObject as though it were the actual not-null object. The method implementations would be such that they do nothing. I hope that makes it clear that you would not have to check the instance at all.
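To illustrate the reply above: when rendering a list, the Null Object can simply return a neutral display value, so no instanceof check is needed. The names here (RealUser, getDisplayName) are invented for this sketch; they are not from the original application.

```java
// Sketch: displaying users without checking for NullUser.
interface User {
    String getDisplayName();
}

class RealUser implements User {
    private final String name;

    RealUser(String name) {
        this.name = name;
    }

    public String getDisplayName() {
        return name;
    }
}

class NullUser implements User {
    // Rendering a NullUser shows nothing - no special casing required.
    public String getDisplayName() {
        return "";
    }
}

public class Main {
    public static void main(String[] args) {
        User[] headsOfDepartment = { new RealUser("Alice"), new NullUser() };
        for (User head : headsOfDepartment) {
            System.out.println("Head of department: " + head.getDisplayName());
        }
    }
}
```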
We have used this approach on our SaaS project on Google App Engine. We used JPA for abstraction to the datastore. | https://blog.knoldus.com/duplicate-code-introduce-null-object/?shared=email&msg=fail | CC-MAIN-2019-43 | refinedweb | 1,021 | 53.81 |
PRU debugging on Linux - pView whereabouts?
Hi, I'm developing a high-speed PRU stepper generator native on Linux.
I notice the PASM manual refers to a debugging tool called 'pView' - is that available?
A minimal debugging capability on-system (inspect/modify registers/memory, break, step, etc.) would be extremely valuable - the CCS tool is a bit heavy for basic debugging tasks, and I'm unsure if it is actually suited to debug a PRU application like demoed in
I notice that a seemingly half-finished PRU debugger is at - anybody working on that?
Thanks in advance
Michael
If you're looking for a PRU debugger, you might want to try out prudebug ().
Hi Steven
thanks for the hint
actually got it to work on a beaglebone black after changing prudbg.h like so:
// configuration defines
//#define PRUSS_START 0x01C30000
#define PRUSS_START 0x4a300000
#define PRUSS_LEN 0x10000
- Michael
Hi Michael,
No problem. I'm glad you were able to get it working on the Beaglebone Black. Hopefully you'll find it useful. If not, please let me know what was missing or needs improvement.
Steve
Michael, Steven
I have been trying to make it work on the Beaglebone Black but so far have been unsuccessful.
I tried just as Michael did with modifying the PRUSS_START address. I did the following steps:
prudebug <cr>
pru 0
L 0x34000 prucode.bin
GSS
but after I hit a key, it stops at 0x0000 PC and even after doing SS, it never increments to the next address.
I then noticed that there were other registers in the prudbg.h file that were set differently than the BBB PRU address map.
So I changed the following as well:
// register offsets
#define PRU_CTRL_REG 0x0000
#define PRU_STATUS_REG 0x0004
#define PRU_INTGPR_REG 0x0020

// sub-block base address (two address, one for each PRU)
#define PRU_INST_BASE {0x34000, 0x38000}
#define PRU_CTRL_BASE {0x22000, 0x24000}
#define PRU_DATA_BASE {0x0000, 0x2000}
then I reran prudebug
L 0x34000 led_driver.bin
I tried to do a DIS but I got the following
[0x0000] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0001] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0002] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0003] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0004] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0005] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0006] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0007] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0008] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x0009] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x000a] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x000b] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x000c] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x000d] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x000e] 0x00000000 ADD R0.b0, R0.b0, R0.b0
[0x000f] 0x00000000 ADD R0.b0, R0.b0, R0.b0
as if nothing was loaded in Instruction Memory
I then run gss and I got a segmentation fault.
Can anyone of you help ?
Thank you
As L seems to take an offset of the instruction memory base, I also tried with
L 0x0 prucode.bin but got the same results
Hi Chris,
I admit I did not test that change extensively, primarily because we already have a symbolic debugger for PRU code on the LinuxCNC environment (see post on debugging near the bottom of)
can you post the source led_driver.p so we can reproduce?
Actually, it does it with any pru file. I tried with the following led_blink.bin/p but it does not work either
//LOOP:
    MOV r2, 1<<21
    MOV r3, GPIO1 | GPIO_SETDATAOUT
    SBBO r2, r3, 0, 4

    MOV r0, 0x00f00000
DEL1:
    SUB r0, r0, 1
    QBNE DEL1, r0, 0

    MOV R2, 1<<21
    MOV r3, GPIO1 | GPIO_CLEARDATAOUT
    SBBO r2, r3, 0, 4

    MOV r0, 0x00f00000
DEL
I didn't do any testing on the AM335x processor. While I don't have a Beaglebone Black board, I do have an AM3358-based board. I won't have time to look at it until tonight but will look into the issue and let you know later today.
Christian Joly
// Send notification to Host for program completion
MOV r31.b0, PRU0_ARM_INTERRUPT+16
where does this interrupt arrive? do you have a matching C program on the ARM cpu which uses prussdrv? can you show that?
maybe a better way is to get the examples from working and only attach to the already working program
that would take out a few moving parts
Michael
The example works fine on the BBB and yes, I have the matching C program. It is an example that I extracted from the web. Here is the main routine
int main (void)
{
    unsigned int ret;
    int d;
    tpruss_intc_initdata pruss_intc_initdata = PRUSS_INTC_INITDATA;

    printf("\nINFO: Starting %s example.\r\n", "PRU_memAccess_DDR_PRUsharedRAM");

    /* Initialize the PRU */
    prussdrv_init ();

    /* Open PRU Interrupt */
    ret = prussdrv_open(PRU_EVTOUT_0);
    if (ret) {
        printf("prussdrv_open open failed\n");
        return (ret);
    }

    /* Get the interrupt initialized */
    prussdrv_pruintc_init(&pruss_intc_initdata);
    prussdrv_pru_reset(0);

    /* Initialize example */
    printf("\tINFO: Initializing example.\r\n");
    LOCAL_exampleInit(PRU_NUM);

    /* Execute example on PRU */
    printf("\tINFO: Executing example.\r\n");
    prussdrv_exec_program (PRU_NUM, "./prucode.bin");

    /* Wait until PRU0 has finished execution */
    printf("\tINFO: Waiting for HALT command.\r\n");
    prussdrv_pru_wait_event (PRU_EVTOUT_0);
    printf("\tINFO: PRU completed transfer.\r\n");
    prussdrv_pru_clear_event (PRU0_ARM_INTERRUPT);

    /* Check if example passed */
    if ( LOCAL_examplePassed(PRU_NUM) ) {
        printf("Example executed succesfully.\r\n");
    } else {
        printf("Example failed.\r\n");
    }

    /* Disable PRU and close memory mapping */
    prussdrv_pru_disable(PRU_NUM);
    prussdrv_exit ();
    munmap(ddrMem, 0x0FFFFFFF);
    close(mem_fd);

    return(0);
}
You mentioned you are using LinuxCNC. What environment is it?
What kernel version are you using? We've switched to the 3.8 series, so a device tree overlay needs to be loaded in that case.
--
LinuxCNC is an open-source CNC package. See .
The starting point for the latest and greatest on the Beaglebone ARM port is here:
for more detail, see the emc-developers mailing list archive
I am using Debian Wheezy. I have updated the kernel to get the overlay working so the PRU is working fine and I can run code on it.
I just would like to also have debugging capability, and I thought prudebug would do the trick, but it looks like it is not fully compatible with the BB Black yet.
Wheezy is a distro; could you post the output of 'uname -a' and the top lines of 'dmesg'?
welcome to open source - it's ToolTime again - adapt the tools and only then start digging ;)
-m
Linux arm 3.8.13-bone18.1 #4 SMP Sat Jun 1 16:49:02 PDT 2013 armv7l GNU/Linux
[ 0.000000] Booting Linux on physical CPU 0x0
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.8.13-bone18.1 (christy@christy-desktop) (gcc version 4.7.3 20130226 (prerelease) (crosstool-NG linaro-1.13.1-4.7-2013.03-20130313 - Linaro GCC 2013.03) ) #4 SMP Sat Jun 1 16:49:02 PDT 2013
[0a50740, node_mem_map c0aca000
[ 0.000000] Normal zone: 1024 pages used for memmap
[ 0.000000] Normal zone: 0 pages reserved
[ 0.000000] Normal zone: 129792 pages, LIFO batch:31
[ 0.000000] AM335X ES1.0 (neon )
[ 0.000000] PERCPU: Embedded 9 pages/cpu @c0ed9000 s14080 r8192 d14592 u36864
[ 0.000000] pcpu-alloc: s14080 r8192 d14592 u36864 alloc=9*4096
[ 0.000000] pcpu-alloc: [0] 0
[ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 129792
[ 0.000000] Kernel command line: console=ttyO0,115200n8 root=UUID=b14a0a23-1b38-42cb-9337-9799fc8980ee ro rootfstype=ext4 rootwait fixrtc
[4048k/504048k available, 20240000000 - 0xbfe00000 ( 14 MB)
[ 0.000000] .text : 0xc0008000 - 0xc08a8570 (8834 kB)
[ 0.000000] .init : 0xc08a9000 - 0xc09d0700 (1182 kB)
[ 0.000000] .data : 0xc09d2000 - 0xc0a535b0 ( 518 kB)
[ 0.000000] .bss : 0xc0a535b0 - 0xc0ac9700 ( 473 kB)
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=1.
[ 0.000000] NR_IRQS:16 nr_irqs:16 16
[] OMAP clocksource: GPTIMER2 at 24000000 Hz
[ 0.000000] Console: colour dummy device 80x30
Hi Steven,
I looked at the code and tried to make it work for the AM335X proc but I have run into a number of issues.
1 - I have updated the header file to match the processor memory map
2 - I have modified some of the code in prudbg.c, as some offsets were hardcoded via cmd.c
3 - I also modified the cmd.c soft_reset function. I believe you wanted to do a bitwise NOT and not an arithmetic NOT. But I am getting a segmentation fault when I run the reset command. Here is the code. The segmentation fault happens when I try to set the control register to a new value with bit 0 cleared.
void cmd_soft_reset()
{
    unsigned int ctrl_reg;

    ctrl_reg = pru[pru_ctrl_base[pru_num] + PRU_CTRL_REG];
    ctrl_reg &= ~PRU_REG_SOFT_RESET;
    printf("PRU%u 0x%08X 0x%08X\n", pru_num, pru_ctrl_base[pru_num], ctrl_reg);
    pru[pru_ctrl_base[pru_num] + PRU_CTRL_REG] = ctrl_reg;

    printf("PRU%u reset.\n", pru_num);
}
Any idea what is happening ? Have you had time to look at the code ?
Thanks,
Christ. | http://e2e.ti.com/support/arm/sitara_arm/f/791/p/229812/937205.aspx | CC-MAIN-2014-42 | refinedweb | 1,501 | 67.55 |
#include <deal.II/grid/tria_accessor.h>
This class is a specialization of
TriaAccessor<structdim, dim, spacedim> for the case that
structdim is zero and
dim is one. This class represents vertices in a one-dimensional triangulation that is embedded in a space of dimensionality
spacedim (for
spacedim==dim==1 the triangulation represents a domain in \({\mathbb R}^\text{dim}\), for
spacedim>dim==1 the triangulation is of a manifold embedded in a higher dimensional space).
The current specialization of the TriaAccessor<0,dim,spacedim> class for vertices of a one-dimensional triangulation exists since in the
dim == 1 case vertices are also faces.
Definition at line 2242 of file tria_accessor.h.
Pointer to internal data.
Definition at line 2269 of file tria_accessor.h.
Whether the vertex represented here is at the left end of the domain, the right end, or in the interior.
Definition at line 2275 of file tria_accessor.h.
Constructor.
Since there is no mapping from vertices to cells, an accessor object for a point has no way to figure out whether it is at the boundary of the domain or not. Consequently, the second argument must be passed by the object that generates this accessor – e.g. a 1d cell that can figure out whether its left or right vertex are at the boundary.
The third argument is the global index of the vertex we point to.
Constructor. This constructor exists in order to maintain interface compatibility with the other accessor classes. However, it doesn't do anything useful here and so may not actually be called.
Constructor. Should never be called and thus produces an error.
Constructor. Should never be called and thus produces an error.
Copy operator. Since this is only called from iterators, do not return anything, since the iterator will return itself.
Return the state of the iterator. Since an iterator to points can not be incremented or decremented, its state remains constant, and in particular equal to IteratorState::valid.
Level of this object. Vertices have no level, so this function always returns zero.
Index of this object. Returns the global index of the vertex this object points to.
Return a reference to the triangulation which the object pointed to by this class belongs to.
This operator advances the iterator to the next element. For points, this operation is not defined, so you can't iterate over point iterators.
This operator moves the iterator to the previous element. For points, this operation is not defined, so you can't iterate over point iterators.
Compare for equality.
Compare for inequality.
Comparison operator for accessors. This operator is used when comparing iterators into objects of a triangulation, for example when putting them into a
std::map.
This operator simply compares the global index of the vertex the current object points to.
Return the global index of the i-th vertex of the current object. If i is zero, this returns the index of the current point to which this object refers. Otherwise, it throws an exception.
Note that the returned value is only the index of the geometrical vertex. It has nothing to do with possible degrees of freedom associated with it. For this, see the
DoFAccessor::vertex_dof_index functions.
Return a reference to the
ith vertex. If i is zero, this returns a reference to the current point to which this object refers. Otherwise, it throws an exception.
Return the center of this object, which of course coincides with the location of the vertex this object refers to.
Pointer to the
ith line bounding this object. Will point to an invalid object.
Line index of the
ith line bounding this object.
Implemented only for
structdim>1, otherwise an exception generated.
Pointer to the
ith quad bounding this object.
Quad index of the
ith quad bounding this object.
Implemented only for
structdim>2, otherwise an exception generated.
Return whether this point is at the boundary of the one-dimensional triangulation we deal with here.
Return the boundary indicator of this object. The convention for one dimensional triangulations is that left end vertices (of each line segment from which the triangulation may be constructed) have boundary indicator zero, and right end vertices have boundary indicator one, unless explicitly set differently.
If the return value is the special value numbers::internal_face_boundary_id, then this object is in the interior of the domain.
Return a constant reference to the manifold object used for this object.
Return the manifold indicator of this object.
Always return false.
Always return false.
Always return false.
Always return false.
Test whether the object has children. Always false.
Return the number of immediate children of this object.This is always zero in dimension 0.
Compute and return the number of active descendants of this objects. Always zero.
Return the number of times that this object is refined. Always 0.
Return an invalid unsigned integer.
Return an invalid object.
Return an invalid object.
Always return no refinement.
Returns -1.
Returns -1.
Set the boundary indicator. The same applies as for the
boundary_id() function.
Set the manifold indicator of this vertex. This does nothing so far since manifolds are only used to refine and map objects, but vertices are not refined and the mapping is trivial. This function is here only to allow dimension independent programming.
Set the boundary indicator of this object and all of its lower-dimensional sub-objects. Since this object only represents a single vertex, there are no lower-dimensional objects and this function is equivalent to calling set_boundary_id() with the same argument.
Return whether the vertex pointed to here is used.
Reference cell type of the current object.
Number of vertices.
Number of lines.
Return an object that can be thought of as an array containing all indices from zero to n_vertices().
Dimension of the space the object represented by this accessor lives in. For example, if this accessor represents a quad that is part of a two-dimensional surface in four-dimensional space, then this value is four.
Definition at line 2250 of file tria_accessor.h.
Dimensionality of the object that the thing represented by this accessor is part of. For example, if this accessor represents a line that is part of a hexahedron, then this value will be three.
Definition at line 2257 of file tria_accessor.h.
Dimensionality of the current object represented by this accessor. For example, if it is line (irrespective of whether it is part of a quad or hex, and what dimension we are in), then this value equals 1.
Definition at line 2264 of file tria_accessor.h.
Pointer to the triangulation we operate on.
Definition at line 2747 of file tria_accessor.h.
Whether this is a left end, right end, or interior vertex. This information is provided by the cell at the time of creation.
Definition at line 2753 of file tria_accessor.h.
The global vertex index of the vertex this object corresponds to.
Definition at line 2758 of file tria_accessor.h. | https://dealii.org/developer/doxygen/deal.II/classTriaAccessor_3_010_00_011_00_01spacedim_01_4.html | CC-MAIN-2021-10 | refinedweb | 1,155 | 51.34 |
We all love SQL right? No? Well sometimes a good SQL query is the best approach but most of the time it's the same CRUD operations that you need to carry out, Create, Read, Update, Delete, on each new entity. With an ORM, an Object Relational Mapper, you are able to define what the structure in your database should look like, using code. Additionally, you can use code for your queries as well. The main ORM for .Net and .Net Core is called Entity Framework and that's what we are covering in this article.
I just wanted to say a very important thing. A tool like an ORM should NEVER replace learning SQL. An ORM is there to make your life easier, so when you end up writing SQL it's for the important things, like a reporting query or a query that needs to be really performant. The idea is for the ORM to take care of simpler SQL, like creating tables and doing simple inserts. If you use this without a decent knowledge of SQL then please have a look here and try to grasp the basics first:
TLDR; This article is somewhat lengthy but it starts from the beginning to teach you Entity Framework and covers a lot of really great topics, worth the read.
In this article we will cover:
- WHY an ORM, we always need to ask ourselves why we use something. ORM can really shine if you have a lot of simple interaction to a database. You can really speed up your operation using it.
- WHAT it can help you with.
- Install and Set up
- A CRUD Demo. We will go through reading data, creating, updating and deleting data
Resources
Database providers
You can work with quite a number of different databases using Entity Framework. The whole idea is to have an agnostic approach so you, in theory, could replace one database for another and your code remains the same. We all know we almost never do that but it's a nice idea.
Beginner EF Core article
This article is partly based on this one even if we take it a step further
Using EF Core in ASP .Net MVC
Eager loading
We cover the basics of how this works in the article but there is always more to learn.
Overview page on everything EF Core
Why ORM
Using an ORM is about being faster, more productive and about knowing exactly what goes into a database.
So when do I use it, always or?
Well, for most simple applications it's definitely good to use. For applications that need really performant queries you can definitely still use it, but you need to be more observant of what SQL your ORM produces. Sometimes it's good enough and sometimes you need to write those queries by hand using SQL. Typically, reporting queries are something I personally don't use ORMs for, as they tend to be complex and hard to express in code. But everyone is different. I've seen even complex queries being authored in code.
The ORM landscape
There is more than one ORM choice for .Net. Entity Framework is the best-known one, but there are others. You have to decide which one fits your project.
Linq 2 db
Offers a similar experience to Entity Framework if you look at the syntax alone. Some say the syntax is close to what you get in actual SQL
Dapper
It has been described as an Object Mapper and a Micro ORM.
NHibernate
.Net port of Hibernate. One of the oldest ORMs out there.
There are more ORMs out there but the three above are well-known choices.
What
Most ORMs let you define the table structure in code: you map a class so that it corresponds to a table, and the columns are simply properties on the class. Depending on the ORM, several approaches are possible:
- Schema first: in this scenario you define a schema of what tables you have and how they relate (1-1, 1-Many, Many-to-Many and so on). You end up generating code from the schema.
- Code first: in this approach, you define the code first. Every table corresponds to a class and you can express how everything relates in code. Your ORM will then take a look at your code and generate structural SQL from it.
Migrations
A lot of ORMs come with a concept called migrations. A migration is simply a piece of script that either alters the structure of the database or runs a piece of SQL that affects the data, for example seeding the database with some initial data. The idea is that every change to the database should be a small transactional change captured in a migration. That migration can then be applied to the database, altering it in the desired way. For example, adding a Customer table to the database would be a migration that, when applied, creates the table in the database. A migration can be expressed either as SQL or in code.
Install and Set up
To get started with Entity Framework we need a couple of NuGet packages but also a project that we can install the NuGet packages to. So for this exercise, we will do the following:
- Create a solution
- Scaffold a Console project and add a reference to the solution
- Install the needed NuGet packages to the Console project
Create a solution
This is quite simply done. First, we need a directory. So create a directory, you can choose the name yourself but here is an example.
mkdir demo
Then we need to place ourselves in the directory like so:
cd demo
Scaffold a Console project
Next up we need to create our Console project. Again you can choose the name, but we go with App. Type the following:
dotnet new console -o App
This will create a new project of type console with the name App.
Lastly we add this project to the solution like so:
dotnet sln add App/App.csproj
Install the needed NuGet packages
For this we will install the core library for Entity Framework but also support for the database type SQLite. Note, there is support for many different databases; have a look at the full list of supported databases here:
SQLite is a very simple database that just stores structure and data in a file on your hard drive.
But I'm working with a real database, what about me, will I benefit from this article?
Yes, what we are showing is generic knowledge that is widely applicable regardless of database type.
Ok then let's first navigate into our Console app directory, like so:
cd App
Then install the needed NuGet libraries:
dotnet add package Microsoft.EntityFrameworkCore.Sqlite
dotnet add package Microsoft.EntityFrameworkCore.Design
This will add references to your project. Open up App.csproj and you should find something like this:
<ItemGroup>
  <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.2.6" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="2.2.6" />
</ItemGroup>
Now we need to actually install the libraries, we do that with the following command:
dotnet restore
A CRUD Demo
We will show how to do the full CRUD, Create, Read, Update and Delete.
Here we will attempt the following:
- Create the database
- Create a migration that represents the structure of the database and then apply it to create the database
- Read from the database
- Write to the database
- Seed our database with initial data
Create the database
First off we need a Database, so let's create one. We will create a file called Database.cs with the following content:
// Database.cs
using Microsoft.EntityFrameworkCore;
using System;
using System.Collections.Generic;

namespace App
{
    public class DatabaseContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
        public DbSet<OrderItem> OrderItems { get; set; }
        public DbSet<Order> Orders { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseSqlite("Data Source=database.db");
        }
    }

    public class Order
    {
        public int OrderId { get; set; }
        public DateTime? Created { get; set; }
        public ICollection<OrderItem> Items { get; set; }
    }

    public class OrderItem
    {
        public int OrderItemId { get; set; }
        public int Quantity { get; set; }
        public virtual Product Product { get; set; }
    }

    public class Product
    {
        public int ProductId { get; set; }
        public double Price { get; set; }
        public string Description { get; set; }
    }
}
As you can see from the above code we have the following classes:
- Order, this is a class representing orders.
- OrderItem, an Order has many OrderItems, and each OrderItem has a Quantity property and a reference to a Product.
- Product, this represents the Product we are trying to order. It has information on it like Price and Description.
Let's comment on some interesting constructs in the code.
1-Many
We are expressing a 1-Many relationship with the following construct on the Order class:
public ICollection<OrderItem> Items { get; set; }
Above we are saying that we have a list of OrderItems on the Order.
Foreign key
We are also expressing another database concept, namely the Foreign key. In the OrderItem entity we are saying that we have a reference to a Product. In code, we write this as:
public virtual Product Product { get; set; }
DbContext and DbSet
Let's first comment on DbContext. When we want a new Database we should inherit from this class, like so:
public class DatabaseContext : DbContext
DbSet represents a table in a Database. It's a generic that takes a type as a template argument, like so:
public DbSet<OrderItem> OrderItems { get; set; }
Create a migration
Now we have saved our file Database.cs. It's time to create the database. To do that we need to do two things:
Generate a migration, this takes a snapshot of the current state of your code and diff this to any previous snapshot. If it doesn't have a previous snapshot, generating a migration will simply create the initial migration.
Apply the migration, this will run the migration. Depending on the content of the migration it will either, create a database, affect the database structure or alter the data.
Generate a migration
Let's create our migration with the following command:
dotnet ef migrations add InitialCreate
The last argument is the name of the migration. We can call it what we want, but it's good to give it a descriptive name like InitialCreate.
Running the command generates the migration. The terminal output is nice enough to tell us how to undo what we just did, with the command ef migrations remove.
This created some files for us in a Migrations folder. We got our migration InitialCreate, with the name prepended by a timestamp; this is so Entity Framework knows what to run and in what order. We can also see that we have two versions of this file, a .cs and a Designer.cs file. We only care about the first one. Let's have a look at it:
using System;
using Microsoft.EntityFrameworkCore.Migrations;

namespace App.Migrations
{
    public partial class InitialCreate : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.CreateTable(
                name: "Orders",
                columns: table => new
                {
                    OrderId = table.Column<int>(nullable: false)
                        .Annotation("Sqlite:Autoincrement", true),
                    Created = table.Column<DateTime>(nullable: true)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_Orders", x => x.OrderId);
                });

            migrationBuilder.CreateTable(
                name: "Products",
                columns: table => new
                {
                    ProductId = table.Column<int>(nullable: false)
                        .Annotation("Sqlite:Autoincrement", true),
                    Price = table.Column<double>(nullable: false),
                    Description = table.Column<string>(nullable: true)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_Products", x => x.ProductId);
                });

            migrationBuilder.CreateTable(
                name: "OrderItems",
                columns: table => new
                {
                    OrderItemId = table.Column<int>(nullable: false)
                        .Annotation("Sqlite:Autoincrement", true),
                    Quantity = table.Column<int>(nullable: false),
                    ProductId = table.Column<int>(nullable: true),
                    OrderId = table.Column<int>(nullable: true)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_OrderItems", x => x.OrderItemId);
                    table.ForeignKey(
                        name: "FK_OrderItems_Orders_OrderId",
                        column: x => x.OrderId,
                        principalTable: "Orders",
                        principalColumn: "OrderId",
                        onDelete: ReferentialAction.Restrict);
                    table.ForeignKey(
                        name: "FK_OrderItems_Products_ProductId",
                        column: x => x.ProductId,
                        principalTable: "Products",
                        principalColumn: "ProductId",
                        onDelete: ReferentialAction.Restrict);
                });

            migrationBuilder.CreateIndex(
                name: "IX_OrderItems_OrderId",
                table: "OrderItems",
                column: "OrderId");

            migrationBuilder.CreateIndex(
                name: "IX_OrderItems_ProductId",
                table: "OrderItems",
                column: "ProductId");
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.DropTable(
                name: "OrderItems");

            migrationBuilder.DropTable(
                name: "Orders");

            migrationBuilder.DropTable(
                name: "Products");
        }
    }
}
The first thing we see is that we inherit from the class Migration. The second thing is that we have two methods, Up() and Down(). Up() is run when we want to apply something; Down() is run when we want to undo the migration. Looking at our Up() method we can see that we invoke CreateTable() once for each of the tables Order, OrderItem and Product. We can also see that it defines all the Foreign keys needed. The Down() method calls DropTable() to undo our table creation.
Apply the Migration
Ok, we have a Migration, let's apply it. We do that with the following command:
dotnet ef database update
This will first create the database if needed and then apply the migration.
We can see in our file structure that we got a new file, database.db. We can either use a SQLite client, or why not write some code to connect to it? :)
Read from the database
Ok, now we want to see if we can connect to our database and maybe read out some data. Open up Program.cs, go to the method Main() and add the following:
using (var db = new DatabaseContext()) { }
This will establish a connection to our database. Reading from the database then looks like this:
using (var db = new DatabaseContext())
{
    var orders = db.Orders;
    foreach (var order in orders)
    {
        Console.WriteLine("Order: {0}", order.Created);
    }
}
Shall we try it out?
Ok, we got no orders :(.
Well, this is expected, we didn't put anything in the database. How about we change that?
Write to the Database
Ok, we know how to connect to the Database. What about writing to it?
Well, to be able to create an Order, we need a little data first, in the form of at least one Product and one OrderItem. If you want to save something to the database you need to call db.SaveChanges().
We need to take all of this in steps because there are some moving parts.
Creating a Product
First, we will create a Product. Let's add the following code:
using (var db = new DatabaseContext())
{
    var product = new Product()
    {
        Price = 100,
        Description = "Movie"
    };
    db.Products.Add(product);
    db.SaveChanges();

    foreach (var p in db.Products)
    {
        Console.WriteLine("{0} {1} {2}", p.ProductId, p.Description, p.Price);
    }
}
The above will create our Product, and by invoking db.SaveChanges() we make sure to persist it to the database. Running the code prints the product we just created.
OrderItem
Ok, that bit works. What about creating an OrderItem? Well, that's just as easy; we just need the following code:
using (var db = new DatabaseContext())
{
    var product = db.Products.SingleOrDefault();
    if (product != null)
    {
        var item = new OrderItem
        {
            Quantity = 1,
            Product = product
        };
        db.OrderItems.Add(item);
        db.SaveChanges();

        Console.WriteLine("{0} {1} Product: {2}", item.OrderItemId, item.Quantity, item.Product.Description);
    }
}
Let's try to highlight the important parts.
Above we can see that we first read out a product from the database. The next thing we do is assign that same product to the Product property on the OrderItem. Then we save it all by adding our OrderItem to db.OrderItems, followed by calling db.SaveChanges().
Create an Order
By now we have a Product and an OrderItem in the database. So how do we go about creating an Order containing those two entities?
Well, creating an Order is not just creating an Order; it's creating an Order AND associating the OrderItem with the Order.
The association part can be done in two different ways:
- Add the OrderItem to order.Items
- Add a foreign key to our OrderItem and assign our Order's id to it
Both of the above solutions require us to know a bit more about Entity Framework.
Load related entities
Let's start with the first approach. For that, we need to know how to load related entities.
Why?
Well, when you have an Order instance, its Items will be null unless we tell it explicitly to be filled with something. For this approach to work, we need it to be at least an empty list so we can add our OrderItem.
Ok, think you better show me.
Sure, have a look at the following code below:
var item = db.OrderItems.SingleOrDefault();
var order = new Order() { Created = DateTime.Now };
db.Orders.Add(order);
db.SaveChanges();
This creates an Order. What about adding our item? Well, we have a problem:
Were we to try to add our item right after saving our Order above, order.Items would be null and we would get a runtime exception. To solve that we need to use the method Include(). Include() takes a lambda where we point out what we want to load; in this case, we want to load the property Items on our Order.
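The snippet being run here was a screenshot in the original post. Under the assumption that we reuse the entities created in the earlier steps, the eager-loading version looks roughly like this:

```csharp
// Requires `using Microsoft.EntityFrameworkCore;` for the Include() extension method.
using (var db = new DatabaseContext())
{
    var item = db.OrderItems.SingleOrDefault();

    // Include() eagerly loads the Items collection so it isn't null
    var order = db.Orders.Include(o => o.Items).SingleOrDefault();
    order.Items.Add(item);
    db.SaveChanges();
}
```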
Running the code with Items eagerly loaded, order.Items starts out as an empty collection and we can add our OrderItem without the code crashing.
Add a foreign key to OrderItem
Behind the scenes, we have already got a foreign key on OrderItem; we can see that in our migration, which defines an OrderId column and the FK_OrderItems_Orders_OrderId constraint. Our problem right now is that it doesn't exist as a property on our OrderItem, so how do we solve that?
Well, we just add it to the class definition:
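The added property was shown as a screenshot in the original post. A minimal sketch of the change, with the nullable int matching the migration's OrderId = table.Column&lt;int&gt;(nullable: true):

```csharp
public class OrderItem
{
    public int OrderItemId { get; set; }
    public int Quantity { get; set; }
    public virtual Product Product { get; set; }
    public int? OrderId { get; set; } // foreign key to Order; nullable to match the migration
}
```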
Then, because we have an existing Order that's associated with an OrderItem, item.OrderId is actually populated when we read the item back.
Had we wanted to make the connection between the Order and the OrderItem, and there had not already been one, we could easily have done so with the following code:
using (var db = new DatabaseContext())
{
    var order = db.Orders.SingleOrDefault();
    var item = db.OrderItems.SingleOrDefault();
    item.OrderId = order.OrderId;
    db.SaveChanges();
}
Update
Updating is as easy as following the second creation scenario we did for an Order: read up an entity, set a property and call db.SaveChanges(). Like so:
using (var db = new DatabaseContext())
{
    var item = db.OrderItems.SingleOrDefault();
    item.Quantity++;
    db.SaveChanges();
}
Deletion
Deleting is as easy as removing something from a list. If we want to delete a Product we just need to do the following:
using (var db = new DatabaseContext())
{
    var product = db.Products.SingleOrDefault();
    db.Products.Remove(product);
    db.SaveChanges();
}
It should be noted that if your Product is part of an OrderItem, you need to remove that connection first, like so:
using (var db = new DatabaseContext())
{
    var item = db.OrderItems.Include(i => i.Product).SingleOrDefault();
    item.Product = null;
    db.SaveChanges();

    var product = db.Products.SingleOrDefault();
    db.Products.Remove(product);
    db.SaveChanges();
}
Summary
This is where we stop. We covered a ton in this article, starting from absolute zero.
We learned:
- What an ORM is
- How to define our database structure
- How to create a migration and apply it
- How to read data
- How to create data
- How to update data
- How to delete data
- How to load related entities
- How foreign keys work
That's a lot for one article. Hopefully, you are now so interested that you want to learn more. Have a look at the Resources section if you want to dig into more advanced concepts and into dealing with different kinds of databases.
Posted on Jun 29 by:
Chris Noring Cloud Developer Advocate at Microsoft, Google Developer Expert
Discussion
Good article, but given how you started about different ORMs I was hoping you would mention some other than Entity Framework, like everyone else is doing. Maybe github.com/linq2db/linq2db, which is in some ways miles better than EF.
Hi Mladen. I did not know about that one. Will have a look, thanks :) I've added a list of links to other ORMs. I will need to make a separate article for it though.
It would be great if you could make a separate article about Linq2DB as they desperately need some marketing. I use it in the production for years and just love it! If you need help just let me know.
Why aren't you authoring an article instead of asking others to do so?
I would like to, but since I'm a terrible writer it would be just a huge pile of you know what..
Totally disagree with the statement "an ORM is knowing what goes into the database". This is exactly the reason why many avoid using ORMs: you have zero visibility when you save objects in C# and the ORM does its magic on how and when to save the actual data.
maybe I didn't explain this well enough. I come from a background where no one knew what lived in a database. Things like functions, triggers, and other things had just been added over time. The way I see it, using an ORM is about defining your database in code and thereby you can have it under version control, at least the structure. I agree that ORMs are tricky. I have been struggling myself over the years with concepts such as tracking and having to write custom SQL cause what the ORM generated was just slow. I guess the alternative is stored procedures. The way I see it there are no silver bullets, just different types of problems. The speed you get initially becomes complexity later on.
The more I read, the more convinced I am, that a friend's joke about EF is totally true: People use EF, because they don't want to learn SQL and are afraid to use it.
As I wrote in my article, it's important to know what SQL EF generates, and for reporting queries, for example, you need to write your own SQL. It's important to know when to use a tool and when to rely on SQL. An ORM is NOT a replacement for SQL; it just abstracts away basic SQL.
Great tutorial!
Additionally, it was necessary:
1) to create a solution: 'dotnet new sln'
2) to install the EF tool: 'dotnet tool install --global dotnet-ef'
Implementing React Native Responsive Design Part 1 : Limiting and Scaling
Phones and tablets come in all shapes and sizes. Now that Mac, Windows and tvOS support is in the works, the variation in screen geometries will only grow. On the web we have media queries and responsive layouts via Cascading Style Sheets. What tools does a React Native developer have to provide similar layout flexibility? As usual, the answer is "that depends". What does your app do? There is no one-size-fits-all solution. In this series of posts we will introduce the tools React Native provides and see how to apply them to develop apps that notice and take advantage of the available screen real estate.
React Native gives us device independent pixels (DIP) which helps to abstract away the varying screen densities, but doesn’t help with the wide variety of screen sizes. Some apps have fairly simple layouts that scale nicely from an iPhone SE at 568x320 pixels to a 12in. iPad Pro at 1024x1366. But those instances are pretty rare. Even in those cases you sometimes want to improve the user experience by taking better advantage of the extra screen real estate. How can we build UIs that adapt to the device screen size like we would on the web? Here are some strategies for building the app of your dreams.
Limiting screen size support
For some apps, simpler is better. You can choose to not support tablets and even remove rotation support. Removing rotation support is particularly user unfriendly so only choose that option if you have a really good reason to do so. For iOS, in XCode you can uncheck the iPad checkbox in General/Deployment Info and uncheck rotation options if you choose.
For Android, in the AndroidManifest.xml file you can find the supports-screens section and set the largest screen sizes to false, for example:

<supports-screens android:largeScreens="false" android:xlargeScreens="false" />

To remove rotation support, just find the main activity and lock its orientation:

<activity ... android:screenOrientation="portrait" ... >
Even if this option makes the most sense for your app, keep reading. Later we will cover font scaling, which you should consider supporting for your users with low vision.
Screen scaling
Some apps can fairly easily support tablet screen sizes using flexbox or percents to partition the screen into sections. This works well if the same layout works for all screen sizes. Scaling your app in this way requires choosing image resolutions and font sizes to complement the screen size of the device.
Device Independent Pixels
Screens come in a dizzying array of densities from the original iPhone at 163 pixels per inch to modern phones with densities greater than 460ppi. React Native helps us out with Device Independent Pixels which smooth out most of those size differences for us. If you define a button with a height of 80, it will be approximately half an inch tall on all devices. In a related manner, for local images, the React Native Image component will even correctly choose between image.png, image@2x.png and image@3x.png to match the device’s resolution. For local images it may be useful to have small, medium and large versions of key images to handle differing screen sizes with images that are sharp for their context. As usual, there is a tradeoff here. Adding more images gives a sharper picture but also increases the download size of your app.
Image scaling
One really nice feature of the React Native Image component is that you can pass an array of source image objects to the Image component. This array of objects includes source, width and height. The component will use the size of the container to pick the image from the list that is the best match. If you prefer, you can also take complete control of this process by measuring the width of the container on layout and using PixelRatio.getPixelSizeForLayoutSize to translate DIPs into physical device pixels. Here's a snippet that demonstrates this technique.
import dogSmall from './assets/dog_640.png'
import dogBig from './assets/dog_1280.png'
...
const images = [
  Image.resolveAssetSource(dogSmall),
  Image.resolveAssetSource(dogBig),
]
...
<Image style={styles.image} source={images} />
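For intuition, the DIP-to-physical-pixel translation mentioned above is simple multiplication by the device's pixel ratio. In this sketch PixelRatio is stubbed with a fixed ratio so the math runs outside React Native; on a device you would use the real PixelRatio module.

```javascript
// Stub standing in for React Native's PixelRatio on a 3x device.
const PixelRatio = { get: () => 3 };

// Mirrors what PixelRatio.getPixelSizeForLayoutSize does: DIPs -> device pixels.
function getPixelSizeForLayoutSize(layoutSize) {
  return Math.round(layoutSize * PixelRatio.get());
}

console.log(getPixelSizeForLayoutSize(100)); // 300 physical pixels
```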
Font scaling
Font sizes are also automatically scaled based on the device's pixel density to keep font sizes similar across devices. However, if your app is using the same layout scaled across all device sizes, you will likely need to do some amount of adaptive font sizing. If you have a layout section that is 20% of the device's height, the text within it will look crowded on smaller devices and sparse on larger ones. Another part of the font size equation is whether the user has set a text size preference on their device. React Native supports the device text size preference out of the box, so your text may look bigger or smaller to some users than you expected. React Native provides PixelRatio.getFontScale to give you an idea of your user's preference. If getFontScale matches the number reported by PixelRatio.get, the user has no preference set. Setting the text size relative to the window width gives you text that wraps to roughly the same number of lines on all screen sizes, but may not always be what you want. One way to handle that is the following.
function getFontSizeByWindowWidth(windowWidth, fontSize) {
  const baseWidth = 320; // width of smallest iPhone
  return PixelRatio.roundToNearestPixel(fontSize * (windowWidth / baseWidth));
}

const fontSize = getFontSizeByWindowWidth(window.width, 14)
The biggest gotcha with scaling in proportion to window width is orientation. If the user rotates their device the window width will change and the font size will recalculate. This is usually bad. A small update to the sample code can take care of this by always using the smaller of width and height so the text size stays consistent during an orientation change. Here's a code snippet and screenshots to demonstrate this refinement.
function getOrientation(window) {
  return (window.width < window.height) ? 'portrait' : 'landscape'
}

function getFontSizeByWindowWidth(window, fontSize) {
  const baseWidth = 320; // width of smallest iPhone
  const width = (getOrientation(window) == 'portrait') ? window.width : window.height
  return PixelRatio.roundToNearestPixel(fontSize * (width / baseWidth));
}

const fontSize = getFontSizeByWindowWidth(window, 14)
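A quick way to convince yourself the refinement works is to run the sizing math with portrait and landscape window objects. PixelRatio.roundToNearestPixel is stubbed here (with an assumed 2x pixel ratio) so the snippet runs outside React Native:

```javascript
// Stub for React Native's PixelRatio.roundToNearestPixel on a 2x device.
const PixelRatio = { roundToNearestPixel: (n) => Math.round(n * 2) / 2 };

function getOrientation(window) {
  return (window.width < window.height) ? 'portrait' : 'landscape';
}

function getFontSizeByWindowWidth(window, fontSize) {
  const baseWidth = 320; // width of smallest iPhone
  const width = (getOrientation(window) === 'portrait') ? window.width : window.height;
  return PixelRatio.roundToNearestPixel(fontSize * (width / baseWidth));
}

// Same device before and after rotation -> same font size.
console.log(getFontSizeByWindowWidth({ width: 320, height: 568 }, 14)); // 14
console.log(getFontSizeByWindowWidth({ width: 568, height: 320 }, 14)); // 14
```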
Putting this all together gives us an app that scales nicely no matter the screen size of the device. Stay tuned for the next episode where we will dive into building layouts that change based on screen size and orientation. If you would like to see these concepts in situ, here's a screenshot and a link to an Expo Snack. It is a blank starter template with a few tweaks to demonstrate the concepts we've covered so far.
| https://bendyworks.com/blog/implementing-react-native-responsive-design-part-1/index | CC-MAIN-2021-31 | refinedweb | 1,101 | 64.71 |
The MouseEvent of Qlabel doesn't work, after an advanced click on the label.
I'm using PyQt5 and Qt Designer on Ubuntu to develop an application and am confronted with a problem with QLabel mouse events. To illustrate the problem, I made a simple UI example (Fig. 1) in Qt Designer and then converted the .ui file to Python for further development.
Fig. 1 Making a simple example in Qt Designer.
The .ui file is here and the modified code is below, where ui_test is generated by the pyuic5 command:
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow
from PyQt5.QtCore import QEvent
from ui_test import Ui_MainWindow

class MyMainWindow(QMainWindow, Ui_MainWindow):
    def __init__(self):
        super(MyMainWindow, self).__init__()
        self.setupUi(self)
        # Connect slots:
        self.pushButton.clicked.connect(self.btn_callback)

    def pressed_callback(self, event):
        # Just print the pixel location of the cursor in the textLabel
        self.textLabel.setText("(px: " + str(event.pos().x()) + " , " +
                               "py: " + str(event.pos().y()) + ")")
        # print((event.pos().x(), event.pos().y()))

    def btn_callback(self):
        # Push to enable printing the pixel location of the mouse cursor on the mainLabel
        self.mainLabel.mousePressEvent = self.pressed_callback

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = MyMainWindow()
    win.show()
    sys.exit(app.exec_())
I can click the button to enable printing the pixel location of the mouse cursor when I press the mainLabel area (Fig. 2). The app works well if I press the button first. But self.mainLabel.mousePressEvent won't work at all if I click on the mainLabel before clicking the button; after that, nothing happens when I press the mainLabel. How can I fix it?
- SGaist Lifetime Qt Champion last edited by
Hi and welcome to devnet,
@ZhongQL said in The MouseEvent of Qlabel doesn't work, after an advanced click on the label.:
def btn_callback(self):
# Push to enable print the pixel loction of mouse cursor on the mainLabel
self.mainLabel.mousePressEvent = self.pressed_callback
Why are you replacing mousePressEvent with pressed_callback? You are nuking the original functionality by doing that.
@SGaist I want to change the reaction to the mouse event by pressing a button. What's the right way to do that?
- SGaist Lifetime Qt Champion last edited by
What exactly do you want to happen when clicking on that QLabel ?
The proper way is to create a subclass of QLabel and re-implement mousePressEvent however you can do things a bit differently since you are using Python. | https://forum.qt.io/topic/133183/the-mouseevent-of-qlabel-doesn-t-work-after-an-advanced-click-on-the-label | CC-MAIN-2022-05 | refinedweb | 406 | 50.12 |
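A sketch of that subclass approach (hypothetical names; in Designer you would promote mainLabel to this class, or create the label in code):

```python
from PyQt5.QtCore import QPoint, pyqtSignal
from PyQt5.QtWidgets import QLabel

class ClickableLabel(QLabel):
    """QLabel subclass that reports clicks through a signal."""
    clicked = pyqtSignal(QPoint)  # emitted with the click position

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.reporting_enabled = False  # toggled by the push button

    def mousePressEvent(self, event):
        if self.reporting_enabled:
            self.clicked.emit(event.pos())
        super().mousePressEvent(event)  # keep the default behaviour
```

The main window would then connect mainLabel.clicked to a slot that updates textLabel, and btn_callback would simply set mainLabel.reporting_enabled = True instead of swapping out the bound method at runtime.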
Am 17.10.2010 19:51, schrieb TomF:
> On 2010-10-17 10:21:36 -0700, Paul Kölle said:
>> Am 17.10.2010 13:48, schrieb Steven D'Aprano:
>>> On Sun, 17 Oct 2010 03:58:21 -0700, Yingjie Lan wrote:
>>>
>>>> Hi,
>>>>
>>>> I played with an example related to namespaces/scoping. The result is a
>>>> little confusing:
>>>
>>> [snip example of UnboundLocalError]
>>>
>>> Python's scoping rules are such that if you assign to a variable inside a
>>> function, it is treated as a local. In your function, you do this:
>>>
>>> def f():
>>>     a = a + 1
>>>
>>> Since a is treated as a local, when you enter the function the local a is
>>> unbound -- it does not have a value. So the right hand side fails, since
>>> local a does not exist, and you get an UnboundLocalError. You are trying
>>> to get the value of local "a" when it doesn't have a value.
>
> Steven's explanation is correct. In your example below you're altering
> portions of a global data structure, not reassigning a global variable.
> Put another way, there is a significant difference between:
> a = 7
> and:
> a['x'] = 7
>
> Only the first reassigns a global variable.

Thanks Tom and Dennis. This will teach me (hopefully) to pay attention to
details next time, and I think I learned something too. I always thought the
rules about changing "global" objects were inconsistent because it works for
mutables... Turns out it's all fine, since assignment doesn't work for
mutables either, and assignment just happens to be the only way to "change"
immutables ;)

cheers
Paul

> -Tom
>
>> Oh really? Can you explain the following?
>>  >>> a = {}
>>  >>> def foo():
>> ...     a['a'] = 'lowercase a'
>> ...     print a.keys()
>> ...
>>  >>> foo()
>> ['a']
>>  >>> a
>> {'a': 'lowercase a'}
>>  >>> def bar():
>> ...     a['b'] = a['a'].replace('a', 'b')
>> ...
>>  >>> bar()
>>  >>> a
>> {'a': 'lowercase a', 'b': 'lowercbse b'}
>>  >>>
>>
>> cheers
>> Paul
Fast GeoSpatial Analysis in Python
This work is supported by Anaconda Inc., the Data Driven Discovery Initiative from the Moore Foundation, and NASA SBIR NNX16CG43P
This work is a collaboration with Joris Van den Bossche. This blogpost builds on Joris’s EuroSciPy talk (slides) on the same topic. You can also see Joris’ blogpost on this same topic.
TL;DR: We accelerate the GeoPandas library with Cython and parallelize it with Dask, making Python's geospatial stack substantially faster.
We start by reproducing a blogpost published last June, but with 30x speedups. Then we talk about how we achieved the speedup with Cython and Dask.
All code in this post is experimental. It should not be relied upon.
Experiment
In June Ravi Shekhar published a blogpost Geospatial Operations at Scale with Dask and GeoPandas in which he counted the number of rides originating from each of the official taxi zones of New York City. He read, processed, and plotted 120 million rides, performing an expensive point-in-polygon test for each ride, and produced a figure much like the following:
This took about three hours on his laptop. He used Dask and a bit of custom code to parallelize Geopandas across all of his cores. Using this combination he got close to the speed of PostGIS, but from Python.
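To make that cost concrete, here is a minimal pure-Python point-in-polygon test (ray casting), the kind of check that has to run once per ride. This is illustrative only; GeoPandas/Shapely delegate this to the GEOS C++ library, and the Cython work described below pushes GeoPandas closer to that speed.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test; `poly` is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(2.0, 0.5, square))  # False
```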
Today, using an accelerated GeoPandas and a new dask-geopandas library, we can do the above computation in around eight minutes (half of which is reading CSV files) and so can produce a number of other interesting images with faster interaction times.
A full notebook producing these plots is available below:
The rest of this article talks about GeoPandas, Cython, and speeding up geospatial data analysis.
Background in Geospatial Data
The Shapely User Manual opens with a passage on the utility of geospatial analysis to our society. Shapely is one part of Python's GeoSpatial stack, which is currently composed of the following libraries:
- Shapely: Manages shapes like points, linestrings, and polygons. Wraps the GEOS C++ library
- Fiona: Handles data ingestion. Wraps the GDAL library
- Rasterio: Handles raster data like satellite imagery
- GeoPandas: Extends Pandas with a column of shapely geometries to intuitively query tables of geospatially annotated data.
In this post we focus on GeoPandas, a geospatial extension of Pandas which manages tabular data that is annotated with geometry information like points, paths, and polygons.
GeoPandas Example
GeoPandas makes it easy to load, manipulate, and plot geospatial data. For example, we can download the NYC taxi zones, load and plot them in a single line of code.
(geopandas.read_file('taxi_zones.shp')
          .to_crs({'init': 'epsg:4326'})
          .plot(column='borough', categorical=True))
Cities are now doing a wonderful job publishing data into the open. This provides transparency and an opportunity for civic involvement to help analyze, understand, and improve our communities. Here are a few fun geospatially-aware datasets to make you interested:
- Chicago Crimes from 2001 to present (one week ago)
- Paris Velib (bikeshare) in real time
- Bike lanes in New Orleans
- New Orleans Police Department incidents involving the use of force
Performance
This slowdown is because GeoPandas wraps each geometry (like a point, line, or
polygon) with a Shapely object and stores all of those objects in an
object-dtype column. When we compute a GeoPandas operation on all of our
shapes we just iterate over these shapes in Python. As an example, here is how
one might implement a distance method in GeoPandas today.
def distance(self, other):
    result = [geom.distance(other) for geom in self.geometry]
    return pd.Series(result)
Unfortunately this just iterates over elements in the series, each of which is an individual Shapely object. This is inefficient for two reasons:
- Iterating through Python objects is slow relative to iterating through those same objects in C.
- Shapely Python objects consume more memory than the GEOS Geometry objects that they wrap.
This results in slow performance.
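The cost of looping over boxed Python objects can be sketched with a small self-contained benchmark (NumPy stands in for the object-dtype column here; the exact numbers are machine-dependent and the names are illustrative):

```python
import timeit

import numpy as np

xs = np.arange(20_000, dtype=float)  # contiguous C-level doubles
obj = xs.astype(object)              # boxed Python floats, like object dtype

vectorized = timeit.timeit(lambda: xs + 1.0, number=20)
python_loop = timeit.timeit(lambda: [x + 1.0 for x in obj], number=20)

# The Python-level loop is typically one to two orders of magnitude slower,
# for the same reason the pre-Cython GeoPandas loop above is slow.
print(f'loop/vectorized slowdown: {python_loop / vectorized:.0f}x')
```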
Cythonizing GeoPandas
Fortunately, we’ve rewritten GeoPandas with Cython to directly loop over the
underlying GEOS pointers. This provides a 10-100x speedup depending on the
operation.
So instead of using a Pandas
object-dtype column that holds shapely objects
we instead store a NumPy array of direct pointers to the GEOS objects.
(Memory-layout diagrams omitted: before, an object-dtype column holding Shapely objects; after, a NumPy array of direct GEOS pointers.)
As an example, our function for distance now looks like the following Cython implementation (some liberties taken for brevity):
cpdef distance(self, other):
    cdef int n = self.size
    cdef GEOSGeometry *left_geom
    cdef GEOSGeometry *right_geom = other.__geom__  # a geometry pointer
    geometries = self._geometry_array

    with nogil:
        for idx in xrange(n):
            left_geom = <GEOSGeometry *> geometries[idx]
            if left_geom != NULL:
                distance = GEOSDistance_r(left_geom, some_point.__geom)
            else:
                distance = NaN
For fast operations we see speedups of 100x. For slower operations we’re closer to 10x. Now these operations run at full C speed.
In his EuroSciPy talk Joris compares the performance of GeoPandas (both before and after Cython) with PostGIS, the standard geospatial plugin for the popular PostgreSQL database (original notebook with the comparison). I'm stealing some plots from his talk below.
This is great. The Python GIS stack now has a full-speed library that operates as fast as any other open GIS system is likely to manage.
Problems
However, this is still a work in progress, and there is still plenty of work to do. Some of the remaining issues touch Pandas itself (see this issue on the pandas issue tracker).
There are also some algorithms within GeoPandas that we haven't yet Cythonized. This includes both particular features like overlay and dissolve operations as well as small components like GeoJSON output.
Finally, as with any rewrite of a codebase that is not exhaustively tested (we're trying to improve testing as we do this), there are probably several bugs that we won't detect until some patient and forgiving user runs into them first.
You can track future progress on this effort at geopandas/geopandas #473 which includes installation instructions.
Parallelize with Dask

In addition to accelerating GeoPandas with Cython, there is an experimental dask-geopandas library available on GitHub that parallelizes it.
So just as dask-array organizes many NumPy arrays along a grid and dask-dataframe organizes many Pandas dataframes along a linear index, dask-geopandas organizes many GeoPandas dataframes along a spatial partitioning.
This gives us two advantages:
- Even without geospatial partitioning, we can use many cores (or many machines) to accelerate simple operations.
- For spatially aware operations, like spatial joins or subselections we can engage only those parts of the parallel dataframe that we know are relevant for various parts of the computation.
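The second advantage — spatial partitioning — can be sketched with a toy example (pure Python, hypothetical layout; dask-geopandas' real implementation differs): points are bucketed into tiles, and a box query only visits tiles whose bounds intersect the query box.

```python
from collections import defaultdict

TILE = 10.0  # tile edge length; each tile acts like one partition

def build_partitions(points):
    parts = defaultdict(list)
    for x, y in points:
        parts[(int(x // TILE), int(y // TILE))].append((x, y))
    return parts

def box_query(parts, xmin, ymin, xmax, ymax):
    hits, visited = [], 0
    # Only tiles overlapping the query box are touched at all.
    for tx in range(int(xmin // TILE), int(xmax // TILE) + 1):
        for ty in range(int(ymin // TILE), int(ymax // TILE) + 1):
            visited += 1
            for x, y in parts.get((tx, ty), ()):
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    hits.append((x, y))
    return hits, visited

# 100x100 grid of points -> 100 tiles, but the query touches just one.
pts = [(float(i % 100), float(i // 100)) for i in range(10_000)]
parts = build_partitions(pts)
hits, visited = box_query(parts, 12.0, 12.0, 17.0, 17.0)
```

The same idea lets a spatial join engage only the relevant pieces of a parallel dataframe instead of scanning every partition.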
However this is also expensive and not always necessary. In our initial exercise with the NYC Taxi data we didn't do this, and still got significant speedups just from normal multicore operation.
Exercise:
import dask.dataframe as dd
import dask_geopandas as dg

df = dd.read_csv('yellow_tripdata_2015-*.csv')
gf = dg.set_geometry(df,
                     geometry=df[['pickup_longitude', 'pickup_latitude']],
                     crs={'init': 'epsg:4326'})
gf = dg.sjoin(gf, zones[['zone', 'borough', 'geometry']])
full = gf[['zone', 'payment_type', 'tip_amount', 'fare_amount']]
full.to_parquet('nyc-zones.parquet')  # compute and cache result on disk
full = dd.read_parquet('nyc-zones.parquet')
And then we can do typical groupbys and joins on the more typical pandas-like data now properly annotated with zones.
result = full.passenger_count.groupby(full.zone).count().compute()
result.name = 'count'
joined = pd.merge(result.to_frame(), zones, left_index=True, right_on='zone')
joined = geopandas.GeoDataFrame(joined)  # convert back for plotting
We’ve replaced most of Ravi’s custom analysis with a few lines of new standard code. This maxes our or CPU when doing spatial joins. Everything here releases the GIL well and the entire computation operates in under a couple gigabytes of RAM.
Problems
The dask-geopandas project is still young and experimental.
Serialization costs are manageable, but decently high. We currently use the standard “well known binary” WKB format common in other geospatial applications but have found it to be fairly slow, which bogs down inter-process parallelism.
Similarly, distributed and spatially partitioned data stores don't seem to be common (or at least I haven't run across them yet).
Still though, these seem surmountable and generally this process has been easy so far. I suspect that we can build an intuitive and performant parallel GIS analytics system with modest effort.
The notebook for the example at the start of the blogpost shows using dask-geopandas with good results.
Conclusion
This week, James is joined by friend of the show Dean Faizel, Microsoft Mobile Customer Advisory Team Engineer, who talks us through the best practices of using async and await in mobile development.
Just trying to see what's out there.
Good stuff! Very helpful and informative. Glad you covered the pattern of "return await" instead of returning the Task directly when possible!
P.S - Hi Dean!!
- PK
Bad task return (9 seconds) was actually quicker than good task return (13 seconds). The presenter said good task return was 6 seconds but clearly showed 13 seconds on the screen.
In one of your examples, your fix was to call Device.BeginInvokeOnMainThread(). Why didn't you just delete the ConfigureAwait(False) from the line before? Wouldn't that have had the same result?
Thanks!
The part about ConfigureAwait is basically not true: ConfigureAwait(false) does not guarantee that the code after the await will not run in the original context.
digitalmars.D - Re: Scientific computing with D
- Brian Palmer <d brian.codekitchen.net> Jan 31 2009
Walter Bright Wrote:
> Bill Baxter wrote:
>> Having to recompile and rerun after every one of those changes just isn't quite as direct.
>
> If it can be done in under half a second, isn't that direct enough? Of course, I'm talking about a shell that does it for you.
If anybody is really serious about doing this they might want to look at the implementation of a similar shell for C at. Oddly much of it is written in Haskell, but there's also an early Ruby prototype that might be useful. Essentially each line is wrapped in a function, compiled into a library and then dlopen'ed in the parent process and executed. Assignments are translated into globals. This will be a little more complex in D because there isn't one global namespace. Actually I think this has been brought up on this list before, I can't find the original reference though.
Jan 31 2009 | http://www.digitalmars.com/d/archives/digitalmars/D/Re_Scientific_computing_with_D_83132.html | CC-MAIN-2016-44 | refinedweb | 176 | 64.81 |
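The mechanism Brian describes — wrap each typed line in a function, compile it into a shared library, and dlopen it into the running process — can be sketched in Python. All names and the toolchain choice below are assumptions for illustration, not taken from the ccons or Ruby sources:

```python
import ctypes
import os
import shutil
import subprocess
import tempfile

def wrap_line(stmt, n):
    # Each REPL line becomes the body of a uniquely named C function.
    return '#include <stdio.h>\nvoid snippet_%d(void) { %s }\n' % (n, stmt)

def eval_c(stmt, n=0):
    src = wrap_line(stmt, n)
    cc = shutil.which('cc') or shutil.which('gcc')
    if cc is None:
        return None  # no C compiler on this machine; nothing to run
    with tempfile.TemporaryDirectory() as tmp:
        c_path = os.path.join(tmp, 'line.c')
        so_path = os.path.join(tmp, 'line.so')
        with open(c_path, 'w') as f:
            f.write(src)
        subprocess.check_call([cc, '-shared', '-fPIC', '-o', so_path, c_path])
        lib = ctypes.CDLL(so_path)          # the dlopen step
        getattr(lib, 'snippet_%d' % n)()    # run the wrapped line in-process
        return True

eval_c('printf("hello from compiled C\\n");', n=1)
```

Assignments would additionally need to be translated into globals visible to later snippets, as the post notes.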
I have many contact forms on my site. I configured the forms to clear on submission.

When a user submits a form, it clears as expected but shows the message "The changes were saved." Is there a way to change this text site-wide, or to make it configurable per form?

I know there is a "Display text" option available, but that won't show a cleared form to the user for further submissions.
Hello,
This text is taken from the CMSResources/CMS.resx file, so please look for the general.changessaved key and change its value to whatever you need.
Best regards,
Jan Hermann
@Jan Hermann
Is this resx text used only for forms, or is it also used in other modules or the admin section?
The general namespace indicates it is a general confirmation message, and it is used in other places within the administration.
So changing this resource text won't be a proper solution for changing the text in the form. Is there any other way to override that text?
Vasanth, in your scenario are you expecting the visitor to submit the contact us form twice or more while on the same page? How often does that happen? If you use Display message, visitors will normally go on to other pages on the site, and if they need to use the form again when they come back, the form will be displayed again.
As Rui suggested, I'd direct the user to another page and display the "thank you" or your personalized save message with some additional links to possibly fill out another form or go somewhere else.
Thanks Jan Hermann
I finally ended up displaying text rather than showing the form again.
Please, sign in to be able to submit a new answer. | https://devnet.kentico.com/questions/change-biz-for-default-submission-message | CC-MAIN-2018-30 | refinedweb | 296 | 71.95 |
What Happens When IPv4 Address Space Is Gone
timothy posted more than 4 years ago | from the stars-wink-out-one-by-one dept.
(4, Funny)
Nerdfest (867930) | more than 4 years ago | (#31968778)
Re:The Internet is Full (0)
Anonymous Coward | more than 4 years ago | (#31968816)
I'll sell you my IP address for $25
Re:The Internet is Full (4, Funny)
mrsteveman1 (1010381) | more than 4 years ago | (#31968894)
OK, but i want it cleaned first, your IP address has been to every porn site on the internet.
Re:The Internet is Full (3, Interesting)
sopssa (1498795) | more than 4 years ago | (#31968904)
This is the more likely situation. The addresses won't just run out; the prices will increase. The cost of one IP address is currently $0.5-$1. IPv6 is not ready for mainstream use yet. If we ever run out of addresses, it doesn't mean they won't be available. It just means you have to pay more for them.
Re:The Internet is Full (3, Informative)
mikael_j (106439) | more than 4 years ago | (#31969060)
Oh great, artificial scarcity caused by greedy bastards refusing to upgrade because they're either too cheap to upgrade or looking to make a buck selling unused addresses...
Re:The Internet is Full (5, Funny)
Anonymous Coward | more than 4 years ago | (#31968844)
Just put the internet behind a NAT. Simple.
Re:The Internet is Full (1)
h00manist (800926) | more than 4 years ago | (#31968878)
Re:The Internet is Full (4, Insightful)
Bigjeff5 (1143585) | more than 4 years ago | (#31969102).
Weird hacks (1)
Mike Rice (626857) | more than 4 years ago | (#31969120)
Like NAT?
Re:The Internet is Full (5, Funny)
MBCook (132727) | more than 4 years ago | (#31968896)
Re:The Internet is Full (0)
Anonymous Coward | more than 4 years ago | (#31969032)
Gives me a great excuse to never turn off my computer.
:)
Re:The Internet is Full (0, Offtopic)
rliden (1473185) | more than 4 years ago | (#31969190)
I was wondering why I got a busy signal through my DSL router this morning.
dev/null (4, Funny)
SimonTheSoundMan (1012395) | more than 4 years ago | (#31968786)
Send users to dev/null.
Hmmm (3, Insightful)
WrongSizeGlass (838941) | more than 4 years ago | (#31968804) (1, Insightful)
Anonymous Coward | more than 4 years ago | (#31968846)
Much like how if we had conserved our petroleum resources in the beginning, we wouldn't be freaking over the potential for shortage in this age...
Re:Hmmm (4, Insightful)
geniusj (140174) | more than 4 years ago | (#31968870):Hmmm (3, Insightful)
Bigjeff5 (1143585) | more than 4 years ago | (#31969138), get a grip! We've known the solution to the problem since the early 90's, at least, and implementing it is trivial.
Re:Hmmm (4, Insightful)
slimjim8094 (941042) | more than 4 years ago | (#31968888):Hmmm (2, Insightful)
h00manist (800926) | more than 4 years ago | (#31968974)
but they'll definitely consider just NATting new customers.
Trouble is, 99% of users won't even notice. If they profile the users to figure out which ones won't notice beforehand, even more.
Re:Hmmm (1, Interesting)
Anonymous Coward | more than 4 years ago | (#31969126)
but they'll definitely consider just NATting new customers.
Trouble is, 99% of users won't even notice. If they profile the users to figure out which ones won't notice beforehand, even more.
Naw, they'll just NAT everyone and charge users that want a publically addressable IP. They will give the tier a name like "Gamer Pro" and the chart that lists differences between packages will have a new row for "Ability to host internet games" or something like that.
Re:Hmmm (1)
Bigjeff5 (1143585) | more than 4 years ago | (#31969156)
99% of users have computers that handle IPv6 just fine, most consumer routers even do it just fine.
This is such a non-issue it's just hilarious watching everybody freak about it.
Re:Hmmm (2, Interesting)
h00manist (800926) | more than 4 years ago | (#31968944)
Re:Hmmm (1)
6350' (936630) | more than 4 years ago | (#31969108)
Re:Hmmm (1)
moreati (119629) | more than 4 years ago | (#31969118)
Is this satire or industry analysis? I can't tell.
Re:Hmmm (1)
h00manist (800926) | more than 4 years ago | (#31969154)
Re:Hmmm (5, Informative)
Anonymous Coward | more than 4 years ago | (#31968970):Hmmm (1)
Interoperable (1651953) | more than 4 years ago | (#31969014):Hmmm (3, Funny)
jsepeta (412566) | more than 4 years ago | (#31969018)
I agree.
Also I suggest opening up
.XXX and make all the porn guys move their sites to the .XXX namespace. Plus make them migrate to IPV6 so the rest of us can just stick with IPV4
Re:Hmmm (4, Interesting)
Burdell (228580) | more than 4 years ago | (#31969084):Hmmm (1)
aynoknman (1071612) | more than 4 years ago | (#31969088) (0)
Anonymous Coward | more than 4 years ago | (#31969110)
what they need to do is remove all the
/8 that they gave to large spam gangs.
Re:Hmmm (0)
Anonymous Coward | more than 4 years ago | (#31969136)
They sort of do and the rules when they do so have become stricter. However, there is no economic incentive to find ways of doing things with fewer addresses. On the contrary, as long as there is IPv4 address space, it is wise to get as much of it as you can by offering applications to your users which justify IPv4 allocations. Then, when the IPv4 address space runs out, you can internally reallocate addresses to the most profitable applications, i.e. instead of giving several free IP addresses to DSL users, you could start charging for extra IP addresses (or even put DSL users behind NAT like on 3G networks) and use the reclaimed addresses for servers. When the IPv4 addresses run out, all major internet and hosting providers will have lots of IPv4 addresses stashed away in uses which technically justify the allocations but are really just excuses to hoard the space.
ARIN is the regional internet registry which is the most likely to run out of addresses first. Other RIRs use up their allocations more slowly. (At the time when the last but five
/8 block is allocated to a RIR, each RIR gets one last /8 block and then they're on their own. Here's the policy. [icann.org] ) The day ARIN runs out of IP addresses is not the day when the last available IPv4 address has been allocated. The other RIRs will still have addresses, some for a very long time. Existing ISPs affected by ARIN's running out of addresses will also be able to shift their addresses around. The only ones who will be (quite dramatically) burned on that date are new operators who need multihomed address space.
Re:Hmmm (1)
bob5972 (693297) | more than 4 years ago | (#31969178).
Re:Hmmm (4, Insightful)
divisionbyzero (300681) | more than 4 years ago | (#31969180).
Auction? (0)
Anonymous Coward | more than 4 years ago | (#31968808)
I'll bet the likes of IBM, DEC, and others were originally assigned enormous blocks of addresses that they are barely touching. I wonder if stats exist on the number of unused reserved addresses?
Re:Auction? (3, Informative)
Gerald (9696) | more than 4 years ago | (#31968864)
There are a few. See figure 5 of Geoff Huston's IPv4 Address Report [potaroo.net] .
Re:Auction? (2, Insightful)
koiransuklaa (1502579) | more than 4 years ago | (#31969044).
why even have an ip.v whatever (-1, Troll)
Anonymous Coward | more than 4 years ago | (#31968812)
people type in addresses ex so why can't that be a direct address instead of processing it into something else.
Re:why even have an ip.v whatever (0)
Anonymous Coward | more than 4 years ago | (#31968978)
It'd be extremely inefficient. With numbers, you know that all IPs belonging to 123.222.X.Y can be handled by a router belonging to ISP XYZ on a particular fiber connection, and XYZ can route the packets to the right customer. You can't break down names like that, because there's millions of domains ending in
.org.
Re:why even have an ip.v whatever (0)
Anonymous Coward | more than 4 years ago | (#31968990)
You obviously have no clue how this all works do you..
So now the question is... (1)
fm6 (162816) | more than 4 years ago | (#31968824)
Who's even trying to transition to IPv6? Considering how close we are to IPv4 Ragnarök, the changeover should be close to finished by now. I don't see any real sign that it's even started.
Re:So now the question is... (0)
Anonymous Coward | more than 4 years ago | (#31968880)
It isn't started. Who would it be? The organisation that have ample IP4 adresses have no need to change.
In fact, only the new applicants that won't get IP4 have any stake in this matter. They want change to happen, but why would the existing infrastructure change? They have nothing to gain and everything to lose...
Re:So now the question is... (1)
gbjbaanb (229885) | more than 4 years ago | (#31968882) interesting... I've got my popcorn ready and am going to have fun watching the sparks fly when ARIN first says 'no'.
Re:So now the question is... (2, Insightful)
fm6 (162816) | more than 4 years ago | (#31969042). Call it the WalMart effect.
The only solution is to move to IPv6. But, as you point out, people won't do this until they have to.
No, worse, they won't even begin preparations. Not a big deal for most of us, but the changeover is going to be non-trivial for ISPs, manufacturers, and a lot of other people who do Internet infrastructure.
When I was at Sun, I was on a product team for a new product with an embedded Service Processor (for remote control, diagnostics, lights-out management, etc.). Whenever I suggested that the new SP have IPv6 support, I was told "none of our customers is asking for this feature."
Re:So now the question is... (1)
icebraining (1313345) | more than 4 years ago | (#31969122)
$30 is 63% of what I pay yearly for hosting.
Re:So now the question is... (4, Informative)
Gerald (9696) | more than 4 years ago | (#31968890)
Trying? I'm done.
Re:So now the question is... (1)
johnw (3725) | more than 4 years ago | (#31968956)
Me too.
This posting coming to you from 2001:8b0:e9:1:222:69ff:fe07:5046
Re:So now the question is... (1)
fm6 (162816) | more than 4 years ago | (#31969096)
Good for you. But hackers who've transitioned their personal networks isn't going to help much if the main Internet infrastructure doesn't support the new stack.
Re:So now the question is... (0)
Anonymous Coward | more than 4 years ago | (#31968892)
Re:So now the question is... (1)
rtyhurst (460717) | more than 4 years ago | (#31968988)
Looks like we're heading back to two tin cans with a string between them.
One bonus: no malware...
Easy (3, Funny)
networkzombie (921324) | more than 4 years ago | (#31968826)
Re:Easy (1)
SimonTheSoundMan (1012395) | more than 4 years ago | (#31968860)
You run out of IP addresses on your LAN?
Re:Easy (2, Informative)
lukas84 (912874) | more than 4 years ago | (#31968968)
Happens often in small companies that grow and run only a single subnet with a
/24.
While this is always easy to fix, some companies don't want to risk restructuring their LAN.
Why run IPV6? (1)
h00manist (800926) | more than 4 years ago | (#31968828)
Re:Why run IPV6? (0)
Anonymous Coward | more than 4 years ago | (#31968866)
File sharing without port forwarding?
Re:Why run IPV6? (1)
h00manist (800926) | more than 4 years ago | (#31969050)
Re:Why run IPV6? (1)
johnw (3725) | more than 4 years ago | (#31968900)
Every once in a while I think about it, then I can't find a reason. Anyone?
ipv6porn?
Re:Why run IPV6? (0)
Anonymous Coward | more than 4 years ago | (#31969114)
That project died quite a while ago and never went live beyond a simple test page.
Re:Why run IPV6? (1)
fm6 (162816) | more than 4 years ago | (#31968908)
Right, it's somebody else problem. The question is, who?
everybody somebody nobody anybody (5, Funny)
h00manist (800926) | more than 4 years ago | (#31969000)
Re:Why run IPV6? (5, Insightful)
slimjim8094 (941042) | more than 4 years ago | (#31968932):Why run IPV6? (1)
h00manist (800926) | more than 4 years ago | (#31969034)
Re:Why run IPV6? (2, Insightful)
icebraining (1313345) | more than 4 years ago | (#31969148)
Well, personally I'm not into BSDM. NAT is an unnecessary pain and a ugly hack that raises complexity and breaks stuff.
Re:Why run IPV6? (1)
green1 (322787) | more than 4 years ago | (#31969056):Why run IPV6? (0)
Anonymous Coward | more than 4 years ago | (#31969072)
I don't think you make an argument for IPv6 here. Skype works with IPv4, as do BitTorrent, FTP and the other examples.
So IPv6 isn't giving any benefit here.
The problem is that IPv4 works fine, is very well understood and is easy to administer. NAT, while not idea and an occasional annoyance when gaming, is only a small thorn in IPv4's side; the incidental security is a benefit and 1 IP address per subscriber also simplifies administration for ISPs.
Perhaps the end of /. stories on end of IPv4 (4, Funny)
haus (129916) | more than 4 years ago | (#31968830)
But somehow I doubt it.
Re:Perhaps the end of /. stories on end of IPv4 (0)
Anonymous Coward | more than 4 years ago | (#31968938)
The end of the internet (1)
c1ay (703047) | more than 4 years ago | (#31968834)
where is the Restaurant? (0)
Anonymous Coward | more than 4 years ago | (#31969026)
Dr. Peter Venkman: This internet... (0)
Anonymous Coward | more than 4 years ago | (#31968842)
Dr. Peter Venkman: This internet is headed for a disaster of biblical proportions.
Politician: What do you mean, "biblical"?
Dr Ray Stantz: What he means is Old Testament, Mr. Politician!
Hmm no big deal will happen? (1)
h00manist (800926) | more than 4 years ago | (#31968850)
Re:Hmm no big deal will happen? (4, Informative)
Dragoniz3r (992309) | more than 4 years ago | (#31969030)
well... (1)
MrCrassic (994046) | more than 4 years ago | (#31968852)
Bidding wars will begin (1)
symbolset (646467) | more than 4 years ago | (#31969024)
Even now companies are hoarding IPV4 address space. More companies will invest in these valuable collectibles, locking up ever larger unused ranges. New markets in IPv4 address futures will arise. Rising costs, or claims thereof, will lead to ISPs charging even more for the temporary use of these valuable commodities. Great profits will be made before the migration to IPv6 is complete.
I hope Windows 7 x64 IP is fixed by then. (1)
The Altruist (1448701) | more than 4 years ago | (#31968856)
Re:I hope Windows 7 x64 IP is fixed by then. (1)
johnw (3725) | more than 4 years ago | (#31968928).
I see several things happening (1)
petermgreen (876956) | more than 4 years ago | (#31968868):I see several things happening (1)
thoughtsatthemoment (1687848) | more than 4 years ago | (#31968982)
Re:I see several things happening (1)
h00manist (800926) | more than 4 years ago | (#31969128)
Re:I see several things happening (1)
thoughtsatthemoment (1687848) | more than 4 years ago | (#31969174)
What about IPv6? (0)
Anonymous Coward | more than 4 years ago | (#31968902)
That thing has existed for a decade or so...
Why aren't we using it?
Using IPv6 would be the obvious solution to this problem.
IPv4 space ran out long ago (1)
MrBucket101 (1395241) | more than 4 years ago | (#31968912)
December 21, 2012 (1)
zidane2k1 (971794) | more than 4 years ago | (#31968926)
It's time to get tough (0)
Anonymous Coward | more than 4 years ago | (#31968930)
It is time to get tough with companies that are burying their heads in the sand and not preparing IPv6 deployments for the day when the IPv4 Internet stops growing. Financial Analysts on Wall Street should be asking tough questions to the CEOs of any publicly traded company.
For some companies, who haven't got their acts together, this will be a crisis that could sink the business. This is going to have a far greater impact than the minor disruption of transitioning the Internet to IPv6 in a time when the only way to get an IPv4 address is to shut something else off. Most companies could handle this transition if they had already started testing and trialing IPv6 today, but some companies are woefully far behind, and they will find that this causes their sales to grind to a halt. When there are no more IPv4 addresses, they can't hookup new customers. And they can't add new sites to existing customers, which will cause a customer exodus to other companies that have their IPv6 deployed and ready. That exodus will gather speed due to all the press coverage.
For instance, the shortage will hit us in 2012, an Olympic year. What happens if they can't get enough IPv4 addresses to extend the network into the Olympic park and the athlete apartments? That would be a global disaster for whoever is responsible.
China will probably cut over first (1)
Animats (122034) | more than 4 years ago | (#31968934).
Re:China will probably cut over first (1)
jasmusic (786052) | more than 4 years ago | (#31969188)
Why not break open the Class E block? (1)
Will Sargent (2751) | more than 4 years ago | (#31968936)
The entire 240/ block is reserved. Is there something wrong with those IP addresses?
Re:Why not break open the Class E block? (1)
Trolan (42526) | more than 4 years ago | (#31969054) more time than 1.33 years.
Re:Why not break open the Class E block? (0)
Anonymous Coward | more than 4 years ago | (#31969070)
The amount of legacy hardware and software that just plain won't route reserved IPs, the number of idiots who've set up local networks on it, and the fact that at current allocation rates it would only buy a few weeks?
Time to start hoarding... (3, Funny)
JorDan Clock (664877) | more than 4 years ago | (#31968946)
in the short term... (3, Insightful)
Sir_Sri (199544) | more than 4 years ago | (#31968986) (1)
houghi (78078) | more than 4 years ago | (#31969016) and months. Will IPv6 finally take over or will providers start giving out internal IP addresses for their customers and charge double for those that want a fixed one?
IP-based lawsuits (1)
jda104 (1652769) | more than 4 years ago | (#31969068)
Recover IPv4 address space from hoarders (0)
Anonymous Coward | more than 4 years ago | (#31969082)
Arin should require companies that merge to return IP address blocks they really don't need. Example HP has 2 class A address blocks HP had 1 (Net 15) then they bought Compaq and received another (Net 16) (along with a multitude of B and C blocks they acquired). If my math is correct that's over 32 million address in those 2 blocks alone. Do they need that many? Should they be required to move to 1 class A and return the other, I would say so. With all the mergers and acquisitions that have happened since the Dot com bubble there are a lot of companies sitting on blocks that they don't need.
ARIN should require that these companies return the blocks in a set period of time. This would allow legitimate needs to be addressed and give more time for IPV4. Frankly most companies could just use the private classes internally and only use public addresses for the systems that need them. HP, IBM, etc could use class A 10.x.x.x private internally and use a smaller block for external access. Today's Nat implementations could take care of the rest.
Just a thought
ARIN could even pay these companies a return fee to get the blocks back.
Yawn (1)
Kjella (173770) | more than 4 years ago | (#31969104)? Eventually the cost/benefit will tip in the direction of IPv6. But I'm betting it'll be more like 2010 than next year.
IPv6 and telephone numbers (1)
thoughtsatthemoment (1687848) | more than 4 years ago | (#31969142)
Almost gone? (0)
Anonymous Coward | more than 4 years ago | (#31969144)
Or is it greedy organizations hoarding addresses that they'll never use?
Windows and IPv6 (1)
AndGodSed (968378) | more than 4 years ago | (#31969158)...
Unused addresses are wasted addresses (1)
noidentity (188756) | more than 4 years ago | (#31969166)
Contact tne Class B holders (2, Insightful)
itsdapead (734413) | more than 4 years ago | (#31969176)
..). | http://beta.slashdot.org/story/134696 | CC-MAIN-2014-42 | refinedweb | 3,428 | 79.7 |
Is there a good way to get the date of the coming Wednesday?
That is, if today is Tuesday, I want to get the date of Wednesday in this week; if today is Wednesday, I want to get the date of next Wednesday; if today is Thursday, I want to get the date of Wednesday in the following week.
Thanks.
The basic algorithm is the following:

1. Take the difference between the target day of the week and today's day of the week.
2. If the difference is not positive (today is the target day, or later in the week), add 7 so the result is always in the future.
3. Add that many days to today's date.
Here's a snippet to show how to do this with java.util.Calendar:
import java.util.Calendar;

public class NextWednesday {
    public static Calendar nextDayOfWeek(int dow) {
        Calendar date = Calendar.getInstance();
        int diff = dow - date.get(Calendar.DAY_OF_WEEK);
        if (!(diff > 0)) {
            diff += 7;
        }
        date.add(Calendar.DAY_OF_MONTH, diff);
        return date;
    }

    public static void main(String[] args) {
        System.out.printf(
            "%ta, %<tb %<te, %<tY",
            nextDayOfWeek(Calendar.WEDNESDAY)
        );
    }
}
Relative to my here and now, the output of the above snippet is
"Wed, Aug 18, 2010".
java.util.Calendar
java.util.Formatter - for the format string syntax
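If you are on Java 8 or later, the java.time API makes this much simpler via TemporalAdjusters.next, which returns the next occurrence strictly after the given date (the same semantics as the Calendar snippet above). This is a sketch, not part of the original answer:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

public class NextWednesdayJavaTime {
    // Next occurrence of 'dow' strictly after 'from' (never 'from' itself).
    public static LocalDate nextDayOfWeek(LocalDate from, DayOfWeek dow) {
        return from.with(TemporalAdjusters.next(dow));
    }

    public static void main(String[] args) {
        System.out.println(nextDayOfWeek(LocalDate.now(), DayOfWeek.WEDNESDAY));
    }
}
```

If you instead want "today counts as a match", TemporalAdjusters.nextOrSame(dow) is the variant that can return the starting date itself.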
Details
- Type:
Sub-task
- Status: Closed
- Priority:
Critical
- Resolution: Fixed
- Affects Version/s: 2.7.1
- Fix Version/s: 2.8.0, 2.7.2, 2.6.4, 3.0.0-alpha1
- Component/s: resourcemanager
- Labels: None
Description
Cases that can cause this.
- Capacity scheduler xml is wrongly configured during switch
- Refresh ACL failure due to configuration
- Refresh User group failure due to configuration
Continuously both RM will try to be active
dsperf@host-10-128:/opt/bibin/dsperf/OPENSOURCE_3_0/install/hadoop/resourcemanager/bin> ./yarn rmadmin -getServiceState rm1
15/07/07 19:08:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
active
dsperf@host-128:/opt/bibin/dsperf/OPENSOURCE_3_0/install/hadoop/resourcemanager/bin> ./yarn rmadmin -getServiceState rm2
15/07/07 19:08:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
active
- Both Web UI active
- Status shown as active for both RM
Activity
Thanks Rohith Sharma K S!
I think it should be there in 2.6 too; let me cross-check it. If it exists, I will backport this to 2.6.
Hi Bibin A Chundatt, Rohith Sharma K S and Xuan Gong, is this bug also valid on branch-2.6? If so, maybe we should consider backporting it to branch-2.6?
SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2284
FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #325
- hadoop-yarn-project/CHANGES.txt
FAILURE: Integrated in Hadoop-Hdfs-trunk #2264
SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #335
- /test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java
- hadoop-yarn-project/CHANGES.txt
- hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMFatalEventType.java
SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #342
- /test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java
FAILURE: Integrated in Hadoop-Yarn-trunk #1070
FAILURE: Integrated in Hadoop-trunk-Commit #8387
+1, LGTM. Will commit it tomorrow if there are no objections/comments from other folks.
Hi Bibin A Chundatt
2> There are test cases related to the transition in TestRMAdminService.testRMHAWithFileSystemBasedConfiguration, but most of it is present in TestRMHA, so I think it should be fine.
3> Well, IMHO it would be better handled with the latter approach I suggested: refreshAll is just a private method, but the actual operation that failed is transitionToActive, which is more readable than ACTIVE_REFRESH_FAIL.
Attaching patch after handling comments.
- timeout updated in testcase
- Changed from ACTIVE_REFRESH_FAIL to TRANSITION_TO_ACTIVE_FAILED
Hi Naga
Thanks for looking into the patch.
timeout of 900000 is on the higher side is that much req or was it for local testing ?
will update the same.
instead of test case in RMHA can we think of adding it to TestRMAdminService as the failure is related to transition to Active ?
As I understand, all transitionToActive & HA-related test cases are added in the same class.
3. It's not transitionToActive that is actually failing, it's refreshAll, right? That's the reason I gave it a specific name.
Points 2 and 3 are not mandatory fix items, right?
Hi Bibin A Chundatt,
Thanks for the patch, test cases ran fine, approach and test case seems to be fine but few comments from my side
- timeout of 900000 is on the higher side is that much req or was it for local testing ?
- instead of test case in RMHA can we think of adding it to TestRMAdminService as the failure is related to transition to Active ?
- May be while throwing RMFatalEvent better to wrap it with another exception wrapping the existing one and with the message that transition to active failed so that RM Logs have clear information on what operation it exited. or may be eventType instead of having ACTIVE_REFRESH_FAIL we can have more intuitive name TRANSITION_TO_ACTIVE_FAILED
Above comments are for
Test failures are not related to this patch. Have looked into the failed testcases
hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens - due to a Bind exception
hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService - verified locally; it's working fine and succeeds
hadoop.yarn.server.resourcemanager.TestClientRMService - ran locally in Eclipse; it's working fine
Missed one comment: the isRMActive check is not required. Attaching the patch again.
So JVM exit is the conclusion after discussion.
Attaching patch based on the same
As I see this, a JVM exit is reasonable, as proposed by Rohith earlier, because mostly the scheduler configurations are wrong, and it's not required to switch to standby, fail fast, etc. If we can directly exit the JVM, it will be clean, and there will be enough information available in the logs to analyze the reasons for the config failure.
In fact, according to me, we can crash the RM at all times if the config is wrong, because until the config is corrected, the RM with the wrong config cannot become active (and hence will be unusable). In that case, the fail-fast config won't even be required. So should we change the behavior to keep the RM in standby (but up) if fail-fast is set to false? Anyway, we can discuss in more detail face to face.
I do not have any concern for exiting JVM. If fail fast is true(default behavior), JVM will exit anyways.
I was wondering if it would be semantically appropriate to make JVM exit in some cases if somebody has explicitly changed the fail fast config to false. Logs can fill up if yarn-site.xml is wrong on both RMs' too.
I am not sure about the webapp part though. Does it require client rm service to be initialized ? AFAIK, if RM is standby it will hit the webapp filter and redirect to other RM(which may be active). Haven't tested UI after applying previous patches, so maybe Bibin can tell. If there are some issues with webapp, we will have to exit the JVM if transition to standby fails. Because there may be no other way out then.
I will discuss further on this with you offline.
Hi Varun Saxena, trying to understand your point: my suggestion is to exit the RM on any configuration issue during refreshAll in AdminService#transitionToActive. As I gave the reason for bringing the RM JVM down rather than keeping it alive in an earlier comment, do you have any concern about exiting the RM on configuration issues?
Saw your comments above. We cant do what we were doing earlier because as you say WebApp should be up even in standby. Let me think if something else can be done.
In previous patches, we were delaying reinitialization till attempting transition to active again and not attempting it immediately as we have done here. Any issues you expect with that ?
Hmm...my point of view based on the fact that the service cannot be up if atleast one RM is not active. Standby RM is not going to serve anything anyways.
Till configurations of this RM are not corrected, whether yarn-site or scheduler configurations, this RM anyways cant become active (refreshAll will always fail). And you can say there might be some silly mistake in scheduler configuration too.
What we were doing before in the patch wont fill up the logs if configuration is ok on other RM. And if its not Ok on other RM, logs will fill up even even if refreshAll fails because of something other than scheduler config(and fail fast is false).
fail fast by default is true, and if admin is making it false, he will know what to expect.
But, you can say a RM shutting down is a far more alarming thing for an admin and scheduler configurations more important. I agree with that. Maybe we can make RM with wrong configuration down at all times. Because till he correct the config(whether yarn-site or scheduler config), this RM cant become active.
Let us take opinion of couple of others as well on this. We can do whatever is the consensus.
There are 2 types of refresh that can happen: 1. yarn-site.xml refresh, 2. scheduler configuration refresh. Scheduler configurations are reloaded on every service initialization, which is by design. If there is any issue in the scheduler configuration, the behavior is the same whether fail-fast is true or false. The fail-fast configuration is useful when the admin makes a mistake in yarn-site.xml. With a wrong configuration in yarn-site.xml the RM service can be up, whereas with a wrong scheduler configuration the service can NOT be up at all. On a best-effort basis to keep the service up, exception handling for yarn-site.xml and scheduler configuration are different.
BTW, keeping the RM in the StandBy state would lead to filling up the logs very soon because of the elector's continuous attempts to make it active. On any configuration issue, better to exit the JVM and notify the admin that the RM is down, so that the admin can check the logs and identify it.
Sorry I meant we can handle fail fast config being false case same way as we were doing in earlier patches. Otherwise checking for fail fast doesnt make any difference because both the code paths lead to same result.
Moreover, the fail fast configuration doesnt quite work as expected here. If capacity scheduler configuration is wrong, initialization will again fail and JVM will exit, which in essence is exactly same as the other case. We can handle fail fast as true case same way as earlier IMO.
The reason it works in the test(JVM does not exit) is that you have passed CapacitySchedulerConfiguration object to MockRM. As CapacitySchedulerConfiguration is not instanceof YarnConfiguration, this will lead to a new YarnConfiguration object being created and passed to ResourceManager.
When you are changing configuration in test and set queue capacity to 200, it is not reflecting in the Configuration object in ResourceManager class. That is why JVM does not exit when we transition to standby.
Few additional comments :
- Below exception block i.e. exception block after call to refreshAll, if YarnConfiguration.shouldRMFailFast(getConfig()) is true, we merely post fatal event and do not return or throw an exception. This would lead to success audit log for transition to active being printed, which doesn't quite look correct. Because we are encountering some problem during call to transition. We should either return or throw a ServiceFailedException here as well. Although both are OK because RM would anyways be down later but I would prefer exception.
324 } catch (Exception e) {
325   if (isRMActive() && YarnConfiguration.shouldRMFailFast(getConfig())) {
326     rmContext.getDispatcher().getEventHandler()
327         .handle(new RMFatalEvent(RMFatalEventType.ACTIVE_REFRESH_FAIL, e));
328   }else{
329     rm.handleTransitionToStandBy();
330     throw new ServiceFailedException(
331         "Error on refreshAll during transistion to Active", e);
332   }
333 }
334 RMAuditLogger.logSuccess(user.getShortUserName(), "transitionToActive",
335     "RMHAProtocolService");
336 }
- In TestRMHA, below import is unused.
import io.netty.channel.MessageSizeEstimator.Handle;
- A nit : There should be a space before else.
328 }else{
329   rm.handleTransitionToStandBy();
- In the test added, assert is not required in the exception block after first call to transitionToActive
- Maybe we can add an assert in test for service state being STANDBY after call to transitionToActive with incorrect capacity scheduler config and fail-fast being false.
To be more clear on the 3rd point, the handleTransitionToStandBy call will exit if transitionToStandby fails. This transition may fail because active services are initialized during the transition. CS initialization loads the new capacity-scheduler conf, which results in a wrong default queue capacity value and hence a standby transition failure.
4. Instead of having a separate class FatalEventCountDispatcher, can it be made inline?
Thanks Bibin A Chundatt for updating the patch. The patch mostly reasonable!!
Some comments on the patch
- Is the isRMActive() check required? Only if transitionToActive succeeds will refreshAll be executed!! In any case, if you add it, the check should be common for both, i.e. if/else.
- In the test, is the below code expecting transitionToActive to fail? If so, then the RM state should not be Active. Why would the RM be Active if AdminService fails to transition?
+ try {
+   rm.adminService.transitionToActive(requestInfo);
+ } catch (Exception e) {
+   assertTrue("Error when transitioning to Active mode".contains(e
+       .getMessage()));
+ }
+ assertEquals(HAServiceState.ACTIVE, rm.getRMContext().getHAServiceState());
- Have you verified the test locally? I suspect the test may exit in the middle, since you are changing the scheduler configuration. The scheduler configuration is loaded during transitionToStandby, which fails to load, and System.exit is called.
Hi Rohith Sharma K S and Sunil G
Thanks for comments.
- So createAndInitActiveServices approach will not take
Second approach sounds good with fail fast.
I have updated the patch as per the suggestion. Please review
I think that for any configuration issue while transitioning to active, AdminService should not allow the JVM to continue, because if AdminService throws an exception back to the elector, the elector again tries to make the RM active, which goes in a loop forever, filling the logs.
There are 2 calls that can be points of failure: first rm.transitionedToActive, second refreshAll().
- If anything fails in rm.transitionedToActive, then RM services will be stopped and the RM will be in STANDBY state.
- If refreshAll() fails, BOTH RMs will be in ACTIVE state, as per this defect. Continuing RM services with an invalid configuration is not a good idea; moreover, invalid configurations should be reported to the user immediately. So it would be better to make use of the fail-fast configuration to exit the RM JVM. If this configuration is set to false, then call rm.handleTransitionToStandBy.
I had a closer look at both of the solutions above. Some of the potential issues with them are:
- Moving createAndInitService just before starting activeServices in transitionToActive.
- switch time will be impacted since every transitionToActive initializes active services.
- And RMWebApp has a dependency on clientRMService for starting webapps. Without clientRMService initialization, RMWebApp cannot be started.
- Moving refreshAll before transitionToActive in AdminService is the same as triggering RMAdminCLI on a standby node. This call throws a StandByException and is retried against the active RM in RMAdminCLI. When it comes to AdminService#transitionedToActive(), refreshing before rm.transitionedToActive throws a standby exception.
Hi Rohith Sharma K S
On a second thought, could we move refreshAll in AdminService#transitionToStandby/Active ahead of rm.transitionToStandby/Active
try {
  // call all refresh*s for active RM to get the updated configurations.
  refreshAll();
  rm.transitionToActive();
  ...
}
Hence exception can come before invoking transition methods in ResourceManager class. Thoughts?
Hi Rohith Sharma K S
Thank you for restarting this thread.
The idea of calling createAndInitActiveServices from both ResourceManager#transitionToActive() and transitionToStandby is good. In this case, we can remove the call to refreshAll from AdminService#transitionToStandby.
Hi Rohith Sharma K S
Thank you for your review comments
Will update the same and upload patch soon.
Sorry for coming very late.. This issue has become stale, need to move forward!!
Regarding the patch,
- Instead of setting a boolean flag for reinitActiveServices in AdminService and the other changes, moving createAndInitActiveServices() from transitionedToStandby to just before starting activeServices would solve such issues. And on an exception while transitioning to active, handle it by calling stopActiveServices in ResourceManager#transitionToActive() only.
- Probably we can remove refreshAll() from AdminService#transitionToActive with the above approach.
Any thoughts?
Instead of checking for exception message in test, can you check for ServiceFailedException
The same is already verified in many test cases using messages.
Can you add a verification in the test to check whether active services were stopped ?
IMO its not required.
- Instead of checking for exception message in test, can you check for ServiceFailedException ?
- Can you add a verification in the test to check whether active services were stopped ?
Thanks for the patch Bibin A Chundatt. Few comments.
- Nit : Should be "Exception in state transition"
throw new ServiceFailedException( "Exception in state transistion", re);
- IMO, no need to throw ServiceFailedException when catching exception while calling reinitialize. The throw below should suffice. Just set the flag. According to me, we should retain the original exception.
- Add a comment indicating what the flag does.
- Maybe rename the flag to reinitActiveServices instead of reinitialize.
- The flag according to me, semantically speaking, doesn't quite belong to AdminService. Can be in ResourceManager or RMContext. Thoughts ?
- Can you add a test to verify the fix ?
- I think instead of relying on transitionToStandby to change state to standby, we can explicitly change the state in AdminService. Thats because even stopActiveServices can throw an Exception and if it does, state won't change to STANDBY. This call to stop should not throw an exception, but as services keep on getting added you never know how a particular service may behave. We should be immune to it. Try something like below.
((RMContextImpl)rmContext).setHAServiceState(HAServiceProtocol.HAServiceState.STANDBY);
- Just a suggestion. If we do above, maybe call stopActiveServices and reinitialize directly instead of calling transitonToStandby. This is because as I said in a comment above, transitionToStandby would print an audit log saying transition is successful. But reinitialize subsequently may fail. And not printing this audit log will be consistent with transitionToActive failing during starting active services. Thoughts ?
Sunil G, Varun Saxena and Xuan Gong Thanks a lot for comments.
Please review
I am fine either ways though because as you said reiniting really matters when transitioning to active.
Sunil G, even I was suggesting earlier in my comment that we reinit only while transitioning to active.
But then I thought that if we reinit on standby and there is a problem in initialization, a failure can indicate the admin to correct his config. An audit log will be printed.
If we do not reinit, a success audit log on transition to standby would be printed, which may indicate no problem in config to admin.
Thoughts ?
We can procrastinate reiniting till transition to active as well. But its better to indicate a failure even on standby IMHO. I do not see any harm in it.
I am fine either ways because reiniting really matters when transitioning to active.
+1 for using atomicBoolean flag.
Do we really need to call reinitiateActiveService from transitionToStandby. I think it can be done while we invoke transitionToActive when it matters.
Ok, lets add a flag. According to me, we need to check this flag and do reinit even on transitionToStandby even though state is standby.
Thanks for Varun Saxena and Sunil G. I am fine with adding a new internal state although it might be too complex. But if we could handle this correctly, I am fine with this.
To this specific issue, I think that at least two things we should do here:
1) stop All ActiveService
2) transit to standby. (basically, set RM state in RMContext as Standby)
But, we also need to reinitiate all the active service to prepare for the transitToActive call.
At least, we should do:
rm.transitToStandy(false);
reinitiateActiveService();
Here the reinitiateActiveService() can throw out the same exception. And I can see why this does not solve the whole problem.
How about we introduce a new atomicBoolean flag to track whether we need to reinitiate active service ? And we could add following into transitToActive logic
if (reinitiateRequired) reinitiateActiveService()
before we start all the active service.
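A minimal, self-contained sketch of that flag idea (hypothetical class and method names; the real RM/ActiveServices classes in YARN are far more involved): a failed activation stops the services and sets an AtomicBoolean, and the next transitionToActive re-initializes them only when the flag is set.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Stand-in for the RM's active-service bundle (hypothetical, simplified).
class ActiveServices {
    private boolean inited;
    void init()  { inited = true; }
    void stop()  { inited = false; }
    void start() {
        // Mirrors the real constraint: STOPPED services cannot be started
        // again without being re-initialized first.
        if (!inited) throw new IllegalStateException("reinit required before start");
    }
}

public class ReinitFlagSketch {
    private final ActiveServices activeServices = new ActiveServices();
    private final AtomicBoolean reinitRequired = new AtomicBoolean(false);

    public ReinitFlagSketch() { activeServices.init(); }

    // Called when an activation attempt (e.g. refreshAll) fails.
    public void onActivationFailure() {
        activeServices.stop();
        reinitRequired.set(true);   // remember to re-init before the next attempt
    }

    public void transitionToActive() {
        if (reinitRequired.compareAndSet(true, false)) {
            activeServices.init();  // the reinitiateActiveService() of the proposal
        }
        activeServices.start();
    }

    public boolean isReinitPending() { return reinitRequired.get(); }
}
```

With this ordering, a failed attempt leaves the RM able to try again: calling onActivationFailure() and then transitionToActive() succeeds, whereas starting stopped services directly would throw.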
Xuan Gong, issue with reinitialization is that if exception is thrown during initialization then all the active services will be stopped.
And when we transition to active, we will directly attempt to start the active services, which would fail because the services are in the STOPPED state.
I think we can forcibly set the state to standby and set a flag in RMContext indicating reinit is required whenever attempting transition to standby or active. This way we will let leader election handle the exception.
I remember an earlier suggestion of new HAServiceState.
Introducing a state as WAITING_FOR_ACTIVE may help to do all reinit or other inits when we try to move to ACTIVE. Also as mentioned earlier, this can be hidden state internally. It may look more cleaner than flag. So along with above solution, could we add this new state also?
We do need to stop active services because many threads would be spawned on attempt to transition to active.
Frankly, we can have a additional flag in RM indicating that reinitialization of services is required and attempt them while trying for transition to active. We can stop the services beforehand because no point having some threads running in standby. Thoughts ?
We can do something like below
// Exception was thrown in call to refreshAll.
if (rmContext.getHAServiceState() == HAServiceProtocol.HAServiceState.ACTIVE) {
  ((RMContextImpl)rmContext).setHAServiceState(HAServiceProtocol.HAServiceState.STANDBY);
  try {
    rm.stopActiveServices();
    // set a flag in RM (maybe rm context) indicating reinit of services is
    // required on trying for transition to active despite state being standby.
  } catch (Exception ex) {
  }
}
Yes but we do need to reinitialize services. Otherwise transition to active when everything is fine will not happen.
Hi Xuan Gong
Yes, we can do that. But currently we call rmContext.setHAServiceState(HAServiceProtocol.HAServiceState.STANDBY);
as the last statement in transitionToStandby. So if an exception happens in reinitialize, the code flow won't reach the point of setting the state to Standby. So we may also need to set the state in the context to Standby.
How about first calling rm.transitionToStandby(false), then call activeService.reinitiate() (probably need to create this function) ? At least, the RM will transit to Standby. Even if the reinitiate() throws exception, the leader elector will handle this.
Hi Varun Saxena
Reinitialization of Active Services is required.
For this, I think calling rm.transitionToStandby(true) is not a good idea. Because same exception can come while initializing CapacityScheduler (cs config file).
Reinitialization of Active Services is required. When you call stop active services, service state for all the services will change to STOPPED.
If this RM were to become active again, we will try to start all the active services and services cant transition to START state from STOPPED state. They can only do so when services are in INIT state.
Varun Saxena and Sunil G: we only need to call rm.transitionToStandby(false) on exception, since it handles the transition to standby in the RM context, stops active services, and does not reinitialize queues.
Thanks Varun Saxena and Sunil G, the first option looks good and easier to implement.
But both RMs could end up in the standby state; still, it looks like the best option.
Yeah lets go with first option I suggested then i.e. make RM Context as standby and stop active services followed by initialization. That will be easier to implement.
This will resolve the issue.
Thanks Varun Saxena for sharing the detailed analysis. In fact, we must change the state in the context.
IMO, I feel we can stop active services, and move the RM state to Standby. With this, RM will become another candidate for election. If any case when the same RM is selected as active, and if we have good config, then with existing call flow startActiveServices will be invoked. So it should be fine in that case. From UI also, both RM will be shown as Standby too.
For 2nd option, we will have to return STANDBY to client if the state is WAITING_FOR_ACTIVE. So it can primarily be a RM internal state.
Sunil G
We can do the cleanup(i.e. stop active services) when we switch to standby. We do this already. Also cleanup will be done when we stop RM. So this shouldn't be an issue.
What is happening is as under :
Let us assume there is RM1 and RM2.
Basically, when exception occurs, RM1 waits for RM2 to become active and joins leader election again. As both RMs' have wrong configuration, RM1 will try to become active again(and not switch to standby) after RM2 has tried the same.
Now, as the problem is in call to refreshAll, both RMs' would be marked as ACTIVE in their respective RM Contexts. Because we set it to ACTIVE before calling refreshAll.
The problem reported here is that RM is shown as Active when it is not actually ACTIVE i.e. UI is accessible and getServiceState returns both RM as Active. And when we access UI or get service state we check what's the state in RM Context. And that is ACTIVE.
So for anyone who is accessing RM from command line or via UI, RM is active(because RM context says so), when it is not really active. Both RMs' are just trying incessantly to become active and failing.
That is why I suggested that we can update the RM Context. Infact changing RM context is necessary. We can decide when to stop active services, if at all.
So there are 2 options :
- We can set RM context to standby when exception occurs and stop active services. But if we do it, this would mean we will have to redo the work of starting active services again if this RM were to become ACTIVE.
- Introduce a new state (say WAITING_FOR_ACTIVE) and set this state when exception is thrown and check this state to stop active services when switching to standby. And not starting the services again in case of switching to ACTIVE.
Thoughts, Sunil G, Xuan Gong ?
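The second option could look roughly like this. This is a hypothetical sketch only (Hadoop's real HAServiceState enum has no WAITING_FOR_ACTIVE value, and these class names do not exist in the codebase); the point is that the internal waiting state is reported to clients as STANDBY, so neither the UI nor getServiceState would show a not-really-active RM as Active:

```java
// Hypothetical internal state machine; the names are illustrative only.
enum RmHaState { ACTIVE, STANDBY, WAITING_FOR_ACTIVE }

public class WaitingStateSketch {
    private RmHaState state = RmHaState.STANDBY;

    // refreshAll failed during activation: park in the internal waiting
    // state instead of leaving the RM Context marked ACTIVE.
    public void onRefreshFailure() { state = RmHaState.WAITING_FOR_ACTIVE; }

    public void onBecameActive() { state = RmHaState.ACTIVE; }

    // What the UI / getServiceState would see: the internal state is hidden.
    public RmHaState reportedState() {
        return state == RmHaState.WAITING_FOR_ACTIVE ? RmHaState.STANDBY : state;
    }
}
```

The hidden state also tells transitionToStandby that active services need stopping, and tells transitionToActive that starting services can be skipped or redone as appropriate.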
refreshAll() is doing many set of refresh operations. And exception may come from any state. Its better to gracefully close those. So setting state directly wont help much, we may need to go through part of transitionToStandby.
Maybe set the HA service state in RM context as STANDBY upon throwing the exception. Or not set it to ACTIVE till the all active services are actually started.
We primarily check RM context to make the decision about whether RM is in standby state or active.
Hi Xuan Gong
Thank you for the update. I have a doubt here.
If we call rm.transitionToStandby(true) , then it will result a call to ResourceManager#createAndInitActiveServices().
So is it possible that we may get the same exception which we got from refreshAll call earlier. Specifically queue reinitialize. Currently the CS#serviceInit will call parseQueues. As mentioned here, Bibin A Chundatt used a wrong CS xml file.
How about add rm.transitionToStandby(true) before we throw the ServiceFailedException in catch block ?
try {
  rm.transitionToActive();
  // call all refresh*s for active RM to get the updated configurations.
  refreshAll();
  ...
}
In that case, we could transit the RM to standby, and since we throw out the ServiceFailedException, this RM will rejoin the leader election process.
Thanks Sunil G for checking the issue. In this JIRA we should decide how to handle refreshAll() failure during the transition to active. Configuration mistakes (capacity-scheduler.xml, ACLs, user-group mapping) can cause the both-RMs-active case during a switch, probably combined with a ZK connection error.
At runtime I am not sure we will be able to recover once this happens.
Capacity scheduler causing this case is one of them.
YARN-3894 contains the CS xml.
Thank you Bibin A Chundatt. Could you please attach CS xml too.
Updated the description, since this can happen in many cases. Please do correct me if I am wrong.
Thanks for reporting this. Could you share the YARN configurations, please ? Bibin A Chundatt
FAILURE: Integrated in Hadoop-trunk-Commit #9060
Add YARN-2975, YARN-3893, YARN-2902 and YARN-4354 to Release 2.6.4 entry (junping_du: rev b6c9d3fab9c76b03abd664858f64a4ebf3c2bb20)
Ext.Direct and ColdFusion 9.0.1
EDIT: This problem has been fixed with CF 9.0.1 hotfix 1. Installing it should solve the problem.
Original post:
Due to the ridiculous decision by Adobe to change ColdFusion's serializeJSON() function to serialize all data as strings in 9.0.1 (instead of numbers when applicable), Ext.Direct breaks.
What happens is that callbacks won't be executed at all if a decimal is being added to 'tid' in the router (ex: changed from "1" to "1.0"), and even if you cast that back to an int (to bring it back to "1"), callbacks won't be executed in the correct scope due to the string 'len' in the method descriptions of the API.
Here is an override that should restore the correct Ext.Direct functionality. Works with Ext 3.2.1.
Code:
Ext.override( Ext.direct.RemotingProvider, {
    initAPI : function(){
        var o = this.actions;
        for(var c in o){
            var cls = this.namespace[c] || (this.namespace[c] = {}),
                ms = o[c];
            for(var i = 0, len = ms.length; i < len; i++){
                var m = ms[i];
                m.len = +m.len; // Make sure len is a number. Using unary + to coerce
                cls[m.name] = this.createMethod(c, m);
            }
        }
    },
    getTransaction: function(opt){
        return opt && opt.tid ? Ext.Direct.getTransaction( +opt.tid ) : null; // unary + to coerce to number
    }
} );
Code:
<link rel="stylesheet" type="text/css" href="ext-3.2.1/resources/css/ext-all.css" />
<script type="text/javascript" src="ext-3.2.1/adapter/ext/ext-base.js"></script>
<script type="text/javascript" src="ext-3.2.1/ext-all.js"></script>
<script type="text/javascript">
    // Include Override Here
</script>
<script type="text/javascript" src="api.cfm"></script>
<script type="text/javascript">
    Ext.Direct.addProvider( Ext.ss.APIDesc );
</script>
Hope this helps someone in need.
-Greg
Last edited by Gjslick; 30 Sep 2010 at 8:40 AM. Reason: Added link to hotfix which should solve the problem.
Where do I put this? We also just upgraded to 9.0.1 and all of my grids are showing a couple of rows and then they're empty.
Hmm, I'm surprised that your grids are even showing a couple of rows. My grids were all just coming up blank. It might be a different problem in your app.
But what you would do is put the override after you include the Ext files onto your page (after the <script> tags), but before you add the API description to Ext.Direct.addProvider(). I'll update my original post with some instructions actually.
If you're still having trouble though, post again and I might have a few answers for ya. Been working with this for a little while now :) Otherwise, hope the override helps.
-Greg
Yeah, that didn't help. Before the upgrade to 9.0.1, all the json responses were good, now they're coming back with only the first two objects, the rest as strings.
For example, here is the response I get back from one of my calls:
{"success":true,"isCached":false,"dateGenerated":"August, 26 2010 17:34:03","results":"19","retrievalTime":0,"data":[{"description":"","roleId":"17","importance":"0","title":"academicAdministrator"},{"description":"Admin Editor","roleId":"5","importance":"100","title":"ADMIN_EDITOR"},"","","","","","","","","","","","","","","","",""]}
Ah, yeah, ok, then this override wouldn't have much to do with your particular problem! I was actually going to ask if your responses are still coming back the same. I'm not even sure what would cause just the first two objects to be built and the rest to come back as empty strings...
How are you creating this data? Are you the using a query or the ORM and serializeJSON()? Is it possible for you to post some code?
Yes, it's ORM with serializeJSON. I have read a couple of people having problems with that now with 9.0.1. Do you have any idea why? I'll have to post some code tomorrow.
The only problem that I've really ran into so far is when using inheritance mapping with the ORM. In 9.0.0, using serializeJSON() (or cfdump) would show all of the properties of both the base class, and the subclass, for a particular object. In 9.0.1, they only show the properties of the subclass. I don't know if that would be your problem though, because serializeJSON() should at least still be placing curly braces into the json for all of the objects, instead of just empty strings.
Are all of the objects that are coming back from the ORM query correct inside of ColdFusion itself? Try cfdump'ing the result of the query or entityLoad, and see if everything is there. I'm wondering if it is a problem with the ORM retrieval of the objects themselves, or a problem with serializeJSON().
-Greg
FYI, this hot fix fixed the problem:
Specifically the last bug fix: "serializeJSON incorrectly serializes nested objects."
Hey, thanks for that update. I didn't even realize that there was a hotfix out now because I actually ended up downgrading back to 9.0.0 because of that numbers->strings issue. And it looks the last bug that was fixed was your problem :) I'll update my original post and add the link. Thanks again.
-Greg
Similar Threads
Simple Ext.Direct-Combobox plugin --- with Ext.Direct.Store for reuseBy xp743 in forum Ext 3.x: User Extensions and PluginsReplies: 1Last Post: 26 Jul 2010, 11:56 AM
populate ext.form.combobox (ext 2.) from an sql database using coldfusion 9By sarahmfr in forum Ext 3.x: Help & DiscussionReplies: 0Last Post: 25 May 2010, 1:27 PM
Ext JS 3.0 Grid with ColdFusion 8/9By kumarshah in forum Community DiscussionReplies: 4Last Post: 20 Jul 2009, 2:26 PM
Ext.Ajax and ColdFusionBy captdan78 in forum Ext 2.x: Help & DiscussionReplies: 2Last Post: 10 Jul 2008, 1:07 PM
Coldfusion & Ext 2.0By dawesi in forum Community DiscussionReplies: 3Last Post: 18 Dec 2007, 8:21 AM | https://www.sencha.com/forum/showthread.php?108022-Ext.Direct-and-ColdFusion-9.0.1 | CC-MAIN-2015-18 | refinedweb | 1,006 | 68.06 |
Slashdot Log In
PHP5: Could PHP Soon Be Owned by Sun?
At first glance, the obvious changes to PHP are a result of the success of the Java platform and the weaknesses of PHP revealed in comparison. With the release of PHP 5, it's apparent that the developers of PHP and the Zend Engine (essentially a single group) felt compelled to make PHP much more like Java with respect to object-oriented programming. Zend, the Israeli company backing the development of PHP, promises on their web site that "full integration with Java will be closer than ever before." Hmmm, full integration with Java, huh?
On November 4th, 2003, Zend reveals the absolute cloning of the Java object model within PHP 5. From throwing exceptions to static variables, PHP 5's object model mimics Java in concept all the way to the syntactical level.
This is great for enterprise developers using Sun products, but with the release of PHP 5, what does this mean for the half-million PHP developers worldwide who have depended on PHP for open-source development,?
On the positive side, this edition of PHP does bring improved performance and a new suite of MySQL functions. Backward incompatibility is limited to a list of ten issues. Additionally, there are only minor configuration file changes that need to be made to the web server. Several directives have been introduced for configuring php.ini files, mainly dealing with hashes for encryption., developers "there.
PHP overview at K5 (Score:4, Informative)
PHP also not an ASF project any longer (Score:5, Informative)
()
Don't know if this is really relevant, but as is noted in the Section 5.G of Feb 2004 ASF Board meeting minutes [apache.org], the PHP project is terminated and rights for PHP will be tranfserred to the PHP group.
Fork it (Score:5, Insightful)
(Last Journal: Monday February 23 2004, @04:55PM)
See that's why Open Source is different than proprietary software. It's not just another choice, it's fundamentally DIFFERENT. Nobody can take the software and force it down a direction you don't like because you and like-minded individuals can take it in the direction you like.
Re:Fork it (Score:4, Insightful)
Seriously consider the differences between say, Microsoft forking HTML, and GNU forking ANSI C. I know that the Linux kernel can pretty much only be compiled by gcc since the kernel depends on gcc proprietary extensions, yet feel outraged that a company dare to do the same to a (wildly) popular markup language.
Educational. (Score:5, Interesting)
()
This is the beauty of open source. It defies this kind of corporate grab.
Not sure I agree (Score:5, Interesting)
( | Last Journal: Saturday February 19 2005, @07:01PM)
"There are private and protected members and methods, abstract classes and interfaces, in practice, identical to Java's object model."
A ton of languages treat classes like this. This is really pretty standard. The underpinnings of the way PHP handles classes may be like Java, which makes sense because Java does it pretty well, but as far as the developer is concerned, it's just like a host of other languages.
"companies whose coffers are already overflowing"
Sun's coffers are not exactly overflowing
"Java became successful for a reason: it's intelligently designed and facilitates code reuse."
exactly. why shouldn't php do the same?
"Instead of passing the actual object itself, PHP's object model passes by reference"
This has been deprecated for some time - most PHP developers knew this was coming and had php.ini configured to do this by default already. This has nothing to do with my point, but is an interesting side note.
"if PHP is a developer's primary language and he or she hasn't been introduced to the world of static variables, public and private methods"
oh come on. This is CS 101 stuff...how many serious PHP developers could there be who don't know that stuff?
not quite. (Score:5, Insightful)
As for your final comments--all too many PHP developers don't know "CS 101 stuff", serious or no. Also, I know that when I first learned about the OO methodology, it was quite confusing. Now that I know more about it, I'm convinced that there's a lot there to be avoided, and all of it should be carefully considered.
Fortunately, (like the crippled "object system" in PHP 4) if you don't want to use it, you still don't have to use it.
PHP and MySQL? (Score:4, Interesting)
()
Maybe you don't realize this, but PHP supports quite a range of database products. The fact that it seemed to favor MySQL over the rest didn't really help anything; it just made more people use a product they wouldn't necessarily have chosen on its own merits alone, and directed more programming/bug-fixing toward that one product. Postgresql or Firebird, SAP or Oracle
And as to MySQL, remember it's not as free as the rest. Like Qt and MySQL (Trolltech and MySQL AB) are both using dual-licensing to make their products "free" for some use, but not for others. MySQL's client libraries are GPL rather than LGPL, which makes using them for corporate projects less
Re:PHP and MySQL? (Score:4, Informative)
()
This is complete FUD. It doesn't matter that PHP uses the GPL'd MySQL client library. Code running under the PHP interpreter is not affected by the GPL.
Additionally, GPL incompatible applications can use the GPL'd MySQL client. They simply cannot statically link with it or distribute the client library. The user of the application would have to provide the library. Dynamically linking to a library does not cause any (copyrighted) code to be copied into the application. You have never needed a license to use a shared library.
Zend vs Rasmus (Score:4, Interesting)
( | Last Journal: Wednesday February 11 2004, @12:21PM)
The PHP group is 9 guys across the globe. Zend is a strong force and helpful. I like the syntax changes that make PHP more like Java, but I don't want to see any company own PHP.
Luckily, it's not gonna happen.
-Jackson [jaxn.org]
Yeah, Right! (Score:2, Funny)
Besides, you missed the real threat: Given the similarity between PHP 5 & C# object models, it's obvious the evil empire is trying to take over PHP (or maybe Sun is trying to control C# as well).
Evil Plot? (Score:4, Interesting)?
A - Which of these companies have overflowing coffers?
B - It's open source. If someone wants to contribute then they can contribute. If someone wants to profit then they can attempt to profit. I don't see why a company that contributes shouldn't have the opportunity to profit somehow.
C - Nothing says that PHP can't be forked back towards the little web scripting engine that was once PHP and PHP/FI before that.
Too many futures (Score:4, Interesting)
It's nice to know where the OO model comes from, gives it more credence
Is talking about Sun, Macromedia and MySQL horning in on the action is like chicken little proclaiming the sky is falling
Food for thought
p.s. I work for a company that produces commercial tools for PHP development
PHP5 from Java? (Score:2, Funny)
That and a dash of paranoia.
Growing a Language (Score:5, Insightful)
I like the approach Python has taken. Everything is kept clean and simple, and the complexity is added through importing modules. Need another function? Import it! I guess that's why Python is said to "fit in your head".
I'd better stop before I start a flame-war. The point I wanted to make is PHP and Java will both probably collapse under their own weight, and another simpler language will take their place. If the plan is the grow PHP into Java, then there will be tonnes of books needed to reference everything, which is good if you want to sell books, but bad if you want to write programs without having to constantly look something up.
It seems to me that a programming language needs to plan for growth before it starts, otherwise it grows and gobbles up the mental resources of the programmers using it. Once it's too big, people will just fall back on simpler tools.
cruft (Score:5, Interesting)
()
now don't get me wrong, i'm not bashing php. i use php all the time and it is a pretty straightforward tool and quite easy to pick up. the inevitable problem with trying to reform a language is that you need to "break" it in order to fix it
Re:cruft (Score:5, Interesting)
()
Re:cruft (Score:4, Insightful)
In the movie "City Slickers" Jack Palance's character quips that the secret to life is just one thing, and once you know what that one thing is, everything else makes sense. I'm beginning to think that programming languages are the same way. The "one thing" about Visual Basic was introducing components. Perl's one best thing is powerful reporting capabilities. Python's contribution is namespace (just type 'import this' into the interpreter for an easter egg's explanation).
PHP's "one great thing" seems to be initial ease of use. It's dead simple to install, the php website's documentation is second-to-none, and it's relatively painless to cut-and-paste code inside HTML to make stuff work. My problem, however, is the same complaint I have with the Windows operating system: PHP is impossible to master, because it's becoming too broad with too many functions and too many special cases.
According to [tnx.nl] there are 3079 core functions in PHP4 (as of november 2003), compared to 206 in perl.
3079? That's just seems insane to me.
duh (Score:5, Insightful)
( | Last Journal: Wednesday January 21 2004, @08:36PM)
Go FUD yourself (Score:5, Insightful)
( | Last Journal: Wednesday November 24 2004, @02:50AM)
So, Zend was good, and SUN & Co are bad now? (Score:2)
()
As far as I understand from your review, PHP development was directed not by a non-profit (think Apache Foundation), but by a business (think MySQL and Zope) for a while now, and this was OK from your POV until some bigger businesses offered investment in Zend. Can you reasonably justify (at least to yourself) why Zend was OK and "Sun, MySQL, Borland and Macromedia" are bad-bad-bad corporations? Especially since MySQL could not be MUCH bigger, right?
If there were a Microsoft connection, I might don on my tinfoil hat (not just because they are BIG, but because of their monopolistic practices).
Paul B.
Corporate Shill Indeed... (Score:1)
()
It seems to me that the situation he's describing here is much like RedHat 'shilling' Linux for profit. Now, of course, I recognize that Linus et al aren't part of RedHat, but the end effect is the same, really, just further down the timeline.
PHP is at the state right now where Linux was before people started forking off various distros.
I'm not particularly concerned that PHP will become 'commercialized' any more than I am that Perl is 'commercialized' by ActiveState. The language and the community is more than the sum of a few key people, or even companies involved. The community around it is the main strength, and it's that ubiquitiousness that's the core of PHP, not the language itself.
No Such Evidence (Score:3, Insightful)
( | Last Journal: Saturday March 15 2003, @01:22PM)
For years I have been asking for concrete examples of OO producing code reuse in the business domain, but have yet to see a convincing example. OO does NOT have objectively demonstratable magic properties (except in a few narrow conditions that I don't encounter very often).
Some people pick OO because they personally like it, NOT because it is objectively better. Enough with the OO hype.
Macromedia? (Score:1)
ColdFusion, is a product of Macromedia. Is that the deal hiring advisors from Macromedia, or advisor itself is Macromedia?
Amazing similarities (Score:5, Funny)
does OSS help corporations or individuals most? (Score:1)
(Last Journal: Thursday November 23 2006, @02:30PM)
Especially those on the wrong side of the tracks, or the border lines, affected by the "digital divide".
I tried installing Linux for internet access on an old P5, just the way it was, without changing any hardware.
Winmodems are what the poor have. They don't work on Linux.
Ethernet is what corporations have. It works.
One more clone? (Score:1)
Re:PHP needs to be re-implemented under GPL or BSD (Score:5, Informative)
And the PHP license you are quoting is old. Look at. I hope you trust urls to php.net
Andi Gutmans | http://developers.slashdot.org/developers/04/08/05/1915255.shtml | crawl-001 | refinedweb | 2,152 | 62.48 |
Attribute Routing in ASP.NET MVC
Introduction:
Attribute Routing is an
important feature of MVC web applications. Routing enables the use of URL that
are described by user action and more understood by users. We can hide
information which is not shown to final users in a URL bar.
Routing is how my application matches a URI to an action. A URI is a combination of URL (Uniform Recourse Locator) and URN (Uniform Recourse Name). MVC supports a new type of routing, called attribute routing. As the name implies, attribute routing uses attributes to define routes. Attribute routing gives to more control over the URIs in our web application.
A route is a URL pattern that is mapped to a handler. The handler can be a physical file, such as an .aspx file in a Web Forms application. Routing module is responsible for mapping incoming browser requests to particular MVC controller actions.
1. URL Keep Clean:1. URL Keep Clean:
For example, we are using conventional URL structure are ‘? studentId=10&view=details’
But in MVC, we are defining a URL Structure much cleaner and understood by
End user as like this: - ‘https: // Students /details/10’
2. For URLs discoverable by end-users.2. For URLs discoverable by end-users.
3. We have avoid database IDs in URL.3. We have avoid database IDs in URL.
4. Clean URLs is good for SEO. And various one.4. Clean URLs is good for SEO. And various one.
By default, we are given a route that will probably match most, if not all of your routes, and are asked to define if there are any more specific routes we would also like to handle.
In below are a by default, routes might look like this:
public class RouteConfig
{
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(routes.MapRoute(
name: "Default",
url: "{controller}/{action}/{id}",
defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);
}
}
In above, routes is a Route table of type RouteCollection type, i.e. is stored all possible routes of the given URL pattern and specified controller in the application. If we want to add new route then we are using MapRoute that’s added the route in the Route table. IgnoreRoute is ignored URL structure that is not matched with specified one. We have also defined a Optional URI Parameter with the using a question mark (?). For Example, Route (“home/ {id?}”) in controller side action.
Defining Routing in ASP.NET MVC:
We can define routes at controller level, which apply to all actions within the controller unless a specific route is added to an action.We can define routes at controller level, which apply to all actions within the controller unless a specific route is added to an action.
Route Constraints:
We restrict how the parameters in the route template are matched. For example:We restrict how the parameters in the route template are matched. For example:
I have shown how to search certain student by their id. Now, let's say we also want the end user to search student byname.
Student/3 render view with id
Student/Anupam render view with Name
For byname, we can add another ActionResult which receives a string parameter. But it is not sufficient find by name so since we can explicitly define a route for diverting to by name action.For byname, we can add another ActionResult which receives a string parameter. But it is not sufficient find by name so since we can explicitly define a route for diverting to by name action.
[Route("Student/{id:int}"]
public Student GetStudentById(int id) { }
[Route("Student/{name}"]
public Student GetStudentByName(string name) { }
Here, the first one is only applicable when the "id" segment of the URI is an integer. Otherwise, the second one will be chosen.
What is better?What is better?
In the developer opinion, Attribute Routing is better vs Convectional Routing, because it will give a more flexible over conventional. But, if we want to better security and routing parameter in controller level as well as route table both are playing individually an important role, so in this case, the decision is yours.
Additional Resources: | https://www.mindstick.com/Beginner/11998/attribute-routing-in-asp-dot-net-mvc | CC-MAIN-2017-34 | refinedweb | 700 | 66.74 |
Advanced Namespace Tools blog
17 December 2017
Updating ANTS via diff/patch scripts
I had another longer than intended break from ANTS/plan 9 development, so that means another round of updating systems and updating ANTS to build, install, and work with latest 9front. I had already put a bit of effort into making this process easier, by preparing a script that generates a set of diff files of ANTS modifications from the original versions. The next step is creating a script that uses those diff files to automatically update the current 9front version with the ANTS modifications.
This type of procedure is not, of course, fully reliable. Since diff/patch doesn't actually understand source code in any way, even if it can patch files successfully, there is no guarantee things will compile, or if they compile, that they will work properly. An automated process like this cannot substitute for the work of understanding changes to 9front and how they interact with the ANTS modifications. It can, however, automate some trivial busywork and serve as a starting point.
Updating part 1: find errors by attempting a build
I start in the simplest possible fashion: just trying to build ANTS against the current 9front tree. Unsurprisingly, this grinds to a halt amidst a slew of errors.
getqlp: incompatible type signatures 111c6c2b(util.8) and a3ca55bd(/386/lib/libc.a(qlock)) for qlock qlock: incompatible type signatures 111c6c2b(util.8) and a3ca55bd(/386/lib/libc.a(qlock)) for qunlock
Interestingly, this error is a bit different than what I was expecting - it is a linking error. What is going on is that ANTS now has a slightly modified libc needed to support the new rfork flag used by private srv namespaces. There is some poor sequencing in the current build script, where that libc is only copied into the main system at the point that the modified version of rc is built. That means everything compiled after that modification will be in conflict with the pre-existing compiled libraries. The solution is to either move the modified libc in/out for building the modified rc, or force a full library rebuild. The former solution is certianly more minimal, but in theory the new rfork flag should be available to any application that wants to use it, so the latter makes more sense overall, and is what I chose to do.
Updating part 2: creating a patching script
Because the diffing script already exists, making a patching script is mostly a matter of using it as a template and performing the corresponding actions. The desired flow is:
copy current patched versions such as foo.c to foo.c.old copy the current system files from /sys/src into the ants patched directory apply the diffs stored in the diffs directory to the current versions
This was all just some snarf/paste/sed/tweak munging of no particular interest. The main issue that cropped up trying to run the script was that the kernel formerly known as 'pcf' is now known as 'pc' so the filenames need to be changed to reflect this, along with references to 9pcf in the build script. Next, I'll make an attempt to build using the newly patched files.
Success! At building a modified 9pc kernel, at least, the real tests will be rebooting and checking functionality.
Updating part 3: testing and debugging
The reboot was more or less successful. There was an error printed in early bootup about xd not existing, but other than that things have been working as expected. Since the error was printed during the portion of bootup where drive partitions are being enumerated, I suspect it has to do with the difference between the original (slightly modified) bootup script and the ANTS plan9rc script. Adding 'bootcmd=plan9rc' to the plan9.ini fixed the error, but leads me to believe that the 'no xd' errors are a pre-existing bug in what ANTS provide to the early boot environment and the assumptions of the standard bootscript.
After fixing the 'no xd' error, I also noticed that the modified bootrc script (as opposed to the standard ANTS plan9rc script) has an outdated use of p9sk1 rather than dp9ik for the early namespace key. Another simple fix.
Apart from that, not much has needed work. A manual inspection of the diffs files to make sure they looked sane revealed that I had left an accidental reference to the old pcf kernel config file in, so that generated another fix and rerun of the diffing script. Given how much time I used to invest in laboriously snarf/pasting my modifications into new 9front versions, I'm a bit chagrined that I hadn't created a semi-automated diff/patch updating system previously. Despite that fact that some supervision and double-checking is necessary, it is probably a lot less error prone than doing it all by hand.
The mycroftiv/antsexperiments bitbucket repo is now set up to build properly against 9front revision 6268. All that remains to be done is deciding exactly how the website should be updated. I generally try to sync the plan9ants repository to the most recent 9front iso image, and let antsexperiments diverge from that and/or track hg tip, but it seems like unnecessary work to create a slightly different version based on an iso which is older than the hg tip I was using as my update basis. Perhaps I should just change the instructions to tell the user to install from the iso and update to 9front tip. | http://doc.9gridchan.org/blog/171217.update.patch | CC-MAIN-2021-21 | refinedweb | 929 | 57.91 |
During Build 2015 Microsoft announced a bunch of new tools aimed at helping developers build cross platform applications. Amongst the announcements, they let us know that ASP.NET was now available and ready to run on Mac and Linux natively.
Up until this point there have been a few different ways to get .NET applications running on Unix systems, but none of them were truly native or supported by Microsoft.
With this announcement and the release of Visual Studio Code—Microsoft’s cross platform development tool—you can now develop cross platform .NET applications on your favourite operating system.
Today I will show you how to get started with setting up your .NET development environment on a Mac running Yosemite, and show you how to build a console application and an ASP.NET MVC 6 call log application using Visual Studio Code and ASP.NET 5.
Feel free to download all the code from the GitHub repository if all you want to do is set up your local environment without worrying about writing all the application code.
Our tools
- A Mac computer running Yosemite or above
- Homebrew package manager
- Node.js
- A Twilio Account and a Twilio Phone Number – Sign up for free!
Setup
To get started we need to make sure all the necessary tools are installed. If you are running on a Mac and still don’t have Homebrew installed you’re missing out, so download and install it from brew.sh.
Once homebrew is installed we can go ahead and download .NET Version Manager. This will install everything we need in order to run .NET as well as some useful tools we will talk about briefly.
brew tap aspnet/dnx
brew update
brew install dnvm
When the installation completes the following will have been installed for you:
- DNVM (.NET Version Manager): is a set of command line instructions which allow you to configure your .NET Runtime. We can use it to specify which version of the .NET Execution Framework to use.
- DNX (.NET Execution Framework): is the runtime environment for creating .NET applications for Windows, Mac and Linux.
- DNU (.NET Development Utility): is a set of command line tools that allow us to build, package and publish projects created with DNX.
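To sketch how these three tools fit together once everything is installed, here are some typical commands (illustrative only; each tool prints its full usage when run without arguments):

```
# dnvm: manage the runtime itself
dnvm list              # show installed DNX runtimes
dnvm use default       # activate a runtime for the current shell

# dnu: manage a project's packages
dnu restore            # download the dependencies listed in project.json
dnu build              # compile the project

# dnx: execute the project
dnx run                # run the current console application
```

We will use `dnu restore` and `dnx run` later in this tutorial.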
Make sure that all DNVM goodies are available on your terminal instance by running source dnvm.sh. For any future instances just run the following to permanently add it to your bash profile or equivalent according to your environment.
echo "source dnvm.sh" >> ~/.bash_profile
Let’s go ahead and check that DNVM is actually installed by running the following:
dnvm
If you see a screen like the above you know you have DNVM properly installed, so let’s install the latest version of DNX.
dnvm upgrade
We know our .NET Virtual Environment is now installed, so it is time to install Yeoman. Yeoman is a command line application that generates project scaffolds. It offers generators for a plethora of programming languages including .NET.
To install Yeoman, first we need to make sure we have Node.js installed. You can check that by running the following in the terminal. You should see your Node.js version if it's already installed.
node -v
If you don’t have it installed the good news is you can also install it using homebrew by issuing the following terminal command.
brew install node
Once Node.js is installed, use npm to install Yeoman globally:

npm install -g yo
We now need to make sure we have the Yeoman .NET generator downloaded by running:
npm install -g generator-aspnet
There is just one more thing we need to do, which is install a sweet IDE that will give us all the awesome functionality we get from Visual Studio, but now on a Mac.
Go ahead and download and install Visual Studio Code. You can find more information about extra setup functionalities here.
Building a console application
With all of the necessary tools we installed let’s build a command line application to make sure everything works as expected.
Start by going back to your terminal and running:
yo aspnet
It will prompt you to choose a project type and enter a project name. Choose Console Application and call it TwilioCallLogConsole. When you press enter, Yeoman will scaffold a console application project for you. You don’t need to run any of the other commands suggested on the screen at this point.
Open up Visual studio Code and choose File > Open, and select the folder where your project was created. Or if you followed the extra instructions from the Visual Studio Code website you can just run the following:
cd TwilioCallLogConsole && code .
When it finishes loading, you will notice a notification appear at the top of Visual Studio Code telling you about unresolved dependencies. Ignore it for now; open the file project.json and add a dependency on the Twilio .NET library:
"dependencies": { "Twilio": "4.0.3" },
Go ahead and click the Restore button and all the dependencies will be installed.
Once that completes we can modify our application to fetch data with Twilio’s REST API.
Open Program.cs and add a reference to the Twilio library at the top.
using Twilio;
In that same file change the Main method to the following:
public void Main(string[] args)
{
    // Instantiate a new Twilio Rest Client
    var client = new TwilioRestClient("your-twilio-account-sid", "your-twilio-auth-token");

    // Select all calls from my account
    var calls = client.ListCalls(new CallListRequest());

    // Check for any exceptions
    if (calls.RestException != null)
    {
        throw new FormatException(calls.RestException.Message);
    }

    // Loop through them and show information
    foreach (var call in calls.Calls)
    {
        var detail = String.Format("From: {0}, Day: {1}, Duration: {2}s",
            call.From, call.DateCreated, call.Duration);
        Console.WriteLine(detail);
    }
}
Don't forget to replace the account sid and auth token with the values from your account dashboard. We're creating a new Twilio REST client, listing the calls, and looping through each one of them to show who started it, when it was created, and how long it took.
Save this, and back on the terminal type:
dnx run
You can also do this straight from Visual Studio Code by hitting ⇧⌘P and typing dnx run. A new Terminal instance should open and run the application for you.
Congratulations, you have just built your first .NET command line application on a Mac and the setup was much easier than it would have been on Windows.
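If you don't have any call history yet and just want to sanity-check the formatting logic, you can run the same loop over an in-memory stand-in. The FakeCall type below is purely illustrative; the real Call objects come back from client.ListCalls:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for Twilio.Call, for illustration only
class FakeCall
{
    public string From { get; set; }
    public DateTime DateCreated { get; set; }
    public string Duration { get; set; }
}

public class Program
{
    // Instance Main works here because DNX allows non-static entry points
    public void Main(string[] args)
    {
        var calls = new List<FakeCall>
        {
            new FakeCall { From = "+14155550100", DateCreated = new DateTime(2015, 5, 1), Duration = "42" }
        };

        foreach (var call in calls)
        {
            var detail = String.Format("From: {0}, Day: {1}, Duration: {2}s",
                call.From, call.DateCreated, call.Duration);
            Console.WriteLine(detail);
        }
    }
}
```

Running this with dnx run prints one line per fake call, using the same String.Format pattern as the real application.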
As you saw, it's pretty straightforward to build command line applications on a Mac with .NET, but how about building ASP.NET MVC apps? Stay with me as we're just about to do that.
Building a .NET MVC application
Back on the terminal let’s get Yeoman to scaffold a new application of type Web Application Basic [without Membership and Authorization] and call it TwilioCallLogMVC.
Now that the application layout has been scaffolded, open it up with Visual Studio Code. Before we do anything we need to make sure all the dependencies are installed, so hold ⇧⌘P and type dnu restore.
Once all packages have been restored hold ⇧⌘P again and type dnx web to start your local webserver. In your browser you can now go to the address configured in the generated project.json's web command (http://localhost:5000 by default).
We have a basic ASP .NET MVC application running on a Mac but let’s add some extra functionality to it and reproduce our command line application with it.
Open up project.json and add a Twilio dependency to it. Notice Visual Studio Code finds the dependency for you automatically as soon as you start typing it.
Once that’s done, make sure you run dnu restore again so the dependency is downloaded.
Open Controllers/HomeController.cs and add a reference to the Twilio library at the top.
using Twilio;
Change the Index endpoint to accept a string called phoneNumber. Then add the code needed to interact with the Twilio Rest API. Also don’t forget to replace the account sid and auth token with the values from your account dashboard.
public IActionResult Index(string phoneNumber) { // Instantiate a new Twilio Rest Client var client = new TwilioRestClient("your-twilio-account-sid", "your-twilio-auth-token"); // Select all calls from my account based on a phoneNumber var calls = client.ListCalls(new CallListRequest(){To = phoneNumber}); // Check for any exceptions if (calls.RestException != null) { throw new FormatException(calls.RestException.Message); } return View(calls.Calls); }
We have done one different thing here which is allowing for filtering the results. Now we need to modify the view so it knows how to display information about our calls.
To do that we will bind the view to the Twilio.Call model and show the user a form where they can enter a telephone number to do the filtering.
Open Views/Home/Index.cshtml and replace its contents with the following change:
> } </div>
This will show you a form where you can type a telephone number but we don’t have a way to display the results yet. Let’s change that by adding a table and a loop to go through the results and show one row per entry.
> } <br><br> <table class="table"> <tr> <th> @Html.DisplayNameFor(model => model.To) <th> @Html.DisplayNameFor(model => model.From) </th> <th> @Html.DisplayNameFor(model => model.DateCreated) </th> <th> @Html.DisplayNameFor(model => model.Duration) </th> </tr> @foreach (var item in Model) { <tr> <td> @Html.DisplayFor(modelItem => item.To) </td> <td> @Html.DisplayFor(modelItem => item.From) </td> <td> @Html.DisplayFor(modelItem => item.DateCreated) </td> <td> @Html.DisplayFor(modelItem => item.Duration)s </td> </tr> } </table> </div>
If you now run this again in your browser you should see a form asking you to enter a phone number and some data showing all your call logs. If you enter a phone number you own on the form and press search it will then filter the table to only return entries for that number.
All you need at your fingertips
This is how easy it can be to build .NET applications on a Mac or any other Unix environment. Even though the applications we’ve just built are fairly simple, it was fun and pleasant to build and run them on a Mac.
How about trying to run your existing applications on a Mac and seeing if they’re already cross platform? Chances are you are already closer than you think.
I would love to see what you come up with. Hit me up on Twitter @marcos_placona or by email on marcos@twilio.com to tell me more about it. | https://www.twilio.com/blog/2015/08/getting-started-with-asp-net-5-and-visual-studio-code-on-a-mac.html | CC-MAIN-2020-45 | refinedweb | 1,746 | 65.83 |
Sylvain Wallez wrote:
> Tim Larson wrote:
>
>> On Sun, Nov 07, 2004 at 09:29:56PM +0100, Sylvain Wallez wrote:
>>
>>> Tim Larson wrote:
>>>
>>>> On Fri, Nov 05, 2004 at 09:58:43PM +0100, Sylvain Wallez wrote:
>>>>
<snip />
>>> So, why do we need *both* versions? Isn't it FS? Can you give some
>>> examples that justify this need? Up to now, I personally never had
>>> the need for evaluated conditions. I sometimes would like to use
>>> types other than String, and that can easily be done by associating
>>> adding a convertor that defines how values for the different cases
>>> are to be converted to typed values.
>>>
>>
>>
>> Your converter idea for handling other datatypes sounds good.
>> I personally only need the simple switch version that references
>> a widget (via a path passed to lookupWidget()) for the switch
>> value and selects the case which has the id matching the value.
>> Others requested the expression-on-every-case version, so they
>> would have to supply usecases for that version.
>>
>>
>
> Good. So people, if you need expression on every case, please speak up!
>
I remember suggesting that, but it's not driven from a concrete use
case, just a personal feeling of simplicity (let's not call it FS he :-))
just the general observation that a construct of the kind
switch (variable_to_check)
case value_1:
case value_2:
case value_3:
default:
really is a special (simpler) case of some
if (test1)
else if (test2)
else if (test3)
else
rather then the reverse, but that is a personal feeling: if you really
have more then one variable driving the decissions than the switch
construct forces you to invent a formula producing some 'virtual'
switch-variable, if not then the if-else-if is probably just a bit more
typing
also, observing that deep down the implementation will be mapped on an
if-else-if anyway it might be just doable
so far for the context, I understand you guys push for a decission so:
- I dont' think the usability would be limiting that much either way
(one can always find use cases that will argument in favour of one or
the other, so *shrug*)
- as agreed on the hackathon: going forward towards cforms-stable is far
more important then nth degree of slickness in feature-completeness
> Also, if that need is because the case values are computed and not only
that's indeed the only point
> a single widget's values, that can be modelled by a <fd:output> widget.
so we have a workaround that can be documented
> And the initialize() stuff Tim added will allow to finish the on-create
> event handler I started to implement, thus allowing computed widgets to
> register themselves as change-listeners on widgets their value depends on.
>
cool
>>> Furthermore, what expression language will be used? This leads us
>>> back to the discussion about expressions in JXTG, and the necessary
>>> unification of expression evaluation within Cocoon as a whole. I'm
>>> also not a fan of xReporter's expression language which is really
>>> specific to CForms.
>>>
>>
>>
>> I got stuck on this point also. Perhaps someone with a usecase
>> for the e-o-e-case version could comment?
>>
>>
>
> :-)
>
I agree: the issue of which expression language to take is another
(practical) counter-argument
(although making the observation that something like this starts
limiting our design-freedom really suggests that we should do something
towards hiding all those expr-script-languages floating around in cocoon
behind some generic interface, no?)
>>> Also, there are some points I'd like us to to formalize.
>>>
>>> 1/ The wiki uses "choice" and "case" for the definition and "choose"
>>> and "when" for the template. IMO, this is confusing and we should
>>> have the same wording in the definition and in the template.
>>>
>>
>>
>> I would use the same names in template, model, and binding.
>> "choose/when" seemed to me to be the closest to consensus.
>> Anyone have a different opinion?
>>
>>
>
> "choose" is a verb whereas "widget", "repeater", "field" are nouns.
> Using a noun therefore seems more consistent to me and that would be
> therefore "choice". But I've been also thinking lately about "select" or
> "variant". Naming is a difficult but important subject, as it conveys
> the underlying semantics.
>
I remember also the suggestion to let it be influenced by the
'normal-live-nomenclature' of the typical end-user....
current naming seems to have a C-programming heritage,
'choice' was suggested for it's schema/dtd ring to it
'select' would probably yield the wrong association in HTML infected heads?
'variant' doesn't say a thing to me, but that might be a plus :-)
>>> 1/ Is it a container? Looking at the wiki, the "valued expression
>>> selects case" version has no id. Also, are each fd:case also
>>> containers? My opinion is that fd:when should be a container, but not
>>> fd:case. This is enforced by the reuse of widgets between cases.
>>>
>>
>>
>> Choose and when would both be *implemented* as containers, but
>> they would not affect the paths/namespaces of the widgets they
>> "contain". Think of it as a control structure rather than as
>> a real container "widget". Also the id on the "choose" should
>> be optional.
>>
>>
>
> IMO, the choice widget is "something", i.e. a structural unit like other
> widgets, whereas the various alternatives are more variants of what's in
> that thing. That means that choice would have an id and therefore affect
> the path, but not the cases which define what widgets are children of
> the choice depending on the case value.
>
> Consider the following example (datatypes ommited for brevety) where we
> define the connection setting to a datasource (for example a CVS server):
>
> <fd:choice
> <fd:widgets>
> <fd:field
> <fd:field
> <fd:field
> <fd:field
> <fd:widgets>
> <fd:when
> <fd:widget
> </fd:when>
> <fd:when
> <fd:widget
> <fd:widget
> <fd:widget
> <fd:widget
> </fd:when>
> <fd:choice>
>
> The "datasource" is an entity and threfore should appear in the path,
> whereas "local" and "remote" are just test values. So we have
> "datasource/path" (always) and "datasource/login", "datasource/server"
> etc (when case is "remote").
>
I agree, surely it would feel awkward to have 'remote' (a value!) in the
name-path
>> For example, this would allow the model to choose between two
>> widgets with the same id but with different datatypes without
>> having to modify the corresponding template to recognize that
>> a choice is even being made. In this example there is no need
>> for "choose" to have an id, because the choice does not need
>> to be referenced.
>>
>
> Sorry, but I find having the same id for different widgets depending on
> the choice condition very confusing. IMO, an id should designate one
> thing only, without ambiguity.
>
+1 being infected by java and xml I favour (verbose) explicity over
terseness (and magic)
> The choice condition should define _which_ widgets (and therefore ids)
> are available, not _how_ these widgets are defined. Also, it's very
> likely that a choice in the definition also leads to a choice in the
> view and in the binding.
>
>> For a "choose" that picks between different
>> sets of widgets, or whenever you want the template or binding
>> to be able to react to the selected choice, then the "choose"
>> control structure will need an id.
>>
>>
>
> And if it's got an id, it's no more a control structure, but a widget,
> hence the naming with a noun rather than a verb as outlined above.
>
>>> 2/ Widgets for a case: do we allow inline widgets to be defined in a
>>> fd:case, or only have references with fd:ref? Allowing both may cause
>>> some naming problems (this is also related to the previous question
>>> about containment), as an inline widget's name may conflict with a
>>> widget defined in fd:when. Similarily, if fd:case is not a container,
>>> widgets having the same name in different fd:cases will conflict.
>>>
>>
>>
>> Allow widget definitions in the "choose" for cherry-picking
>> in the "when"'s (refered to as fd:case's above,) and also
>> allow widget definitions in the "when"'s. This allows for
>> the type of example I described above.
>>
>>
>
> ... which I found confusing :-)
>
same here,
feels a bit like we're mixin' in the concern to reduce typing?
IMHO that aspect should be handled by the registry and new/class stuff?
> IMO, inline widget definitions in the "when" can be considered as
> shortcuts for defining a widget in the choice and then referencing it
> when that widget only applies to one particular case, i.e. :
>
> <fd:choice
> <fd:when
> <fd:field
> </fd:when>
> </fd:choice>
>
> Should be strictly equivalent to writing :
>
> <fd:choice
> <fd:widgets>
> <fd:field
> </fd:widgets>
> <fd:when
> <fd:widget
> </fd:when>
> </fd:choice>
>
> That also means that child ids must be unique throughout the various cases.
>
> WDYT?
>
I like it
-marc=
--
Marc Portier
Outerthought - Open Source, Java & XML Competence Support Center
Read my weblog at
mpo@outerthought.org mpo@apache.org | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200411.mbox/%3C41935E51.7060508@outerthought.org%3E | CC-MAIN-2016-30 | refinedweb | 1,481 | 57 |
I was sure the busio module was embedded in the adafruit_blinka library, but installing that didn't provide it. I've tried all the common commands; what can I do? I'm working with a Raspberry Pi 2B+ running Raspbian. Thank you in advance.

This is a top view of the pinouts on the Raspberry Pi Pico. The pin labels are on the bottom of the board. Rather than relying on a fixed default bus, use the busio module to create your bus and then specify the specific pins you want to use. To do so, use the pinout diagram above to find available pins; for example, I2C0_SDA is on GP0 (as well as other locations). You then use the board.GPx pin name when creating the bus object.

On this page we'll assume you've already gotten your Raspberry Pi up and running and can log into the command line. Here's the quick start for people with some experience: download the latest Raspberry Pi OS or Raspberry Pi OS Lite to your computer; burn the OS image to your MicroSD card using your computer; re-plug the SD card into your computer (don't use your Pi yet!) and set up your Wi-Fi.

The Raspberry Pi can do a lot of things that are not possible on the Arduino, but there is one popular feature available on the Arduino that the Raspberry Pi lacks: analog inputs. However, we can add this capability to the Raspberry Pi by interfacing an external analog-to-digital converter (ADC) chip. One caveat: the Pi's SPI clock is derived from the VPU core clock, and if that core frequency scales dynamically it can cause problems with devices that expect a constant clock rate during communication. To solve this issue, the VPU core frequency must be set to a fixed value.
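An external ADC hands back raw integer counts, so a small helper to scale them to volts is handy. The sketch below assumes a 10-bit converter (such as the MCP3008) and a 3.3 V reference; both defaults are assumptions to adjust for your chip and wiring.

```python
def adc_to_volts(raw, vref=3.3, bits=10):
    """Scale a raw ADC count (0 .. 2**bits - 1) to a voltage."""
    full_scale = (1 << bits) - 1
    if not 0 <= raw <= full_scale:
        raise ValueError("raw count %d outside 0..%d" % (raw, full_scale))
    return vref * raw / full_scale

# Example: top of scale and mid-scale on a 10-bit ADC referenced to 3.3 V
print(adc_to_volts(1023))  # 3.3
print(adc_to_volts(512))
```

Note that some drivers (Adafruit's CircuitPython ADC classes, for instance) pre-scale readings to 16 bits regardless of the chip's native resolution, in which case you would pass `bits=16`.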
`busio` - Bus protocol support like I2C and SPI
See `CircuitPython:busio` in CircuitPython for more details.
* Author(s): cefn

try:
    import threading
except ImportError:
    threading = None

import adafruit_platformdetect.constants.boards as ap_board
import adafruit_platformdetect.constants.chips as ap_chip
from adafruit_blinka.

Connect the following pins with jumper wires (RasPi to BME280): 3V to VIN, 3V to CS, GND to GND, SCL to SCK, SDA to SDI. If you're not sure, here is the pinout for most RasPis.

Reading the BME280. Let's verify that we have everything wired and set up correctly by attempting a read. Try out the following code:

# Import necessary modules
import board, busio, adafruit_bme280
i2c = busio.I2C(board.SCL, board.SDA)

As of now, I can't find any other tutorials explaining how to set up multiple I2C busses on the Raspberry Pi.

To attach a barometric pressure sensor to the Raspberry Pi, I chose the BME280 and bought it ready-made as a breakout board. Besides air pressure, the sensor also reports temperature and humidity. I ordered jumper wires separately here, since none were included in the package. Unfortunately you also need to be able to solder, since the mating header ...
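Once temperature and humidity readings are coming back from the BME280, derived quantities can be computed in plain Python. This sketch uses the Magnus approximation with the common b = 17.62, c = 243.12 °C parameterization (valid roughly from -45 °C to 60 °C); treat it as an illustration, not a calibrated instrument.

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Approximate dew point (deg C) from temperature (deg C) and RH (0-100 %)."""
    if not 0.0 < rel_humidity <= 100.0:
        raise ValueError("relative humidity must be in (0, 100]")
    b, c = 17.62, 243.12  # Magnus coefficients, one common parameterization
    gamma = math.log(rel_humidity / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

print(dew_point_c(25.0, 50.0))  # roughly 13.9 deg C
```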
While the Mu editor website also has a download for the Raspberry Pi, there is a much simpler installation method for users of Raspberry Pi OS. On the Raspberry Pi desktop, click the Raspberry in the top left corner and choose Preferences. From the sub-menu that appears, choose Recommended Software.

Raspberry Pi Pico with I2C OLED display and CircuitPython: this is my first experience using this little board from the Raspberry Pi Foundation. I preferred to install CircuitPython on the board, but I came across a lack of usage examples (because the board had just been released, obviously). Many of the examples ...
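For the Pico-plus-OLED setup just described, here is a sketch of a "hello world". The hardware section (board/busio/adafruit_ssd1306, a 128x32 panel at address 0x3C, SCL on GP1 and SDA on GP0) reflects assumed wiring, so it is guarded with a try/except and the pure text-centering helper runs anywhere.

```python
def center_x(text, screen_width=128, char_width=6):
    """X offset that horizontally centres `text` for a fixed-width font.

    CircuitPython's built-in terminal font is 5 px wide plus 1 px spacing,
    hence the default char_width of 6 (an assumption; adjust per font).
    """
    pixels = len(text) * char_width
    return max(0, (screen_width - pixels) // 2)

print(center_x("Hello"))  # 49 on a 128 px wide panel

try:  # hardware-only section: runs on a Pi/Pico with Blinka or CircuitPython
    import board, busio
    import adafruit_ssd1306
    i2c = busio.I2C(board.GP1, board.GP0)  # Pico: SCL=GP1, SDA=GP0 (assumed wiring)
    oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c, addr=0x3C)
    oled.fill(0)
    oled.text("Hello", center_x("Hello"), 12, 1)
    oled.show()
except (ImportError, NotImplementedError):
    pass  # not running on the Pi/Pico, so skip the display demo
```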
To set up an I2C bus, you specify the SCL and SDA pins being used. You can look for SCL and SDA in the pin names in the pinout diagram above. So, I use the code:

import board
import busio
i2c = busio.I2C(scl=board.GP5, sda=board.GP4)  # the Raspberry Pi Pico way to create an I2C bus

Using these references, a Raspberry Pi, a QT Py RP2040, and a Raspberry Pi Pico all use the default I2C pins:

i2c = busio.I2C(board.SCL, board.SDA)
mpr121 = adafruit_mpr121.MPR121(i2c)
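The MPR121 reports its twelve electrodes as a 12-bit touch-status mask, with bit i set when pad i is touched (this matches how Adafruit's driver exposes the status register). Decoding that mask is pure bit-twiddling and can be tested without the sensor:

```python
def touched_pads(mask, num_pads=12):
    """Return the indices of the pads whose bits are set in `mask`."""
    return [i for i in range(num_pads) if mask & (1 << i)]

# Example: pads 0 and 2 held down
print(touched_pads(0b000000000101))  # [0, 2]
```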
This is the official documentation for the Raspberry Pi, written by the Raspberry Pi Foundation with community contributions. Setup / Quickstart: getting started with your Raspberry Pi, including what you need and how to get it booted.

Most microcontrollers like the Arduino, and small board computers like the Raspberry Pi, sense information from their surroundings through analog or digital input on their general-purpose input and output (GPIO) pins. For example, on a microcontroller that has analog input, an analog input pin can detect voltages from 0-5 V, and this voltage will be mapped to a number ranging from 0-1023 depending on the voltage entering the pin; this functionality can help ...

Run the raspi-config tool by typing the following at the terminal prompt, then press return: sudo raspi-config. Press the down arrow on the keyboard until Interfacing Options is highlighted, then press return. Press the down arrow until I2C is highlighted, then press return.

At the time of writing, the MLX90640 IR camera is extremely over-priced due to high demand (surely due to the COVID-19 crisis); however, I had previously purchased one for about $70 on Amazon. I imagine that after some time the prices will return to normal. $70 is fairly reasonable for the high resolution of this type of low-cost sensor. For comparison, the popular AMG8833 (an 8x8-pixel IR camera) is roughly $40-$50.
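Under the hood, enabling I2C with raspi-config amounts to setting `dtparam=i2c_arm=on` in the boot configuration (on a real Pi the file is `/boot/config.txt` and editing it needs root). Here is a sketch that makes the edit idempotently; the path is an argument so the logic can be exercised on a scratch file on any machine.

```python
def enable_i2c(config_path, line="dtparam=i2c_arm=on"):
    """Append `line` to the config file unless it is already present.

    Returns True if the file was modified. On a real Pi you would pass
    "/boot/config.txt"; here the path is caller-supplied for testing.
    """
    try:
        with open(config_path, "r", encoding="utf-8") as f:
            present = any(l.strip() == line for l in f)
    except FileNotFoundError:
        present = False
    if present:
        return False
    with open(config_path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return True
```

A reboot is still needed before the kernel exposes the bus, just as after running raspi-config interactively.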
On supported GNU/Linux systems like the Raspberry Pi, you can install the driver locally from PyPI. To install for the current user:

pip3 install adafruit-circuitpython-max7219

To install system-wide (this may be required in some cases):

sudo pip3 install adafruit-circuitpython-max7219

To install in a virtual environment in your current project ...

The 8 pins to the right are what we will connect to our Raspberry Pi (ADC to RasPi): VDD (pin 16) to the 3.3V pin; VREF (pin 15) to the 3.3V pin; AGND (pin 14) to a GND pin; CLK (pin 13) to pin 23/SCLK; DOUT (pin 12) to pin 21/MISO; DIN (pin 11) to pin 19/MOSI; CS (pin 10) to pin 24/CE0.

Raspberry Pi setup: Adafruit has an excellent tutorial on installing CircuitPython libraries on the Raspberry Pi. Quick-start summary: beforehand you must enable the I2C and SPI interfaces, which is easiest to do with raspi-config (sudo raspi-config). To run the self-test or other scripts, ...

The Raspberry Pi Pico has already been covered in detail here on the blog. Costs: 14 EUR for the sensor, 4 EUR for the Raspberry Pi Pico, and 1.20 EUR for the LED traffic-light module, plus a few cents for wiring, so the whole little project comes in at under 20 euros. Wiring: the sensor's VCC pin is connected to the Pico's 3V3(OUT) (physical pin 36). The sensor's I2C pins, SCL and SDA, are connected to GP0 (SDA) and GP1 (SCL) on the Pico. GND and(!) WAKE of the sensor ...
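With the converter wired as listed, a read of channel 0 looks roughly like this. The pinout above matches the MCP3008; treat that chip identification, the `adafruit_mcp3xxx` package, and the CE0 chip-select choice as assumptions to check against your own setup. Adafruit's `AnalogIn` scales readings to 16 bits regardless of the chip's native 10 bits, so a small percentage helper is included; it is the only part exercised off-board because the hardware section is guarded.

```python
def percent_of_full_scale(value, bits=16):
    """Express an ADC reading as a percentage of full scale.

    Adafruit's AnalogIn reports a 16-bit-scaled value regardless of the
    converter's native resolution, hence the default of 16 bits.
    """
    return 100.0 * value / ((1 << bits) - 1)

print(percent_of_full_scale(32767))  # ~50 %

try:  # hardware-only: needs Blinka on a Pi with SPI enabled
    import board, busio, digitalio
    import adafruit_mcp3xxx.mcp3008 as MCP
    from adafruit_mcp3xxx.analog_in import AnalogIn

    spi = busio.SPI(clock=board.SCK, MISO=board.MISO, MOSI=board.MOSI)
    cs = digitalio.DigitalInOut(board.CE0)  # matches CS -> pin 24/CE0 above
    mcp = MCP.MCP3008(spi, cs)
    chan0 = AnalogIn(mcp, MCP.P0)
    print(chan0.value, chan0.voltage, percent_of_full_scale(chan0.value))
except (ImportError, NotImplementedError):
    pass  # not on a Pi, so skip the live read
```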
Live stream showing how to use the PCA9685 PWM & servo driver with MicroPython to dim LEDs and control servos. Companion to the ...

Raspberry Pis started out as a learning tool and have evolved into an industrial workplace tool. The ease of use, and the ability to code with Python, the fastest-growing programming language, have made them a go-to solution. You'll want a Raspberry Pi that has Wi-Fi built in, which means any Model 3, 4, or Zero W/WH. Among those you can choose based on pricing and features. The Zero W/WH is the ...

You'll also want to pick up the following parts from the Adafruit shop if you do not have them already: 1 x Pi T-Cobbler Plus GPIO breakout.

For future reference (in case someone else stumbles on this thread): it seems Adafruit made a design mistake by always pulling reset high. Also, their datasheet states "RST - this is the Reset pin for the radio".

... with code examples for both the Raspberry Pi and the Arduino. This gives you the perfect introduction to the world of sensors, and thus also to the world of programming. The set is delivered in a plastic multi-purpose case and contains an acrylic mounting board, a breadboard, a jumper-wire set, the X40 sensor kit, an optical dust sensor, and an air-quality sensor.

On Twitter, user @Gastonwnc has written a CircuitPython driver for connecting a Raspberry Pi to the SparkFun Qwiic Joystick. The code is very well written to be compatible with the SparkFun hardware and CircuitPython software. Five examples are even provided. Code example:

# import the CircuitPython board and busio libraries
import board
import busio
# Create bus object using the board's I2C ...
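The SparkFun Qwiic Joystick returns each 10-bit axis split across two registers: the MSB register holds the top eight bits and the LSB register holds the bottom two bits in its upper positions. That layout is taken from SparkFun's documentation, so verify it against your firmware version; the reassembly itself is a pure function.

```python
def joystick_axis(msb, lsb):
    """Combine the MSB/LSB register pair into a 0..1023 axis reading."""
    return ((msb & 0xFF) << 2) | ((lsb >> 6) & 0x03)

print(joystick_axis(0x80, 0x00))  # 512: stick roughly centred
```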
Raspberry Pi Blog, 17 April 2021: Program the RPi Pico using Arduino-Pico. Unfortunately, I still had the compile issue. Then I realized that I also needed to update the Adafruit_BusIO library, and that fixed it. So here is the working GFX demo. One caveat: my display is at address 0x3C rather than the standard 0x3D. (RPi_Pico_ssd1306)

import busio
import usb_midi
import adafruit_midi
from adafruit_midi.note_off import NoteOff
from adafruit_midi.note_on import NoteOn
from adafruit_bus_device.i2c_device import I2CDevice
import adafruit_dotstar
from digitalio import DigitalInOut, Direction, Pull
# RGB MIDI controller example for Pimoroni RGB Keypad for Raspberry Pi

A detailed tutorial on using the Raspberry Pi GPIO pins. This article uses the built-in RPi.GPIO Python library to create scripts for blinking an LED and using a button as input. This is the first part of a series of articles on Raspberry Pi GPIO pin usage.

A minimal Raspberry Pi Python example:

import time
import board
import busio
from anyleaf import PhSensor, CalPt, OnBoard

def main():
    i2c = busio.I2C(board.SCL, board.SDA)
    delay = 1  # Time between measurements, in seconds
    phSensor = PhSensor(i2c, delay)
    phSensor.calibrate_all(CalPt(0., 7., 25. ...

HELP! Adafruit 16x2 Character LCD + Keypad for Raspberry Pi, by jrprinter on Fri Jan 18, 2019 8:27 am: I was given this kit as a gift and have spent the past four hours putting it together.
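anyleaf's `CalPt(voltage, pH, temperature)` points pin down the probe's linear response; the underlying idea is two-point linear interpolation from measured voltage to pH. Here is a standalone sketch of that math. The calibration numbers below are made up for illustration; real values come from buffer solutions.

```python
def ph_from_voltage(v, cal_a=(0.0, 7.0), cal_b=(0.17, 4.0)):
    """Linearly map a probe voltage to pH using two (voltage, pH) points."""
    (v1, ph1), (v2, ph2) = cal_a, cal_b
    if v1 == v2:
        raise ValueError("calibration points must differ in voltage")
    slope = (ph2 - ph1) / (v2 - v1)
    return ph1 + slope * (v - v1)

print(ph_from_voltage(0.085))  # halfway between the two points -> pH 5.5
```

A third calibration point, as `calibrate_all` accepts, lets a library correct for nonlinearity and temperature; the two-point version above is the minimal case.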
import time
from w1thermsensor import W1ThermSensor
import board
import busio
i2c = busio.I2C(board.SCL, board.SDA)
import adafruit_ads1x15.ads1015 as ADS
from adafruit_ads1x15.analog_in import AnalogIn
import RPi.GPIO as GPIO

ds18b20 = W1ThermSensor()
ads = ADS.ADS1015(i2c)
ads.gain = 1
waterTick = 0  # Used to count the number of times the flow input is triggered
zone1 = 6  # Zone 1 run duration

Here are links to two ways of setting up a Raspberry Pi (select Raspbian for the OS): setting up your Raspberry Pi with a USB keyboard, mouse, and HDMI-capable monitor or TV. Don't have a keyboard, mouse, or monitor? No problem. You can also set up your Raspberry Pi with your laptop, a micro SD card, and the Raspberry Pi Imager. If you are new to Raspberry Pi, check out this tutorial ...
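In the irrigation code above, `waterTick` counts pulses from a hall-effect flow sensor. Turning ticks into volume only needs the sensor's pulses-per-litre figure; 450 is a value commonly quoted for YF-S201-style sensors, but it is an assumption here, so calibrate against a measured volume.

```python
def litres_from_ticks(ticks, pulses_per_litre=450.0):
    """Convert flow-sensor pulse counts to litres."""
    if ticks < 0:
        raise ValueError("tick count cannot be negative")
    return ticks / pulses_per_litre

def flow_lpm(ticks, seconds, pulses_per_litre=450.0):
    """Average flow rate in litres/minute over a sampling window."""
    return litres_from_ticks(ticks, pulses_per_litre) * 60.0 / seconds

print(litres_from_ticks(450))  # 1.0 L
print(flow_lpm(225, 30.0))     # 1.0 L/min
```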
Green: I2C SCL. White: I2C SDA. Red: power (3.5 V). Black: ground.

On "Welcome to Raspberry Pi", choose Next. Choose your country, language, timezone, and keyboard layout. Choose Next. Enter a password for your Raspberry Pi, and then choose ...

CircuitPython 6.2.0 Beta 1 released, with support for the Raspberry Pi RP2040! This is the second beta release of CircuitPython 6.2.0. This release, 6.2.0-beta.1, contains fixes and improvements, most notably for the RP2040, ESP32-S2, and Spresense. See "Port status" below for details on port stability for each port, and "Known issues" for known problems.

In this short tutorial we will see how to connect a 16x02 LCD display with an I2C controller to a Raspberry Pi board; you will also find example code for your first tests. LCD screens are digital output devices: they work by receiving instructions corresponding to the text you want to show, and displaying it.
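For the 16x02 I2C LCD just mentioned: positioning the cursor on an HD44780-class controller means sending a Set-DDRAM-Address command, which is 0x80 OR'd with the cell address, where row 0 starts at address 0x00 and row 1 at 0x40. Computing that command byte is independent of any particular backpack or library:

```python
ROW_OFFSETS = (0x00, 0x40)  # HD44780 DDRAM start of rows 0 and 1 on a 16x2

def cursor_cmd(row, col, cols=16):
    """Command byte that moves the cursor to (row, col) on a 16x2 LCD."""
    if row not in (0, 1) or not 0 <= col < cols:
        raise ValueError("position off-screen")
    return 0x80 | (ROW_OFFSETS[row] + col)

print(hex(cursor_cmd(1, 0)))  # 0xc0: start of the second line
```

Sending the byte over the I2C backpack (typically a PCF8574 expander in 4-bit mode) is the driver's job; the address math above is what every driver shares.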
That's how CircuitPython is structured as well: busio covers the SPI transmit/receive part, and busdevice handles chip select. Linux, on the other hand, does not allow you to send data over SPI without a CS line, and the CS lines are fixed in hardware. For example, on the Raspberry Pi only two hardware SPI chip-select pins are available - CE0 and CE1 - and you have to use ...

How to solve "ModuleNotFoundError: No module named ..." in Python: either the name of the module is incorrect, or the library module is not installed.

Raspberry Pi setup. OK, now that you have all your parts in order, it's time to get your Raspberry Pi computer set up with the HAT or Bonnet. Step 1 - burn the SD card: use Etcher or the Raspberry Pi Imager to burn the latest Raspbian Lite to an SD card (you can use the full image, but we won't be using the desktop software and it takes up a lot of room).
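A defensive import pattern covers both failure modes called out above (wrong module name, library not installed): attempt the import and report a pip hint instead of crashing. The pip package name is passed separately because it often differs from the import name (e.g. the `adafruit-blinka` package provides the `busio` module).

```python
import importlib

def try_import(module_name, pip_name=None):
    """Import `module_name`, or return None after printing an install hint."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        hint = pip_name or module_name
        print("No module named %r; try: pip3 install %s" % (module_name, hint))
        return None

print(try_import("math") is not None)        # True: stdlib is always present
maybe_busio = try_import("busio", "adafruit-blinka")  # None off-board
```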
With a 5 V supply: when it receives the signal to sound, it beeps fine. But when it receives the signal to stay silent, you hear a permanent hum. In fact, just touching the signal wire with a finger is enough to produce the hum. In practice, unusable. ...

I have a number of microcontrollers now that use 3.3 V logic: Raspberry Pi Pico, Adafruit Feather, Adafruit Circuit Playground Express, BBC micro:bit. So far, I've looked at MIDI OUT functionality, so now it's time to look at MIDI IN, based on the circuit from my Simple MIDI Monitor. This project shows how to build your ...
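For the MIDI IN side, here is a parser for the three-byte channel-voice messages a MIDI monitor sees most. Status byte 0x9n is Note On for channel n and 0x8n is Note Off; a Note On with velocity 0 is conventionally treated as Note Off, a convention that is part of the MIDI 1.0 specification.

```python
def parse_note_message(status, data1, data2):
    """Decode a 3-byte MIDI message into (event, channel, note, velocity).

    Returns None for anything other than Note On/Off.
    """
    kind, channel = status & 0xF0, status & 0x0F
    if kind == 0x90 and data2 > 0:
        return ("note_on", channel, data1, data2)
    if kind == 0x80 or (kind == 0x90 and data2 == 0):
        return ("note_off", channel, data1, data2)
    return None

def note_to_freq(note):
    """Equal-temperament frequency in Hz (A4 = MIDI note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

print(parse_note_message(0x90, 60, 100))  # ('note_on', 0, 60, 100)
print(round(note_to_freq(69), 1))         # 440.0
```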
raspi-config Tool via Terminal. Like the SPI peripheral, I2C is not turned on by default. Again, we can use raspi-config to enable it: run sudo raspi-config, use the down arrow to select 5 Interfacing Options, then arrow down to P5 I2C. Select yes when it asks you to enable I2C, and also select yes if it asks about automatically loading the kernel module.

Our AZDelivery GY-21 is a reliable and durable module for measuring temperature and humidity values. Thanks to its I2C interface, the sensor is compatible with most 3.3 V and 5 V microcontrollers such as Arduino, ESP, and Raspberry Pi boards, and programming it is comfortable. Free libraries are also available for easy use with Arduinos and ...

Raspberry Pi OS
• In order to get your Raspberry Pi up and running you need to install an operating system (OS)
• The OS for the Raspberry Pi is called Raspberry Pi OS (previously known as Raspbian)
• The Raspberry Pi runs a version of an operating system called Linux (Windows and macOS are other operating systems)
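After enabling the bus, the usual sanity check is scanning for device addresses: `i2cdetect -y 1` from the shell, or `i2c.scan()` from Blinka's busio. The grid formatting is pure Python and runs anywhere; the scan itself is guarded so it only runs on a Pi.

```python
def i2cdetect_grid(found):
    """Render a set of I2C addresses in the style of `i2cdetect`."""
    rows = []
    for base in range(0x00, 0x80, 0x10):
        cells = ["%02x" % a if a in found else "--" for a in range(base, base + 0x10)]
        rows.append("%02x: " % base + " ".join(cells))
    return rows

for line in i2cdetect_grid({0x3C, 0x48}):  # e.g. an OLED and an ADS1x15
    print(line)

try:  # hardware-only: list what is really on the bus
    import board, busio
    i2c = busio.I2C(board.SCL, board.SDA)
    while not i2c.try_lock():
        pass
    print(*i2cdetect_grid(set(i2c.scan())), sep="\n")
    i2c.unlock()
except (ImportError, NotImplementedError):
    pass  # not on a Pi, so show only the example grid
```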
This project was created on 02/24/2021.

CircuitPython snakes its way to the new Raspberry Pi Pico board and RP2040 chip. On January 21st, the Raspberry Pi Foundation launched their first microcontroller-class product: the Raspberry Pi Pico. Priced at just $4, it is built using the RP2040, a brand-new microcontroller chip developed by Raspberry Pi. That's not a typo: all Raspberry Pi boards to date have been single-board computers.

Raspberry Pi CM4 (Lite, Wireless, 1 GB RAM). From the Crowd Supply Basics project: something for your Piunora to carry. This CM4101000 variant of the Raspberry Pi Compute Module 4 is a System on Module (SoM) containing an ARM quad-core Cortex-A72 processor, 1 GB RAM, 2.4 and 5 GHz 802.11b/g/n/ac Wi-Fi and Bluetooth 5.0, and supporting power ...

(If pip is not installed, install it and rerun the command:) sudo apt-get install python3-pip. Then install the CircuitPython libraries with pip3 ...

Flexible Adafruit DotStar Matrix 16x16 - 256 RGB LED pixels. For advanced DotStar LED fans, we now have a bendable, flexible 16x16 DotStar LED matrix! Control all 256 ultra-bright LEDs using only two microcontroller pins; set each LED as you wish to scroll messages or draw little images.
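Flexible LED matrices like the 16x16 DotStar panel above are usually wired as one long strip snaking back and forth, so drawing (x, y) pixels needs an index mapping. Whether rows zigzag, and which corner holds pixel 0, depends on the specific panel; treat the serpentine layout below as an assumption to verify against yours.

```python
WIDTH = HEIGHT = 16  # 16x16 = 256 pixels

def xy_to_index(x, y, width=WIDTH, serpentine=True):
    """Map matrix coordinates to a strip index, origin at top-left."""
    if serpentine and y % 2 == 1:
        return y * width + (width - 1 - x)  # odd rows run right-to-left
    return y * width + x

print(xy_to_index(0, 0))   # 0
print(xy_to_index(15, 1))  # 16: second row starts at its right edge
```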
This ESP32-CAM project covers how to use the ESP32-CAM with a TFT display to show the picture captured by the camera. We have covered several times how to use the ESP32-CAM in different projects, and we have described how to use the ESP32-CAM in machine-learning projects. While we can use the ESP32-CAM with a web server to show pictures, in this post we want to cover how to show a picture on a TFT screen (ST7735).

This article explains how quickly you can learn to install, remove, update, and search software packages using the apt-get and apt-cache commands from the command line. It provides some useful commands that will help you to handle package management in Debian/Ubuntu-based systems.
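Pushing a camera frame to an ST7735 means converting pixels to the panel's native RGB565 format: 5 bits of red, 6 of green, 5 of blue, usually sent as two big-endian bytes. The packing is a couple of shifts:

```python
def rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit RGB565 value."""
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)

def rgb565_bytes(r, g, b):
    """RGB565 as two big-endian bytes, the order ST77xx panels expect."""
    v = rgb565(r, g, b)
    return bytes((v >> 8, v & 0xFF))

print(hex(rgb565(255, 0, 0)))       # 0xf800
print(rgb565_bytes(255, 255, 255))  # b'\xff\xff'
```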
In the last part of this tutorial we added our components to the weather board and connected everything up. Now that we have everything ready for power, we can work on the software/coding side of this project. In part two of this tutorial we will boot up the Raspberry Pi, do some initial configuration of the Raspbian operating system, and install Python libraries for a few of the sensors ...
WebSphere Security on the Distributed Platforms

You can also add a console group to give specific administrative role(s) to a group of users. To do this from the Administrative Console:

1. Log into the Administrative Console running on the Deployment Manager.
2. Expand the System Administration in the left-hand navigation tree and click on Console Groups.
3. The right-hand pane will show all the defined console groups. In the right-hand pane, click on the button Add.
4. Type in the name of the group or select a special subject and the roles of the group. The two special subjects (groups) are EVERYONE and ALL-AUTHENTICATED.
5. Click on OK button and save the change.

Tip: For WebSphere on z/OS, the console user can be controlled either by WebSphere or by the z/OS security product. See chapter 14, "Security on WebSphere on z/OS," for details.

Naming and Security

When WebSphere global security is enabled, access to the WebSphere name space is protected. WebSphere naming security defines four roles for accessing the WebSphere namespace:

1. Cos Naming Read. This role allows a user to read the namespace.
Hi Adel, I've got your Solaris fixes and my (minor) changes up on
I'd like Andrew to take a look (he's back tomorrow); unless he has anything to add I'll commit them. Cheers, Alan.

On Mon, 2016-10-17 at 20:03 +0000, Adel Boutros wrote:
> Hello Alan,
>
> No, it isn't a necessary fix (change from T*).
>
> However, we compiled this code on linux using GCC 4.9.1 and we didn't
> get any error on the master. I wonder which version of gcc you are
> using which revealed this error?
>
> Regards,
> Adel
>
> Ps: The original issue here is that CC doesn't handle constructors
> with "..." correctly, which is here the case of sfinae::wildcard(...)
>
> Get Outlook for Android
>
> On Mon, Oct 17, 2016 at 9:48 PM +0200, "Alan Conway" <aconway@redhat.com> wrote:
>
> Hi Adel,
>
> I've put a workaround for one GCC problem on
> way/qpid-proton/tree/absolaris; otherwise I'm happy with this. I'll
> wait for Andrew's OK and then commit if he's happy.
>
> One question: this makes GCC choke:
>
> index b1aff89..5285e4b 100644
> --- a/proton-c/bindings/cpp/include/proton/codec/encoder.hpp
> +++ b/proton-c/bindings/cpp/include/proton/codec/encoder.hpp
> @@ -175,14 +175,14 @@ namespace is_encodable_impl { // Protected the world from wildcard operator<<
>
> using namespace internal;
>
> -sfinae::no operator<<(sfinae::wildcard, sfinae::wildcard); // Fallback
> +sfinae::no operator<<(encoder const&, const sfinae::any_t &); // Fallback
>
> template<typename T> struct is_encodable : public sfinae {
> -    static yes test(encoder);
> +    static yes test(encoder&);
>      static no test(...); // Failed test, no match.
> -    static encoder* e;
> -    static const T* t;
> -    static bool const value = sizeof(test(*e << *t)) == sizeof(yes);
> +    static encoder& e;
> +    static const T& t;
>      ^^^^^^^THIS LINE^^^^^^^
> +    static bool const value = sizeof(test(e << t)) == sizeof(yes);
> };
>
> Is the change from T* to T& required to compile on Solaris? If not I
> would probably revert it, but if it is needed then the workaround seems
> to be OK. Windows & clang compilers don't have any problem so I suspect
> this is GCC's fault.
>
> Cheers,
> Alan.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org
I took a look at the laser. It is probably an LD TEC (DTEC) failure.
As the temperature of the LD (DTMP) gradually deviated from ~25 degC,
the DTEC voltage also went up from ~2 V to 2.1, 2.2...
When DTEC reached 3 V, the laser stopped lasing. This cooled the diode a bit, so it
started lasing again, and the cycle repeated.
I am not sure whether the head or the controller has the issue.
The situation did not improve much by reducing the pumping current (ADJ: -15).
BTW, turning the noise eater on/off did not change the situation.
I think the head/controller set should be sent out to JDSU to see what they say.
I turned the laser back on around 1am. This is still happening, although right now it is turning off more often than before, maybe every 15 seconds or so. I am going to turn off the laser for the night.
The measured laser temperature is about 45C (I have a 25,000 count offset in the Y ALS Slow control right now; higher offset, lower temp), although the measured laser temp drops to ~43.5C when the power goes down.
Jamie and I discovered a problem with Matlab/Simulink earlier today.
In the end suspension models, there is a subblock (with top_names) for ALS stuff. Inside there, we use a library part called "ALS_END". When the model was created, it included the part ...../userapps/release/isc/c1/models/ALS_END.mdl . However, if you open up the c1scy diagram and look in the ALS block for this part, you see the part that is in ..../userapps/release/isc/common/models/ALS_END.mdl . Note the difference - the one we want is in the c1 directory, while the one that was created (by Jamie) for the LHO One Arm Test is in the common directory.
If you compile the c1scy model, the RCG is using the correct library part, so the information regarding which part we want is still in there.
However, if you delete the ALS_END part from the model, put the correct one in, save, close, then reopen the model, it once again displays the wrong model. The right click "go to library part" option brings you to the library part that is displayed, which is currently the wrong one. THIS IS BAD, since we could start modifying the wrong things. You do get a warning by Matlab about the file being "shadowed", so we should take heed when we see that warning, and make sure we are getting the file we want.
We are currently running Matlab version 7.11.0.584, which is r2010b. Step 1 will be to update Matlab to the latest version, in hopes that this fixes things. We also should change the name of our c1 part, so that it does not have the same name as the one for the sites. This is not a great solution since we can't guarantee that we will never choose the same names as other sites, but it will at least fix this one case. Again, if you see the warning about "shadowed" filenames, pay attention.
This work earlier today had required moving the harmonic separator back closer to its original position, so that the green could get through without clipping. I locked the Xarm (overriding the trigger) and realigned TRX to the PD and camera.
Manasa has done some work to get the Xgreen aligned, so I'll switch to trying to find that beatnote for now.
[Jenne, Manasa].
Jamie and I were doing some locking, and we found that the Yarm green wasn't locking. It would flash, but not really stay locked for more than a few seconds, and sometimes the green light would totally disappear. If the end shutter is open, you can always see some green light on the arm transmission cameras. So if the shutter is open but there is nothing on the camera, that means something is wrong.
I went down to the end, and indeed, sometimes the green light completely disappears from the end table. At those times, the LED on the front of the laser goes off, then it comes back on, and the green light is back. This also corresponds to the POWER display on the lcd on the laser driver going to ~0 (usually it reads ~680mW, but then it goes to ~40mW). The laser stays off for 1-2 seconds, then comes back and stays on for 1-2 minutes, before turning off for a few seconds again.
Koji suggested turning the laser off for an hour or so to see if letting it cool down helps (I just turned it off ~10min ago), otherwise we may have to ship it somewhere for repairs :(
This is happening again to the Yend laser. It's been fine for the afternoon, and I've been playing with the temperature. First I have been making big sweeps, to figure out what offset values do to the actual temperature, and more recently was starting to do a finer sweep. Using the 'max hold' function on the 8591, I have seen the beat appear during my big sweeps. Currently, the laser temperature measurement is at the Yend, and the RF analyzer is here in the control room, so I don't know what temp it was at when the peaks appeared.
Anyhow, while trying to reaquire lock of the TEM00 mode after changing the temperature, I find that it is very difficult (the green seems misaligned in pitch), and every minute or so the light disappears, and I can no longer see the straight-through beam on the camera. I went down to the end, and the same symptoms of LED on the laser head turning off, power out display goes to ~40mW, are happening. I have turned off the laser as was the solution last time, in hopes that that will fix things.
I ran a cable to the GTRX camera. It is now input #2. The videoswitch script input naming is modified to match this: Input 2 used to be "IFOPO", and is now "GTRX". Input 28 used to be "GRNT", and is now "GTRY". Both green trans cameras are available from the video screen.
I pulled the beatbox from the 1X2 rack so that I could try to hack in some output whitening filters. These are shamefully absent because of my mis-manufacturing of the power on the board.
Right now we're just using the MON output. The MON output buffer (U10) is the only chip in the output section that's stuffed:
The power problem is that all the AD829s were drawn with their power lines reversed. We fixed this by flipping the +15 and -15 power planes and not stuffing the differential output drivers (AD8672).
It's possible to hack in some resistors/capacitors around U10 to get us some filtering there. It's also possible to just stuff U9, which is where the whitening is supposed to be, then just jump it's output over to the MON output jack. That might be the cleanest solution, with the least amount of hacking on the board.
I modified the beatbox according to this plan. I stuffed the whitening filter stage (U9) as indicated in the schematic (I left out the C26 compensation cap which, according to the AD829 datasheet, is not actually needed for our application). I also didn't have any 301 ohm resistors so I stuffed R18 with 332 ohm, which I think should be fine.
Instead of messing with the working monitor output that we have in place, I stuffed the J5 SMA connector and wired U9 output to it in a single-ended fashion (ie. I grounded the shield pins of J5 to the board since we're not driving it differentially). I then connected J5 to the I/Q MON outputs on the front panel. If there's a problem we can just rewire those back to the J4 MON outputs and recover exactly where we were last week.
It all checks out: 0 dB of gain at DC, 1 Hz zero, 10 Hz pole, with 20 dB of gain at high frequencies.
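As a quick cross-check of those numbers, the expected response of a 1 Hz zero / 10 Hz pole unity-DC-gain whitening stage can be evaluated directly. This snippet is my own sketch, not part of the measurement, and `whiten` is a made-up helper name:

```python
import math

def whiten(f, f_zero=1.0, f_pole=10.0):
    """Magnitude (dB) of a unity-DC-gain whitening stage:
    H(s) = (1 + s/(2*pi*f_zero)) / (1 + s/(2*pi*f_pole))."""
    s = 2j * math.pi * f
    h = (1 + s / (2 * math.pi * f_zero)) / (1 + s / (2 * math.pi * f_pole))
    return 20 * math.log10(abs(h))

for f in (1e-3, 1.0, 10.0, 1e3):
    print("%8.3f Hz : %5.1f dB" % (f, whiten(f)))
# -> 0.0 dB at DC, ~3.0 dB at the 1 Hz zero, ~17.0 dB at the 10 Hz pole,
#    and 20.0 dB (a factor of f_pole/f_zero = 10) at high frequency
```

The high-frequency gain is just the pole/zero frequency ratio, which matches the 20 dB quoted above.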
I installed it back in the rack, and reconnected X/Y ARM ALS beatnote inputs and the delay lines. The I/Q outputs are now connected directly to the DAQ without going through any SR560s (so we recover four SR560s).
I dedicated my evening to trying to get the Ygreen beatnote (the idea being to then get the Xgreen beatnote).
First up was tweaking up the green alignment. Per Yuta's suggestion, elog 8283, I increased the refl PD gain by 2 clicks (20dB) to keep the lock super stable while improving the alignment. After I finished, I turned it back to its nominal value. I discovered that I need lenses in front of the DC PD (for Ygreen, and I'm sure Xgreen will be the same). The beam is just barely taking up the whole 2mm diode, so beam jitter translates directly to DC power change measured by the diode. I ended up going just by the green transmission camera for the night, and achieved 225uW of Ygreen on the PSL table. This was ~2,000 counts, but some of the beam is always falling off the diode, so my actual counts value should be higher after installing a lens.
I then opened up the PSL green shutter, which is controlled by the button labeled "SPS" on the shutter screen - I will fix the label during some coffee break tomorrow. Using my convenient new PSL green setup, removing the DC PD allows the beam to reflect all the way to the fuse box on the wall, so you can check beam overlap between the PSL green and the arm green at a range of distances. I did this for Ygreen, and overlapped the Ygreen and PSL green.
I checked the situation of the beat cabling, since Jamie has the beatbox out for whitening filter modifications tonight. In order to get some signal into the control room, I connected the output of the BBPD amplifier (mounted on the front of the 1X2 rack) directly to the cable that goes to the control room. (As part of my cleanup, I put all the cables back the way I found them, so that Jamie can hook everything back up like normal when he finishes the beatbox.)
I then started watching the signal on the 8591E analyzer, but didn't magically see a peak (one can always hope....).
I decided that I should put the offset in the Y AUX laser slow servo back to the value that we had been using for a long time, ~29,000 counts. This is where things started going south. After letting that go for a minute or two, I thought to go check the actual temperature of the laser head. The "T+" temperature on the controller read something like 42C, but the voltmeter which reads a voltage proportional to the temp (10C/V) was reading 5.6V. I immediately turned off the offset, but it's going to take a while for it to cool down, so I'll come back in the morning. I want the AUX laser to be something like 34C, so I just have to wait. Ooops.
Still to do (for the short-term FPMI):
* Find Y beatnote.
* Align Xgreen to the arm - it's still off in pitch.
* Align Xgreen and PSL green to be overlapped, hitting the BBPD.
* Find the X beatnote.
* Reinstall the beatbox.
* Use ALS to stabilize both arms' lengths.
* Lock MICH with AS.
* Look at the noise spectrum of AS - is there more noise than we expect (Yuta and Koji saw extra noise last summer), and if so, where does it come from? Yuta calculated (elog 6931) that the noise is much more than expected from just residual arm motion.
* Write a talk.
Both X and Y green are aligned such that the arm beams hit the broadband PD. Also, the 4th port of the combining BS for each arm was used to put a camera and DC PD for each arm. So, ALS-TRX and ALS-TRY are both active right now. The camera currently labeled "GRNT" is the Ygreen transmission. I have a camera installed for Xgreen transmission, but I have not run a cable to the video matrix. For now, to speed things up, I'll just use the GRNT cable and move it back and forth between the cameras.
-
Jamie has informed me of numpy's numpy.savetxt() method, which is exactly what I want for this situation (human-readable text storage of an array). So, I will now be using:
# outfile is the name of the .png graph. data is the array with our desired data.
numpy.savetxt(outfile + '.dat', data)
to save the data. I can later retrieve it with numpy.loadtxt()
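A minimal round-trip of that workflow (the filename and values here are invented for illustration):

```python
import os
import tempfile

import numpy as np

# Two [GPS time, pressure] rows -- made-up values for the demonstration.
data = np.array([[1.04667840e9, 744.11],
                 [1.04667846e9, 744.14]])

path = os.path.join(tempfile.mkdtemp(), "pressure.dat")
np.savetxt(path, data)              # plain text, one row per line
restored = np.loadtxt(path)

assert np.allclose(restored, data)  # round-trip preserves the values
print(restored.shape)
```

By default savetxt writes each value with the `%.18e` format, so nothing meaningful is lost in the human-readable text representation.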
- I took the shutter from AS table to use it for the PSL green. It was sitting near MC REFL path unused (elog #8259).
[Jenne, Manasa]
2 colors 2 arms realized!
1. Spot centering:
We spot centered the IR in both arms.
- Use TT1 and TT2 to center in Y arm (I visually center the spots on the ITM and ETM and then use TTs iteratively)
- Use BS-ETM to center in X arm
Spot positions after centering
              X arm            Y arm
           itmx    etmx     itmy    etmy
  pitch   -0.86    0.37     1.51    0.05
  yaw      0.01   -0.1      0.08    0.10
3. ALS - green alignment
We then moved on to Ygreen. We used the out of vac steering mirrors to center the beam on the 2 irises that are in place on the table, which was a good starting place. After doing that, and tweaking a small amount to overlap the incident and reflected beams on the green steering mirrors, we saw some mode lock. We adjusted the end table steering mirrors until the Ygreen locked on TEM00. We then followed Rana's suggestion of locking the IR to keep the cavity rigid while we optimized the green transmission. Yuta, while adjusting ITMY and ETMY (rather than the out of vac mirrors) had been able to achieve a green transmission for the Yarm of ~2700 counts using the GTRX DC PD that's on the table. We were only able to get ~2200, with brief flashes up to 2500.
After that, we moved on to the X arm. Since there are no irises on the table, we used the shutter as a reference, and the ETM optic itself. Jenne looked through the viewport at the back of the ETM, while Manasa steered mirrors such that we were on the center of the ETM and the shutter. After some tweaking, we saw some higher order modes lock. We had a very hard time getting TEM00 to stay locked for more than ~1 second, even if the IR beam was locked. It looks like we need to translate the beam up in pitch. The leakage of the locked cavity mode is not overlapped with the incident beam or the promptly reflected beam. This indicates that we're pretty far from optimally aligned. Manasa was able to get up to ~2000 counts using the same GTRX PD though (with the Ygreen shutter closed, to avoid confusion). Tomorrow we will get the Xarm resonating green in the 00 mode.
We need to do a little cleanup on the PSL green setup. Yuta installed a shutter (I forget which unused one he took, but it was already connected to the computers.), so we can use it to block the PSL green beam. The idea here is to use the 4th port of the combining beam splitters that are just before each beat PD, and place a PD and camera for each arm. We already have 2 PDs on the table connected to channels, and one camera, so we're almost there. Jenne will work on this tommorrow during the day, so that we can try to get some beat signals and do some handoffs in the evening.
Koji reminded me (again....this is probably the 2nd or 3rd time I've "discovered" this, at least) that the script
..../scripts/MC/WFS/WFS_FilterBank_offsets
exists, and that we should use it sometimes. See his elog 7452 for details.
Notes about using this script:
* Only use it after MC has been very well aligned. MC REFL DC should be less than 0.5 when the MC is locked (with the DC value ~4.5 with the MC unlocked, as usual). This is hard to achieve, but important. Also, check the MC spot centering.
* With the WFS servo off, but the MC locked and light on the WFS diodes, run the script.
Steve just told those of us in the control room that the custodian who goes into the IFO room regularly steps on the blue support beams to reach the top of the chambers to clean them. Since we have seen in the past that stepping on the blue tubes can give the tables a bit of a kick, this could help explain some of the drift, particularly if it was mostly coming from TT2. The custodian has promised Steve that he won't step on the blue beams anymore.
This doesn't explain any of the ~1 hour timescale drift that we see in the afternoons/evenings, so that's still mysterious.
[Manasa, Annalisa, Jenne]
The MC wasn't locking on TEM00 this morning, and the WFS kept pulling the MC out of alignment. The MC was realigned, and the WFS spots are back to being roughly centered (all of this only touching the MC sliders), but the WFS keep doing bad things. They're okay, and improve the alignment slightly at first, but as soon as the FM1 integrator comes on, the MC alignment immediately starts going bad, and within a second or so the MC has unlocked.
The WFS are off right now, and we'll keep investigating after LIGOX.
~20 minutes ago, maybe right around the time the fb's RAID died (elog 8274) the mode cleaner started behaving weirdly again. The reflected value is very high, even with the WFS on. Earlier this evening, I saw that with the WFS off, the MC reflection was high, but the WFS brought it back down to ~0.7 or 0.8. But now it's ~1.3. With the WFS off, the reflected value is ~1.1. I don't really understand.
Also, the PMC has been drifting in alignment in pitch all day, but a lot more later in the day. The PMC trans is 0.800 right now, but it was as high as 0.825 today, and spent most of the day in the high 0.81xxx range today.
I would provide plots, but as mentioned in elog 8274, we can't get data right now.
[Manasa, Jenne].
Quick Note on Multiprocessing: The multiprocessing was plugged into the codebase on March 4. Since then, the various pages that appear when you click on certain tabs (such as the page found here: from clicking the 'IFO' tab) don't display graphs. But, the graphs are being generated (if you click here or here, you will find the two graphs that are supposed to be displayed). So, for some reason, the multiprocessing is preventing these graphs from appearing, even though they are being generated. I rolled back the multiprocessing changes temporarily, so that the newly generated pages look correct until I find the cause of this.
Fixing Plot Limits: The plots generated by the summary_pages.py script have a few problems, one of which is that the graphs don't choose their boundaries in a very useful way. For example, in these pressure plots, the dropout 0 values 'ruin' the graph in the sense that they cause the plot to be scaled from 0 to 760, instead of a more useful range like 740 to 760 (which would allow us to see details better).
The call to the plotting functions begins in process_data() of summary_pages.py, around line 972, with a call to plot_data(). This function takes in a data list (which represents the x-y data values, as well as a few other fields such as axes labels). The easiest way to fix the plots would be to "cleanse" the data list before calling plot_data(). In doing so, we would remove dropout values and obtain a more meaningful plot.
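That cleansing step might look something like this. This is a sketch only — `drop_dropouts` and the threshold value are my inventions; the `(name, times, values)` tuple layout matches the data dump shown further below:

```python
import numpy as np

def drop_dropouts(data, threshold=100.0):
    """Remove samples below threshold (e.g. dropout zeros) so that
    autoscaled plot limits follow the real signal range."""
    cleansed = []
    for name, times, values in data:
        values = np.asarray(values)
        keep = values > threshold            # False only for dropouts
        cleansed.append((name, np.asarray(times)[keep], values[keep]))
    return cleansed

# Pressure trace with two dropout zeros mixed in:
data = [("Pressure", np.arange(6),
         np.array([744.1, 0.0, 744.2, 743.9, 0.0, 744.0]))]
name, t, v = drop_dropouts(data)[0]
print(name, v.min(), v.max())   # Pressure 743.9 744.2
```

With the dropouts removed, matplotlib's autoscaling would pick limits near 740–760 instead of 0–760.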
To observe the data list that is passed to plot_data(), I added the following code:
# outfile is a string that represents the name of the .png file that will be generated by the code.
print_verbose("Saving data into a file.")
print_verbose(outfile)
outfile_mch = open(outfile + '.dat', 'w')
# at this point in process_data(), data is an array that should contain the desired data values.
if data == []:
    print_verbose("Empty data!")
print >> outfile_mch, data
outfile_mch.close()
When I ran this in the code midday, it gave a human-readable array of values that appeared to match the plots of pressure (i.e. values between 740 and 760, with a few dropout 0 values). However, when I let the code run overnight, instead of observing a nice list in 'outfile.dat', I observed:
[('Pressure', array([ 1.04667840e+09, 1.04667846e+09, 1.04667852e+09, ...,
1.04674284e+09, 1.04674290e+09, 1.04674296e+09]), masked_array(data = [ 744.11076965 744.14254761 744.14889221 ..., 742.01931356 742.05930208
742.03433228],
mask = False,
fill_value = 1e+20)
)]
I.e., there was an ellipsis (...) instead of actual data, for some reason. Python does this when printing lists in a few specific situations, the most common of which is that the list is recursively defined. For example:
INPUT:
a = [5]
a.append(a)
print a
OUTPUT:
[5, [...]]
It doesn't seem possible that the definitions for the data array become recursive (especially since the test worked midday). The more likely culprit is numpy: when an array holds more than a threshold number of elements (1000 by default), its repr is summarized with an ellipsis rather than printed in full; this behavior is controlled by numpy.set_printoptions(threshold=...).
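In fact the likely cause is numpy, not Python's list printing: numpy abbreviates the repr of any array with more than a threshold number of elements (1000 by default) using an ellipsis. A quick standalone demonstration (written against a modern numpy; the elog-era version behaved the same way):

```python
import sys

import numpy as np

big = np.arange(2000.0)
assert "..." in repr(big)                    # > 1000 elements: summarized

np.set_printoptions(threshold=sys.maxsize)   # never summarize
assert "..." not in repr(big)

np.set_printoptions(threshold=1000)          # restore the default
```

So the masked_array in the dump above was printed with `...` simply because it held tens of thousands of samples.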
Instead, I will use cPickle to save the data. The disadvantage is that the output is not human readable. But cPickle is very simple to use. I added the lines:
import cPickle
pickle_file = open(outfile + 'pickle.dat', 'wb')  # 'wb': pickles are binary data
cPickle.dump(data, pickle_file)
pickle_file.close()
This should save the 'data' array into a file, from which it can be later retrieved by cPickle.load().
There are other modules I can use that will produce human-readable output, but I'll stick with cPickle for now since it's well supported. Once I verify this works, I will be able to do two things:
1) Cut out the dropout data values to make better plots.
2) When the process_data() function is run in its current form, it reprocesses all the data every time. Instead, I will be able to draw the existing data out of the cPickle file I create. So, I can load the existing data, and only add new values. This will help the program run faster.
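Item (2) — load the cache, append only the new values, and save it back — could be sketched like this. (Written with Python 3's pickle for clarity; the elog code used Python 2's cPickle, and the cache filename and helper names here are made up.)

```python
import os
import pickle

CACHE = "summary_data.pickle"        # illustrative cache filename

def load_cached():
    """Return previously processed data, or an empty list on first run."""
    if os.path.exists(CACHE):
        with open(CACHE, "rb") as f:
            return pickle.load(f)
    return []

def save_cached(data):
    with open(CACHE, "wb") as f:     # 'wb': pickles are binary
        pickle.dump(data, f)

data = load_cached()                 # old data, if any
new_samples = [744.1, 744.2]         # stand-in for freshly fetched values
data.extend(new_samples)
save_cached(data)
print(len(load_cached()))
```

Each run then only has to fetch and process the samples newer than the last cached timestamp, instead of reprocessing everything.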
This is my interpretation of where Steve is proposing to place the seismometers (he wrote ITMX southwest, but I'm pretty sure from the photo he means southeast).
I think his point is that these locations are on the less-used side of the beam tube, so they will not be in the way. Also, they are not underneath the tube, so we will not have any problems putting the covers on/taking them off..
Granite base (20" x 20" x 5") locations are on the CES side of our IFO arms, as shown: ETMY south-west, ETMX north-east, ITMX south-east. No height limitation. This side of the tube has no traffic.
SS cover: McMaster #41815T4 (H) SS container cover.
How to calculate the accumulated round-trip Gouy phase (and hence the transverse mode spacing) of a general cavity
only from the round-trip ABCD matrix
T1300189
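For reference, the standard identity behind that note (this sketch is mine, not taken from T1300189): for a stable resonator the round-trip ABCD matrix M has trace 2·cos ψ, where ψ is the accumulated round-trip Gouy phase, and the transverse mode spacing is f_TMS = FSR · ψ / 2π. For a simple two-mirror cavity:

```python
import math

import numpy as np

c = 299792458.0  # m/s

def gouy_and_tms(L, R1, R2):
    """Round-trip Gouy phase (rad) and transverse mode spacing (Hz)
    of a two-mirror cavity from its round-trip ABCD matrix, using
    cos(psi) = (A + D)/2.  (arccos leaves a branch ambiguity, which
    doesn't matter for the mode spacing modulo the FSR.)"""
    space = np.array([[1.0, L], [0.0, 1.0]])

    def mirror(R):
        return np.array([[1.0, 0.0], [-2.0 / R, 1.0]])

    M = mirror(R1) @ space @ mirror(R2) @ space   # one full round trip
    psi = math.acos((M[0, 0] + M[1, 1]) / 2.0)
    fsr = c / (2.0 * L)
    return psi, fsr * psi / (2.0 * math.pi)

# Symmetric cavity: L = 1 m, R1 = R2 = 2 m, so g1*g2 = 0.25
psi, tms = gouy_and_tms(1.0, 2.0, 2.0)
print(round(math.degrees(psi), 3))   # 120.0 deg (= 2*acos(sqrt(g1*g2)))
print(round(tms / 1e6, 3))           # ~49.965 MHz
```

The nice feature of the ABCD form is that it works for an arbitrary chain of optics, not just two mirrors: multiply out the round-trip matrix and take the trace.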
I'm working on getting the input beam centered on the Yarm optics. To do this, I measured the spot positions, move the tip tilts, realign the cavity, then measure the new spot positions. While doing this, I am also moving the BS and Xarm optics to keep the Xarm aligned, so that I don't have to do hard beam-finding later.
Here is the plot of spot measurements today. The last measurement was taken with no moving, or realigning, just several hours later after speaking with our Indian visitors. I'm closer than I was, but there is more work to do.
Re: POY beam reduction.
We are able to lock the Yarm with the beam / gain as it is. I had thought we might need to increase the DC gain in the whitening board by a factor of 2, but so far it's fine. | http://nodus.ligo.caltech.edu:8080/40m/?id=8259 | CC-MAIN-2022-21 | refinedweb | 4,146 | 80.01 |
{-# LANGUAGE CPP #-} #if defined(__GLASGOW_HASKELL__) && (__GLASGOW_HASKELL__ >= 702) {-# LANGUAGE Trustworthy #-} #endif {-# LANGUAGE DeriveDataTypeable #-} module Data.IterIO.Inum (-- * Base types Inum, Onum -- * Concatenation and fusing operators , (|$), (.|$), cat, lcat, (|.), (.|) -- * Exception functions , inumCatch, inumFinally, inumOnException , resumeI, verboseResumeI -- * Simple enumerator construction function -- $mkInumIntro , ResidHandler, CtlHandler , mkInumC, mkInum, mkInumP , inumBracket -- * Utilities , pullupResid , noCtl, passCtl, consCtl, mkCtl, mkFlushCtl , runIterM, runIterMC, runInum -- * Some basic Inums , inumNop, inumNull, inumPure, enumPure, inumRepeat , inumTee -- * Enumerator construction from Codecs , Codec, runCodec, runCodecC -- * Enumerator construction monad -- $mkInumMIntro , InumM, mkInumM, mkInumAutoM , setCtlHandler, setAutoEOF, setAutoDone , addCleanup, withCleanup , ifeed, ifeed1, ipipe, irun, irepeat, ipopresid, idone ) where import Prelude hiding (null) import Control.Exception (Exception(..)) import Control.Monad import Control.Monad.Trans import Data.Maybe import Data.Monoid import Data.Typeable import System.Environment (getProgName) import System.IO import Data.IterIO.Iter import Data.IterIO.Trans -- -- Enumerator types -- -- | The type of an /iterator-enumerator/, which transcodes data from -- some input type @tIn@ to some output type @tOut@. An @Inum@ acts -- as an 'Iter' when consuming data, then acts as an enumerator when -- feeding transcoded data to another 'Iter'. -- -- At a high level, one can think of an @Inum@ as a function from -- 'Iter's to 'IterR's, where an @Inum@'s input and output types are -- different. A simpler-seeming alternative to @Inum@ might have -- been: -- -- > type Inum' tIn tOut m a = Iter tOut m a -> Iter tIn m a -- -- In fact, given an @Inum@ object @inum@, it is possible to construct -- a function of type @Inum'@ with @(inum '.|')@. But sometimes one -- might like to concatenate @Inum@s. 
For instance, consider a -- network protocol that changes encryption or compression modes -- midstream. Transcoding is done by @Inum@s. To change transcoding -- methods after applying an @Inum@ to an iteratee requires the -- ability to \"pop\" the iteratee back out of the @Inum@ so as to be -- able to hand it to another @Inum@. @Inum@'s return type (@Iter tIn -- m (IterR tOut m a)@ as opposed to @Iter tIn m a@) allows the -- monadic bind operator '>>=' to accomplish this popping in -- conjunction with the 'tryRI' and 'reRunIter' functions. -- -- All @Inum@s must obey the following two rules. -- -- 1. /An/ @Inum@ /may never feed a chunk with the EOF flag set to/ -- /it's target/ 'Iter'. Instead, upon receiving EOF, the @Inum@ -- should simply return the state of the inner 'Iter' (this is how -- \"popping\" the iteratee back out works--If the @Inum@ passed -- the EOF through to the 'Iter', the 'Iter' would stop requesting -- more input and could not be handed off to a new @Inum@). -- -- 2. /An/ @Inum@ /must always return the state of its target/ 'Iter'. -- This is true even when the @Inum@ fails, and is why the 'Fail' -- state contains a @'Maybe' a@ field. -- -- In addition to returning when it receives an EOF or fails, an -- @Inum@ should return when the target 'Iter' returns a result or -- fails. An @Inum@ may also unilaterally return the state of the -- iteratee at any earlier point, for instance if it has reached some -- logical message boundary (e.g., many protocols finish processing -- headers upon reading a blank line). -- -- @Inum@s are generally constructed with one of the 'mkInum' or -- 'mkInumM' functions, which hide most of the error handling details -- and ensure the above rules are obeyed. Most @Inum@s are -- polymorphic in the last type, @a@, in order to work with iteratees -- returning any type. There isn't much reason for an @Inum@ to care -- about the type @a@. 
Had this module used the Rank2Types Haskell -- extension, it would define @Inum@ as: -- -- > type Inum tIn tOut m = forall a. Iter tOut m a -- > -> Iter tIn m (IterR tOut m a) type Inum tIn tOut m a = Iter tOut m a -> Iter tIn m (IterR tOut m a) -- | An @Onum t m a@ is just an 'Inum' in which the input is -- @()@--i.e., @'Inum' () t m a@--so that there is no meaningful input -- data to transcode. Such an enumerator is called an -- /outer enumerator/, because it must produce the data it feeds to -- 'Iter's by either executing actions in monad @m@, or from its own -- internal pure state (as for 'enumPure'). -- -- As with 'Inum's, an @Onum@ should under no circumstances ever feed -- a chunk with the EOF bit set to its 'Iter' argument. When the -- @Onum@ runs out of data, it must simply return the current state of -- the 'Iter'. This way more data from another source can still be -- fed to the iteratee, as happens when enumerators are concatenated -- with the 'cat' function. -- -- @Onum@s should generally be constructed using the 'mkInum' or -- 'mkInumM' function, just like 'Inum's, the only difference being -- that for an @Onum@ the input type is @()@, so executing 'Iter's to -- consume input will be of little use. type Onum t m a = Inum () t m a -- Concatenation and fusing functions -- | Run an 'Onum' on an 'Iter'. This is the main way of actually -- executing IO with 'Iter's. @|$@ is a type-restricted version of -- the following code, in which @inum@ must be an 'Onum': -- -- @ -- inum |$>= 'ungetI' . (: []) loop where loop = do t <- 'dataI' -- AutoEOF flag will handle IterEOF err 'ifeed' $ L.concat t -- AutoDone flag will catch True result loop @ The 'addCleanup' function registers actions that should always be executed when the 'Inum' finishes. Here we use it to place residual data from the target 'Iter' back into the `Inum`'s input stream. 
-- Finally, there is a function 'irepeat' that automatically sets the
-- /AutoEOF/ and /AutoDone/ flags and then loops forever on an 'InumM'
-- computation.  Using 'irepeat' to simplify further, we have:
--
-- @
--  'inumConcat' = 'mkInumM' $ 'withCleanup' ('ipopresid' >>= 'ungetI' . (: [])) $
--                 'irepeat' $ 'dataI' >>= 'ifeed' . L.concat
-- @
--
-- 'withCleanup', demonstrated here, is a variant of 'addCleanup' that
-- cleans up after a particular action, rather than at the end of the
-- `Inum`'s whole execution.  (At the outermost level, as used here,
-- `withCleanup`'s effects are identical to `addCleanup`'s.)
--
-- In addition to 'ifeed', the 'ipipe' function invokes a different
-- 'Inum' from within the 'InumM' monad, piping its output directly to
-- the target 'Iter'.  As an example, consider an 'Inum' that processes
-- a mail message and appends a signature line, implemented as follows:
--
-- @
--  inumAddSig :: (Monad m) => 'Inum' L.ByteString L.ByteString m a
-- @
--
-- 'setAutoDone' 'True'@ as the first thing inside 'mkInumM'.)
mkInumAutoM :: (ChunkData tIn, ChunkData tOut, Monad m) =>
               InumM tIn tOut m a b -> Inum tIn tOut m a
mkInumAutoM inumm iter0 =
    runInumM inumm defaultInumState { insIter = IterF iter0
                                    , insAutoEOF = True
                                    , insAutoDone = True
                                    }

-- | Build an 'Inum' out of an 'InumM' computation.  If you run
-- 'mkInumM' inside the @'Iter' tIn m@ monad (i.e., to create an
-- enumerator of type @'Inum' tIn tOut m a@), then the 'InumM'
-- computation will be in a Monad of type @'Iter' t tm@ where @tm@ is
-- a transformed version of @m@.  This has the following two
-- consequences:
--
--  - If you wish to execute actions in monad @m@ from within your
--    'InumM' computation, you will have to apply @'lift'@ twice (as
--    in @'lift' $ 'lift' action_in_m@) rather than just once.
--
--  - If you need to execute actions in the @'Iter' t m@ monad, you
--    will have to lift them with the 'liftI' function.
--
-- The 'InumM' computation you construct can feed output of type
-- @tOut@ to the target 'Iter' (which is implicitly contained in the
-- monad state), using the 'ifeed', 'ipipe', and 'irun' functions.
mkInumM :: (ChunkData tIn, ChunkData tOut, Monad m) =>
           InumM tIn tOut m a b -> Inum tIn tOut m a
mkInumM inumm iter0 =
    runInumM inumm defaultInumState { insIter = IterF iter0 }

-- | Used from within the 'InumM' monad to feed data to the target
-- 'Iter'.  Returns @'False'@ if the target 'Iter' is still active and
-- @'True'@ if the iter has finished and the 'Inum' should also
-- return.  (If the @autoDone@ flag is @'True'@, then @ifeed@,
-- @ipipe@, and @irun@ will never actually return @'True'@, but
-- instead just immediately run cleanup functions and exit the
-- 'Inum' when the target 'Iter' stops being active.)
ifeed :: (ChunkData tIn, ChunkData tOut, Monad m) =>
         tOut -> InumM tIn tOut m a Bool
ifeed = ipipe . inumPure

-- | A variant of 'ifeed' that throws an exception of type 'IterEOF'
-- if the data being fed is 'null'.  Convenient when reading input
-- with a function (such as "Data.ListLike"'s @hget@) that returns 0
-- bytes instead of throwing an EOF exception to indicate end of file.
-- For instance, the main loop of @'enumFile'@ could be implemented
-- as:
--
-- @
--  'irepeat' $ 'liftIO' ('LL.hGet' h 'defaultChunkSize') >>= 'ifeed1'
-- @
ifeed1 :: (ChunkData tIn, ChunkData tOut, Monad m) =>
          tOut -> InumM tIn tOut m a Bool
ifeed1 dat = if null dat then throwEOFI "ifeed1" else ifeed dat

-- | Apply another 'Inum' to the target 'Iter' from within the 'InumM'
-- monad.  As with 'ifeed', returns @'True'@ when the 'Iter' is
-- finished.
--
-- Note that the applied 'Inum' must handle all control requests.  (In
-- other words, ones it passes on are not caught by whatever handler
-- is installed by 'setCtlHandler', but if the 'Inum' returns the
-- 'IterR' in the 'IterC' state, as 'inumPure' does, then requests
-- will be handled.)
ipipe :: (ChunkData tIn, ChunkData tOut, Monad m) =>
         Inum tIn tOut m a -> InumM tIn tOut m a Bool
ipipe inum = do
  s <- iget
  r <- tryRI (liftI (inum $ reRunIter $ insIter s)) >>= getIter
       >>= liftI . runIterRMC (insCtl s)
  iput s { insIter = r }
  let done = not $ isIterActive r
  if done && insAutoDone s then idone else return done
    where getIter (Right i) = return i
          getIter (Left r@(Fail _ (Just i) _)) = do
            imodify $ \s -> s { insIter = i }
            reRunIter r
          getIter (Left r) = reRunIter r

-- | Apply an 'Onum' (or 'Inum' of an arbitrary, unused input type) to
-- the 'Iter' from within the 'InumM' monad.  As with 'ifeed', returns
-- @'True'@ when the 'Iter' is finished.
irun :: (ChunkData tAny, ChunkData tIn, ChunkData tOut, Monad m) =>
        Inum tAny tOut m a -> InumM tIn tOut m a Bool
irun onum = ipipe $ runI . onum

-- | Repeats an action until the 'Iter' is done or an EOF error is
-- thrown.  (Also stops if a different kind of exception is thrown, in
-- which case the exception propagates further and may cause the
-- 'Inum' to fail.)  @irepeat@ sets both the /AutoEOF/ and
-- /AutoDone/ flags to @'True'@.
irepeat :: (ChunkData tIn, Monad m) =>
           InumM tIn tOut m a b -> InumM tIn tOut m a ()
irepeat action = do
  imodify $ \s -> s { insAutoEOF = True, insAutoDone = True }
  let loop = action >> loop in loop

-- | If the target 'Iter' being fed by the 'Inum' is no longer active
-- (i.e., if it is in the 'Done' state or in an error state), this
-- function pops the residual data out of the 'Iter' and returns it.
-- If the target is in any other state, returns 'mempty'.
ipopresid :: (ChunkData tIn, ChunkData tOut, Monad m) =>
             InumM tIn tOut m a tOut
ipopresid = do
  s <- iget
  case insIter s of
    r | isIterActive r -> return mempty
      | otherwise -> do
          let (Chunk t _) = getResid r
          iput s { insIter = setResid r mempty }
          return t

-- | Immediately perform a successful exit from an 'InumM' monad,
-- terminating the 'Inum' and returning the current state of the
-- target 'Iter'.  Can be used to end an 'irepeat' loop.  (Use
-- @'throwI' ...@ for an unsuccessful exit.)
idone :: (ChunkData tIn, Monad m) => InumM tIn tOut m a b
idone = setAutoEOF True >> throwEOFI "idone"

-- | An 'Inum' that acts like 'inumNop', except that before passing
-- data on, it feeds a copy to a \"tee\" 'Iter' (by analogy with the
-- Unix @tee@ utility), which may, for instance, transform and log the
-- data.
--
-- The tee `Iter`'s return value is ignored.  If the tee 'Iter'
-- returns before an EOF is received and before the target 'Iter' has
-- finished processing input, then @inumTee@ will continue to pass
-- data to the target 'Iter'.  However, if the tee 'Iter' fails, then
-- this will cause @inumTee@ to fail immediately.
--
-- As an example, one could implement something close to
-- @'inumStderr'@ (from "Data.IterIO.ListLike") as follows:
--
-- > inumStderr = inumTee $ handleI stderr
--
-- (Except note that the real @'inumStderr'@ does not close its file
-- descriptor, while the above implementation will send an EOF to
-- @'handleI'@, causing @stderr@ to be closed.)
inumTee :: (ChunkData t, Monad m) =>
           Iter t m b -> Inum t t m a
inumTee tee0 iter0 =
    runInumM (chunk0I >>= loop tee0) nopInumState { insIter = IterF iter0 }
    where chunk0I = Iter $ \c@(Chunk _ eof) -> Done c (Chunk mempty eof)
          loop tee c = liftI (runIterMC (passCtl pullupResid) tee c) >>= feed c
          feed (Chunk d False) (IterF tee) = do
            done <- ifeed d `onExceptionI` liftI (runI tee)
            if done
              then liftI (runI tee) >> return ()
              else chunkI >>= loop tee
          feed (Chunk d True) (IterF _) = ifeed d >> return ()
          feed _ (Fail r _ c) = reRunIter $ Fail r Nothing c
          feed (Chunk d eof) (Done _ _) = do
            done <- ifeed d
            unless (done || eof) $ ipipe inumNop >> return ()
          feed _ _ = error "inumTee"
A Developer's Notebook
Download source code from ProjectDistributor.net
Figure 1. Sample form with custom drawn border.
I have split the code into two primary classes. The first, FormWithNonClientArea, extends the standard Form class by adding support for non-client area messages (more on this shortly) and can be used in various scenarios. The second, CustomBorderForm, utilizes these messages and represents a base class for drawing graphical borders composed of several bitmap parts. It also draws a form header including the icon, text and form buttons (more on this later). This way I can separate the dirty plumbing required for enabling non-client drawing from the actual drawing of the graphical elements. So let's see how it works.
Each window we see on screen (be it a Form, UserControl or any other Control) is described by two rectangles: the bounds of the window and its client area. Bounds specify the location and size of the window as a whole, while the client area specifies the region inside the window that is available for client controls. By default Windows Forms allows us to access only the client part of the window. To gain access to the non-client part we need to intercept some additional Windows messages. We can do this by overriding the WndProc message loop. For each message I defined a dedicated method, so my WndProc method only redirects calls to these methods.
If we are going to draw our custom borders, good chances are that their size and proportions will differ from the standard ones. To correct this we need to specify a new client rectangle for the window. This is done in the WM_NCCALCSIZE message. This message can be raised in two ways. When WParam is equal to zero, the LParam points to a RECT structure with the window bounds that we should adjust to the proposed client rectangle. Alternatively, when the WParam value is one, the LParam points to an NCCALCSIZE_PARAMS structure allowing us to move the existing client area inside the window. For our purpose we will simply adjust the proposed rectangle to the required coordinates.
private void WmNCCalcSize(ref Message m)
{
    if (m.WParam == NativeMethods.FALSE)
    {
        NativeMethods.RECT ncRect = (NativeMethods.RECT)m.GetLParam(typeof(NativeMethods.RECT));
        Rectangle proposed = ncRect.Rect;
        OnNonClientAreaCalcSize(ref proposed);
        ncRect = NativeMethods.RECT.FromRectangle(proposed);
        Marshal.StructureToPtr(ncRect, m.LParam, false);
        m.Result = IntPtr.Zero;
    }
    else if (m.WParam == NativeMethods.TRUE)
    {
        NativeMethods.NCCALCSIZE_PARAMS ncParams =
            (NativeMethods.NCCALCSIZE_PARAMS)m.GetLParam(typeof(NativeMethods.NCCALCSIZE_PARAMS));
        Rectangle proposed = ncParams.rectProposed.Rect;
        OnNonClientAreaCalcSize(ref proposed);
        ncParams.rectProposed = NativeMethods.RECT.FromRectangle(proposed);
        Marshal.StructureToPtr(ncParams, m.LParam, false);
    }
}
Note that this method calls a virtual OnNonClientAreaCalcSize method taking a Rectangle that you can override in your code.
The main message responsible for painting the non-client area is the WM_NCPAINT message. The WParam for this message contains the handle to a clip region, or 1 if the entire window should be repainted. So to paint anything we only need to create a Graphics object from the window handle and use it as we would in the typical OnPaint method.
private void WmNCPaint(ref Message msg)
{
    PaintNonClientArea(msg.HWnd, (IntPtr)msg.WParam);
    msg.Result = NativeMethods.TRUE;
}
Now for the tricky part: if you leave it that way, you'll quickly notice that on some occasions you still get parts of the standard border painted over your brand new framing. That indicates that there are some other messages that cause painting in the non-client area.
The first one is the WM_SETTEXT message that transports the new title for the window (stored as the Text property on the Form). Apparently it also repaints the border in order to update the title bar. Of course, we still want to send out the new title, so we need to pass the message to the DefWndProc method. But we will handle painting on our own.
private void WmSetText(ref Message msg)
{
    DefWndProc(ref msg);
    PaintNonClientArea(msg.HWnd, (IntPtr)1);
}
The second culprit happens to be the WM_NCACTIVATE message that is responsible for switching the window's active state. A window is active when it is the top-level window that you interact with, and it has a different border to show that. When you switch to another window, the first one updates its border to indicate that it has lost the focus. The WParam of this message holds the window active state and is 1 when the border should be drawn as active and zero otherwise. We will handle the painting ourselves and skip to DefWndProc only when the window is minimized.
private void WmNCActivate(ref Message msg)
{
    bool active = (msg.WParam == NativeMethods.TRUE);
    if (this.WindowState == FormWindowState.Minimized)
        DefWndProc(ref msg);
    else
    {
        PaintNonClientArea(msg.HWnd, (IntPtr)1);
        msg.Result = NativeMethods.TRUE;
    }
}
I agree that this is a big design inconsistency and all painting should be done in one place, but it has been around for a long time and we must live with it. Now that we have cleared this up, we can get down to the actual painting.
The most important thing here is to get the correct hDC handle, and we will use the native GetDCEx function for that. It takes three parameters: the window handle, the clip region and the options. The first two we already got from the messages. As for the options, MSDN states that only WINDOW and INTERSECTRGN are needed, but other sources confirm that CACHE is required on Win9x and that you need CLIPSIBLINGS to prevent painting on overlapping windows.
If we get a valid hDC we can quickly create the Graphics object using the Graphics.FromHdc() method, paint our stuff and dispose it. It is worth noting that when we dispose a Graphics instance it will also automatically free the hDC, so there is no need to call ReleaseDC manually.
private void PaintNonClientArea(IntPtr hWnd, IntPtr hRgn)
{
    NativeMethods.RECT windowRect = new NativeMethods.RECT();
    if (NativeMethods.GetWindowRect(hWnd, ref windowRect) == 0)
        return;

    Rectangle bounds = new Rectangle(0, 0,
        windowRect.right - windowRect.left,
        windowRect.bottom - windowRect.top);
    if (bounds.Width == 0 || bounds.Height == 0)
        return;

    Region clipRegion = null;
    if (hRgn != (IntPtr)1)
        clipRegion = System.Drawing.Region.FromHrgn(hRgn);

    // The DCX_* flag constants below follow the GetDCEx options discussed
    // above (WINDOW, INTERSECTRGN, CACHE and CLIPSIBLINGS); they are assumed
    // to be declared in the NativeMethods wrapper class.
    IntPtr hDC = NativeMethods.GetDCEx(hWnd, hRgn,
        NativeMethods.DCX_WINDOW | NativeMethods.DCX_CACHE |
        NativeMethods.DCX_INTERSECTRGN | NativeMethods.DCX_CLIPSIBLINGS);
    if (hDC == IntPtr.Zero)
        return;

    using (Graphics g = Graphics.FromHdc(hDC))
    {
        OnNonClientAreaPaint(new NonClientPaintEventArgs(g, bounds, clipRegion));
    }
}
At the beginning of this method I use the native GetWindowRect function to get the correct coordinates of the window. At this point the Bounds property is not accurate and, especially during resizing, seems to always lag behind. Next I validate the window size, as obviously no painting is needed when it is empty. The actual painting should be done in the virtual OnNonClientAreaPaint method.
Unfortunately, painting this way is fine only as long as you don't try to resize the window. When you do, you will see very unpleasant flickering. Totally not cool. We need to apply double-buffering in order to fix it, and I just found a cool mechanism in the .NET Framework that should help with that.
There is a class called BufferedGraphics buried in the System.Drawing namespace. It's the same class that is used when you set the DoubleBuffered flag on any control. (To be honest I haven't checked if this class existed prior to .NET 2.0.) There is also a factory class called BufferedGraphicsManager that we use to create such an object. The Allocate method takes either an existing Graphics object or the target DC handle. Having an instance of BufferedGraphics, we obtain a real Graphics object, do the painting as usual, and then call the Render method to draw the buffered image to the screen (presumably using some form of bit blitting).
using (BufferedGraphics bg = BufferedGraphicsManager.Current.Allocate(hDC, bounds))
{
    Graphics g = bg.Graphics;
    OnNonClientAreaPaint(new NonClientPaintEventArgs(g, bounds, clipRegion));
    bg.Render();
}
Whew, the above code looks too simple to possibly work. And indeed it doesn't. It all looks good when the window stays active, but when it gets covered by another window, suddenly all of the client area gets painted in black. So there is something missing, like establishing a clip region to exclude this area from blitting. I hope that someone smarter than me could help and figure out a better way to fix this.
There are two more things that need to be done in order to get a perfectly drawn custom border. The first is to completely get rid of XP themes on our window. We have already taken over all painting, but when themes are turned on they might also affect other aspects of the window. For example, they would likely change the window shape to something non-rectangular (like adding round corners), and obviously we want to prevent this. We will use the SetWindowTheme native function with empty parameters to completely disable theming on the current window. Note however that this will only affect the window itself, so you don't need to worry about losing theming on the controls placed in its content area.
As for the second thing, I wonder how many of you have heard about the Windows ghosting feature? I didn't know about it until recently. Quoting MSDN: "Window ghosting is a Windows Manager feature that lets the user minimize, move, or close the main window of an application that is not responding." Basically, when the process doesn't respond to window messages within the designated time (hangs), the Windows Manager will finally lose patience and draw the window frame by itself, allowing the user to do something with the application. This can happen when the process executes some long-running task (like a query or complex processing) in the same thread as the windows message loop.
In theory, for a well-written application that delegates all heavy processing to background workers this should never happen. But I haven't written such an application yet. This feature can be disabled using the DisableProcessWindowsGhosting native function, but only for the entire application. Now it's your decision whether you want to present the users with a consistent user experience even on these odd occasions, or whether you can cope with some occasional quirks but let the user control the situation all the time.
protected override void OnHandleCreated(EventArgs e)
{
    NativeMethods.SetWindowTheme(this.Handle, "", "");
    NativeMethods.DisableProcessWindowsGhosting();
    base.OnHandleCreated(e);
}
After we have positioned and painted the border, it's time to make it behave like the normal one does. One such behavior is indicating whether the form is sizable when the mouse moves over the border. Another is that when we double-click on the title bar, the window maximizes or restores respectively. And of course the window recognizes when the mouse is over, or the user clicks on, one of the form buttons (minimize, maximize, close, and help). To make this work properly we need to tell the system how all these elements are positioned on our form. This is done in the WM_NCHITTEST message. The LParam holds the screen coordinates of the current mouse position. As a result we should return a hit-test code telling the system what part of the window the mouse is over.
private void WmNCHitTest(ref Message msg)
{
    Point clientPoint = this.PointToWindow(new Point(msg.LParam.ToInt32()));
    msg.Result = new System.IntPtr(OnNonClientAreaHitTest(clientPoint));
}
When the mouse is on the border edge and the window is resizable, we should return values like HTLEFT, HTTOPRIGHT or HTBOTTOM. When the mouse is over one of the window buttons, we return the code for that button (HTMINBUTTON, HTMAXBUTTON, HTCLOSE). To indicate that the mouse hovers over the title bar we can return the HTCAPTION value. Finally, when the mouse is inside the client rectangle we should return the HTCLIENT value.
Capturing mouse movement is quite simple. The WM_NCMOUSEMOVE message delivers the new mouse position each time it is moved over the non-client area. Here, I am using the standard MouseEventArgs to pass this to the virtual method.
private void WmNCMouseMove(ref Message msg)
{
    Point clientPoint = this.PointToWindow(new Point(msg.LParam.ToInt32()));
    OnNonClientMouseMove(new MouseEventArgs(MouseButtons.None, 0,
        clientPoint.X, clientPoint.Y, 0));
    msg.Result = IntPtr.Zero;
}
To capture mouse clicks we should intercept the WM_NCLBUTTONDOWN and WM_NCLBUTTONUP messages for the left mouse button (and the similar messages for the other two buttons). For all these messages the WParam contains the hit-test value that we returned when processing the WM_NCHITTEST message, and the LParam contains the screen coordinates of the mouse cursor. I'm using the extended NonClientMouseEventArgs class to pass all this information to the virtual method. In return the method should set the Handled flag to indicate that our application processed this message.
private void WmNCLButtonDown(ref Message msg)
{
    Point pt = this.PointToWindow(new Point(msg.LParam.ToInt32()));
    NonClientMouseEventArgs args = new NonClientMouseEventArgs(
        MouseButtons.Left, 1, pt.X, pt.Y, 0, msg.WParam.ToInt32());
    OnNonClientMouseDown(args);
    if (!args.Handled)
    {
        DefWndProc(ref msg);
    }
    msg.Result = NativeMethods.TRUE;
}
Continued in part two, where you learn how to actually construct a border from bitmap parts and how to handle title bar buttons.
> GMapViewer-src.zip > ResourcePool.java
package org.sreid.j2me.util;

/**
 * Manages shared resources, making sure that each resource is never used by
 * more than one client at any given time.
 */
public class ResourcePool {

    private final Resource[] pool;
    private boolean traceEnabled = false;

    /**
     * Constructs a ResourcePool to manage the specified resources.
     */
    // NOTE: byte[][] is less general, but Dave Morehouse reports that
    // Object[] fails verification on some devices
    public ResourcePool(byte[][] resources) {
        pool = new Resource[resources.length];
        for (int i = 0; i < resources.length; i++) {
            pool[i] = new Resource(resources[i]);
        }
    }

    /**
     * Calls claimResource, catching InterruptedException.
     */
    public Object claimResourceIgnoreInterrupt() {
        for (;;) {
            try {
                return claimResource();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    /**
     * Claims a resource, waiting until one is available if necessary. You MUST
     * pass the returned resource to releaseResource when you're done with it
     * or it will never be made available again. Use try/finally to ensure that
     * claimed resources are released.
     */
    public synchronized Object claimResource() throws InterruptedException {
        for (;;) {
            for (int i = 0; i < pool.length; i++) {
                Resource r = pool[i];
                if (!r.claimed) {
                    r.claimed = true;
                    if (traceEnabled)
                        r.trace = new Throwable();
                    return r.resource;
                }
            }
            // No resources available.
            wait();
        }
    }

    /**
     * Releases a previously claimed resource.
     */
    public synchronized void releaseResource(Object resource) {
        for (int i = 0; i < pool.length; i++) {
            Resource r = pool[i];
            if (r.resource == resource) {
                if (!r.claimed)
                    throw new IllegalArgumentException(
                        "Tried to release an unclaimed resource: " + resource);
                r.claimed = false;
                r.trace = null;
                notify();
                return;
            }
        }
        throw new IllegalArgumentException("Not a known resource: " + resource);
    }

    /**
     * Returns true if claimResource would return without blocking. Be sure to
     * synchronize on the ResourcePool instance between calling
     * isResourceAvailable and claimResource, or another thread might claim the
     * resource between the two calls.
     */
    public synchronized boolean isResourceAvailable() {
        for (int i = 0; i < pool.length; i++) {
            if (!pool[i].claimed)
                return true;
        }
        return false;
    }

    /**
     * Dumps stack traces of when each claimed resource was claimed. This can
     * be used for debugging purposes, to locate the source of a resource leak.
     * Only works if setTraceEnabled(true) was previously called.
     */
    public synchronized void dumpTrace() {
        System.err.println("ResourcePool resource claimant trace:");
        for (int i = 0; i < pool.length; i++) {
            Resource r = pool[i];
            if (!r.claimed)
                System.err.println("(unclaimed)");
            else if (r.trace == null)
                System.err.println("(no trace available)");
            else
                r.trace.printStackTrace();
        }
    }

    /**
     * Enables or disables resource claim tracing, as required for dumpTrace.
     * Be aware that enabling tracing may have performance implications.
     */
    public synchronized void setTraceEnabled(boolean traceEnabled) {
        this.traceEnabled = traceEnabled;
    }

    private static final class Resource {
        final Object resource;
        boolean claimed = false;
        Throwable trace = null; // stack trace from when the resource was claimed

        Resource(Object r) {
            this.resource = r;
        }
    }
}
Debugging ASP.NET AJAX Applications
Dan Wahlin
The ability to debug code is a skill that every developer should have in their arsenal regardless of the technology they're using. It goes without saying that understanding the different debugging options that are available can save a tremendous amount of time on a project and perhaps even a few headaches. While many developers are accustomed to using Visual Studio .NET or Web Developer Express to debug ASP.NET applications that use VB.NET or C# code, some aren't aware that it's also extremely useful for debugging client-side code such as JavaScript. The same type of techniques used to debug .NET applications can also be applied to AJAX-enabled applications and more specifically ASP.NET AJAX applications.
In this article you'll see how Visual Studio 2008 and several other tools can be used to debug ASP.NET AJAX applications to quickly locate bugs and other issues. This discussion will include information about enabling Internet Explorer 6 or higher for debugging, using Visual Studio 2008 and the Script Explorer to step through code as well as using other free tools such as Web Development Helper. You'll also learn how to debug ASP.NET AJAX applications in Firefox using an extension named Firebug which lets you step through JavaScript code directly in the browser without any other tools. Finally, you'll be introduced to classes in the ASP.NET AJAX Library that can help with various debugging tasks such as tracing and code assertion statements.
Before you try to debug pages viewed in Internet Explorer there are a few basic steps you'll need to perform to enable it for debugging. Let's take a look at some basic setup requirements that need to be performed to get started.
Configuring Internet Explorer for Debugging
Most people aren't interested in seeing JavaScript issues encountered on a Website viewed with Internet Explorer. In fact, the average user wouldn't even know what to do if they saw an error message. As a result, debugging options are turned off by default in the browser. However, it's very straightforward to turn debugging on and put it to use as you develop new AJAX applications.
To enable debugging functionality, go to Tools > Internet Options on the Internet Explorer menu and select the Advanced tab. Within the Browsing section ensure that the following items are unchecked:
- Disable script debugging (Internet Explorer)
- Disable script debugging (Other)
Although not required, if you're trying to debug an application you'll probably want any JavaScript errors in the page to be immediately visible and obvious. You can force all errors to be shown with a message box by checking the "Display a notification about every script error" checkbox. While this is a great option to turn on while you're developing an application, it can quickly become annoying if you're just perusing other Websites since your chances of encountering JavaScript errors are pretty good.
Figure 1 shows what the Internet Explorer advanced dialog should look like after it has been properly configured for debugging.
Figure 1: Configuring Internet Explorer for debugging. (Click to view full-size image)
Once debugging has been turned on, you'll see a new menu item appear in the View menu named Script Debugger. It has two options available including Open and Break at Next Statement. When Open is selected you'll be prompted to debug the page in Visual Studio 2008 (note that Visual Web Developer Express can also be used for debugging). If Visual Studio .NET is currently running you can choose to use that instance or to create a new instance. When Break at Next Statement is selected you'll be prompted to debug the page when JavaScript code is executed. If JavaScript code executes in the onLoad event of the page you can refresh the page to trigger a debug session. If JavaScript code is run after a button is clicked then the debugger will run immediately after the button is clicked.
Note: if you are running on Windows Vista with User Access Control (UAC) enabled, and you have Visual Studio 2008 set to run as an administrator, Visual Studio will fail to attach to the process when you are prompted to attach. To work around this issue, start Visual Studio first, and use that instance to debug.
Although the next section will demonstrate how to debug an ASP.NET AJAX page directly from within Visual Studio 2008, using the Internet Explorer Script Debugger option is useful when a page is already open and you'd like to more fully inspect it.
Debugging with Visual Studio 2008
Visual Studio 2008 provides debugging functionality that developers around the world rely on every day to debug .NET applications. The built-in debugger allows you to step through code, view object data, watch for specific variables, monitor the call stack plus much more. In addition to debugging VB.NET or C# code, the debugger is also helpful for debugging ASP.NET AJAX applications and will allow you to step through JavaScript code line by line. The details that follow focus on techniques that can be used to debug client-side script files rather than providing a discourse on the overall process of debugging applications using Visual Studio 2008.
The process of debugging a page in Visual Studio 2008 can be started in several different ways. First, you can use the Internet Explorer Script Debugger option mentioned in the previous section. This works well when a page is already loaded in the browser and you'd like to start debugging it. Alternatively, you can right-click on an .aspx page in the Solution Explorer and select Set As Start Page from the menu. If you're accustomed to debugging ASP.NET pages then you've probably done this before. Once F5 is pressed the page can be debugged. However, while you can generally set a breakpoint anywhere you'd like in VB.NET or C# code, that's not always the case with JavaScript as you'll see next.
Embedded Versus External Scripts
The Visual Studio 2008 debugger treats JavaScript embedded in a page differently from external JavaScript files. With external script files, you can open the file and set a breakpoint on any line you choose. Breakpoints can be set by clicking in the grey tray area to the left of the code editor window. When JavaScript is embedded directly into a page using the <script> tag, setting a breakpoint by clicking in the grey tray area isn't an option. Attempts to set a breakpoint on a line of embedded script will result in a warning that states "This is not a valid location for a breakpoint".
You can get around this issue by moving the code into an external .js file and referencing it using the src attribute of the <script> tag:
<script type="text/javascript" src="Scripts/YourScript.js"></script>
What if moving the code into an external file isn't an option or requires more work than it's worth? While you can't set a breakpoint using the editor, you can add the debugger statement directly into the code where you'd like to start debugging. You can also use the Sys.Debug class available in the ASP.NET AJAX library to force debugging to start. You'll learn more about the Sys.Debug class later in this article.
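For instance, a bare debugger statement can be wrapped in a small helper so it only breaks when a flag is turned on, letting you leave the call in place while you work. The sketch below is illustrative only; the DEBUG_MODE flag and breakIf function are hypothetical names, not part of the ASP.NET AJAX library:

```javascript
// Hypothetical helper: only breaks into an attached script debugger
// when debugging is explicitly enabled.
var DEBUG_MODE = true;

function breakIf(condition) {
    if (DEBUG_MODE && condition) {
        debugger; // a no-op unless a script debugger is attached
        return true;
    }
    return false;
}

// Example: break only when a suspicious condition shows up.
var triggered = breakIf(1 + 1 === 2);
```

Because the debugger statement is ignored when no debugger is attached, a helper like this is safe to leave in during development.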
An example of using the debugger keyword is shown in Listing 1. This example forces the debugger to break right before a call to an update function is made.
Listing 1. Using the debugger keyword to force the Visual Studio .NET debugger to break.
function BuildPerson() {
    var person = {
        FirstName: $get("txtFirstName").value,
        LastName: $get("txtLastName").value,
        Address: {
            Street: $get("txtStreet").value,
            City: $get("txtCity").value,
            State: $get("txtState").value
        }
    };
    debugger;
    UpdatePerson(person);
}
Once the debugger statement is hit you will be prompted to debug the page using Visual Studio .NET and can begin stepping through the code. While doing this you may encounter an issue with accessing ASP.NET AJAX library script files used in the page so let's take a look at using Visual Studio .NET's Script Explorer.
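As an aside, the $get function used in Listing 1 is the ASP.NET AJAX shortcut for looking up an element by its id (essentially document.getElementById). The stub below approximates that behavior with a plain object so the example is self-contained outside a browser; the element table and field values are made up for illustration:

```javascript
// Stand-in element table (in the browser, $get resolves real DOM elements).
var elementsById = {
    txtFirstName: { value: "Dan" },
    txtLastName: { value: "Wahlin" }
};

// Minimal approximation of the ASP.NET AJAX $get shortcut.
function $get(id) {
    return elementsById.hasOwnProperty(id) ? elementsById[id] : null;
}

var firstName = $get("txtFirstName").value; // "Dan"
```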
Using Visual Studio .NET Windows to Debug
Once a debug session is started and you begin walking through code using the default F11 key, you may encounter the error dialog shown in Figure 2 unless all script files used in the page are open and available for debugging.
Figure 2: Error dialog shown when no source code is available for debugging. (Click to view full-size image)
This dialog is shown because Visual Studio .NET isn't sure how to get to the source code of some of the scripts referenced by the page. While this can be quite frustrating at first, there's a simple fix. Once you have started a debug session and hit a breakpoint, go to Debug > Windows > Script Explorer on the Visual Studio 2008 menu or use the Ctrl+Alt+N hotkey.
Note: If you can't see the Script Explorer menu listed, go to Tools > Customize > Commands on the Visual Studio .NET menu. Locate the Debug entry in the Categories section and click it to show all available menu entries. In the Commands list, scroll down to Script Explorer and then drag it up onto the Debug > Windows menu mentioned earlier. Doing this will make the Script Explorer menu entry available each time you run Visual Studio .NET.
The Script Explorer can be used to view all scripts used in a page and open them in the code editor. Once the Script Explorer is open, double-click on the .aspx page currently being debugged to open it in the code editor window. Perform the same action for all of the other scripts shown in the Script Explorer. Once all of the scripts are open in the code window you can press F11 (and use the other debug hotkeys) to step through your code. Figure 3 shows an example of the Script Explorer. It lists the current file being debugged (Demo.aspx) as well as two custom scripts and two scripts dynamically injected into the page by the ASP.NET AJAX ScriptManager.
Figure 3. The Script Explorer provides easy access to scripts used in a page. (Click to view full-size image)
Several other windows can also be used to provide useful information as you step through code in a page. For example, you can use the Locals window to see the values of different variables used in the page, or the Immediate window to evaluate specific variables or conditions and view the output. You can also use the Output window to view trace statements written out using the Sys.Debug.trace function (which will be covered later in this article) or Internet Explorer's Debug.writeln function.
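Conceptually, a trace call is little more than appending a message to a log that the debugger then surfaces in the Output window. The sketch below is a stand-in used to illustrate that idea, not the real Debug or Sys.Debug implementation; in an actual page you would call Debug.writeln or Sys.Debug.trace directly:

```javascript
// Illustrative trace log; messages are kept in an array so they can
// also be inspected programmatically (e.g., from an immediate window).
var TraceLog = {
    messages: [],
    writeln: function (msg) {
        this.messages.push(msg);
        if (typeof console !== "undefined" && console.log) {
            console.log(msg); // surfaces in the debugger's output window
        }
    }
};

TraceLog.writeln("Person name: Wahlin");
```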
As you step through code using the debugger you can mouse over variables in the code to view the value that they are assigned. However, the script debugger occasionally won't show anything as you mouse over a given JavaScript variable. To see the value, highlight the statement or variable you're trying to see in the code editor window and then mouse over it. Although this technique doesn't work in every situation, many times you will be able to see the value without having to look in a different debug window such as the Locals window.
A video tutorial demonstrating some of the features discussed here can be viewed at.
Debugging With Web Development Helper
Although Visual Studio 2008 (and Visual Web Developer Express 2008) are very capable debugging tools, there are additional options that can be used as well which are more lightweight. One of the latest tools to be released is the Web Development Helper. Nikhil Kothari (one of the key ASP.NET AJAX architects at Microsoft) wrote this excellent tool, which can perform many different tasks from simple debugging to viewing HTTP request and response messages. Web Development Helper can be downloaded at.
Web Development Helper can be used directly inside of Internet Explorer which makes it convenient to use. It's started by selecting Tools > Web Development Helper from the Internet Explorer menu. This will open the tool in the bottom portion of the browser, which is nice since you don't have to leave the browser to perform several tasks such as HTTP request and response message logging. Figure 4 shows what Web Development Helper looks like in action.
Figure 4: Web Development Helper (Click to view full-size image)
Web Development Helper isn't a tool you'll use to step through code line by line as with Visual Studio 2008. However, it can be used to view trace output, easily evaluate variables in a script or explore the data inside of a JSON object. It's also very useful for viewing data that is passed to and from an ASP.NET AJAX page and a server.
Once Web Development Helper is open in Internet Explorer, script debugging must be enabled by selecting Script > Enable Script Debugging from the Web Development Helper menu as shown earlier in Figure 4. This enables the tool to intercept errors that occur as a page is run. It also allows easy access to trace messages that are output in the page. To view trace information or execute script commands to test different functions within a page, select Script > Show Script Console from the Web Development Helper menu. This provides access to a command window and a simple immediate window.
Viewing Trace Messages and JSON Object Data
The immediate window can be used to execute script commands or even load or save scripts that are used to test different functions in a page. The command window displays trace or debug messages written out by the page being viewed. Listing 2 shows how to write a trace message using Internet Explorer's Debug.writeln function.
Listing 2. Writing out a client-side trace message using the Debug class.
function BuildPerson() {
    var person = {
        FirstName: $get("txtFirstName").value,
        LastName: $get("txtLastName").value,
        Address: {
            Street: $get("txtStreet").value,
            City: $get("txtCity").value,
            State: $get("txtState").value
        }
    };
    Debug.writeln("Person name: " + person.LastName);
    UpdatePerson(person);
}
If the LastName property contains a value of Doe, Web Development Helper will display the message "Person name: Doe" in the script console's command window (assuming that debugging has been enabled). Web Development Helper also adds a top-level debugService object into pages that can be used to write out trace information or view the content of JSON objects. Listing 3 shows an example of using the debugService class's trace function.
Listing 3. Using Web Development Helper's debugService class to write a trace message.
function BuildPerson() {
    var person = {
        FirstName: $get("txtFirstName").value,
        LastName: $get("txtLastName").value,
        Address: {
            Street: $get("txtStreet").value,
            City: $get("txtCity").value,
            State: $get("txtState").value
        }
    };
    if (window.debugService) {
        window.debugService.trace("Person name: " + person.LastName);
    }
    UpdatePerson(person);
}
A nice feature of the debugService class is that it will work even if debugging isn't enabled in Internet Explorer, making it easy to always access trace data when Web Development Helper is running. When the tool isn't being used to debug a page, trace statements are simply skipped, since the window.debugService check evaluates to false (the object is undefined when the tool isn't attached).
The debugService class also allows JSON object data to be viewed using Web Development Helper's inspector window. Listing 4 creates a simple JSON object containing person data. Once the object is created, a call is made to the debugService class's inspect function to allow the JSON object to be visually inspected.
Listing 4. Using the debugService.inspect function to view JSON object data.
function BuildPerson() {
    var person = {
        FirstName: $get("txtFirstName").value,
        LastName: $get("txtLastName").value,
        Address: {
            Street: $get("txtStreet").value,
            City: $get("txtCity").value,
            State: $get("txtState").value
        }
    };
    if (window.debugService) {
        window.debugService.inspect("Person Object", person);
    }
    UpdatePerson(person);
}
Calling the BuildPerson() function in the page or through the immediate window will result in the Object Inspector dialog window appearing as shown in Figure 5. Properties within the object can be changed dynamically by highlighting them, changing the value shown in the Value text box and then clicking the Update link. Using the Object Inspector makes it straightforward to view JSON object data and experiment with applying different values to properties.
Debugging Errors
In addition to allowing trace data and JSON objects to be displayed, Web Development Helper can also aid in debugging errors in a page. If an error is encountered, you will be prompted to continue to the next line of code or debug the script (see Figure 6). The Script Error dialog window shows the complete call stack as well as line numbers so you can easily identify where issues are within a script.
Figure 5: Using the Object Inspector window to view a JSON object. (Click to view full-size image)
Selecting the debug option allows you to execute script statements directly in Web Development Helper's immediate window to view the value of variables, write out JSON objects, plus more. If the same action that triggered the error is performed again and Visual Studio 2008 is available on the machine, you will be prompted to start a debug session so that you can step through the code line by line as discussed in the previous section.
Figure 6: Web Development Helper's Script Error Dialog (Click to view full-size image)
Inspecting Request and Response Messages
While debugging ASP.NET AJAX pages it is often useful to see request and response messages sent between a page and server. Viewing the content within messages allows you to see if the proper data is being passed as well as the size of the messages. Web Development Helper provides an excellent HTTP message logger feature that makes it easy to view data as raw text or in a more readable format.
To view ASP.NET AJAX request and response messages, the HTTP logger must be enabled by selecting HTTP > Enable HTTP Logging from the Web Development Helper menu. Once enabled, all messages sent from the current page can be viewed in the HTTP log viewer which can be accessed by selecting HTTP > Show HTTP Logs.
Although viewing the raw text sent in each request/response message is certainly useful (and an option in Web Development Helper), it is often easier to view message data in a more graphical format. Once HTTP logging has been enabled and messages have been logged, message data can be viewed by double-clicking on the message in the HTTP log viewer. Doing this allows you to view all headers associated with a message as well as the actual message content. Figure 7 shows an example of a request message and response message viewed in the HTTP Log Viewer window.
Figure 7: Using the HTTP Log Viewer to view request and response message data. (Click to view full-size image)
The HTTP Log Viewer automatically parses JSON objects and displays them using a tree view making it quick and easy to view the object's property data. When an UpdatePanel is being used in an ASP.NET AJAX page, the viewer breaks out each portion of the message into individual parts as shown in Figure 8. This is a great feature that makes it much easier to see and understand what is in the message as compared to viewing the raw message data.
Figure 8: An UpdatePanel response message viewed using the HTTP Log Viewer. (Click to view full-size image)
There are several other tools that can be used to view request and response messages in addition to Web Development Helper. Another good option is Fiddler which is available for free at. Although Fiddler will not be discussed here, it is also a good option when you need to thoroughly inspect message headers and data.
Debugging with Firefox and Firebug
While Internet Explorer is still the most widely used browser, other browsers such as Firefox have become quite popular and are being used more and more. As a result, you'll want to view and debug your ASP.NET AJAX pages in Firefox as well as Internet Explorer to ensure that your applications work properly. Although Firefox can't tie directly into Visual Studio 2008 for debugging, it has an extension called Firebug that can be used to debug pages. Firebug can be downloaded for free by going to.
Firebug provides a full-featured debugging environment that can be used to step through code line by line, access all scripts used within a page, view DOM structures, display CSS styles and even track events that occur in a page. Once installed, Firebug can be accessed by selecting Tools > Firebug > Open Firebug from the Firefox menu. Like Web Development Helper, Firebug is used directly in the browser although it can also be used as a stand-alone application.
Once Firebug is running, breakpoints can be set on any line of a JavaScript file whether the script is embedded in a page or not. To set a breakpoint, first load the appropriate page you'd like to debug in Firefox. Once the page is loaded, select the script to debug from Firebug's Scripts drop-down list. All scripts used by the page will be shown. A breakpoint is set by clicking in Firebug's grey tray area on the line where the breakpoint should go, much like you would do in Visual Studio 2008.
Once a breakpoint has been set in Firebug you can perform the action required to execute the script that needs to be debugged such as clicking a button or refreshing the browser to trigger the onLoad event. Execution will automatically stop on the line containing the breakpoint. Figure 9 shows an example of a breakpoint that has been triggered in Firebug.
Figure 9: Handling breakpoints in Firebug. (Click to view full-size image)
Once a breakpoint is hit you can step into, step over or step out of code using the arrow buttons. As you step through code, script variables are displayed in the right-hand portion of the debugger allowing you to see values and drill-down into objects. Firebug also includes a Call Stack drop-down list to view the script's execution steps that led up to the current line being debugged.
Firebug also includes a console window that can be used to test different script statements, evaluate variables and view trace output. It is accessed by clicking on the Console tab at the top of the Firebug window. The page being debugged can also be "inspected" to see its DOM structure and contents by clicking on the Inspect tab. As you mouse over the different DOM elements shown in the inspector window the appropriate portion of the page will be highlighted making it easy to see where the element is used in the page. Attribute values associated with a given element can be changed "live" to experiment with applying different widths, styles, etc. to an element. This is a nice feature that saves you from having to constantly switch between the source code editor and the Firefox browser to view how simple changes affect a page.
Figure 10 shows an example of using the DOM inspector to locate a textbox named txtCountry in the page. The Firebug inspector can also be used to view CSS styles used in a page as well as events that occur such as tracking mouse movements, button clicks, plus more.
Figure 10: Using Firebug's DOM inspector. (Click to view full-size image)
Firebug provides a light-weight way to quickly debug a page directly in Firefox as well as an excellent tool for inspecting different elements within the page.
Debugging Support in ASP.NET AJAX
The ASP.NET AJAX library includes many different classes that can be used to simplify the process of adding AJAX capabilities into a Webpage. You can use these classes to locate elements within a page and manipulate them, add new controls, call Web Services and even handle events. The ASP.NET AJAX library also contains classes that can be used to enhance the process of debugging pages. In this section you'll be introduced to the Sys.Debug class and see how it can be used in applications.
Using the Sys.Debug class
The Sys.Debug class (a JavaScript class located in the Sys namespace) can be used to perform several different functions including writing trace output, performing code assertions and forcing code to fail so that it can be debugged. It is used extensively in the ASP.NET AJAX library's debug files (installed at C:\Program Files\Microsoft ASP.NET\ASP.NET 2.0 AJAX Extensions\v1.0.61025\MicrosoftAjaxLibrary\System.Web.Extensions\1.0.61025.0 by default) to perform conditional tests (called assertions) that ensure parameters are passed properly to functions and that objects contain the expected data, and to write trace statements.
The Sys.Debug class exposes several different functions that can be used to handle tracing, code assertions or failures as shown in Table 1.
Table 1. Sys.Debug class functions.
Client-side tracing can be used in much the same way as the tracing functionality available in ASP.NET. It allows different messages to easily be seen without interrupting the flow of the application. Listing 5 shows an example of using the Sys.Debug.trace function to write to the trace log. This function simply takes the message that should be written out as a parameter.
Listing 5. Using the Sys.Debug.trace function.
function BuildPerson() {
    var address = new XmlForAsp.Address($get("txtStreet").value, $get("txtCity").value,
        $get("txtState").value, $get("txtZip").value);
    var person = new XmlForAsp.Person(null, $get("txtFirstName").value,
        $get("txtLastName").value, address);
    Sys.Debug.trace("Person's name: " + person.get_firstName() + " " + person.get_lastName());
    UpdatePerson(person);
}
If you execute the code shown in Listing 5 you won't see any trace output in the page. The only way to see it is to use a console window available in Visual Studio .NET, Web Development Helper or Firebug. If you do want to see the trace output in the page then you'll need to add a TextArea tag and give it an id of TraceConsole as shown next:
<textArea id="TraceConsole" rows="10" cols="50"></textArea>
Any Sys.Debug.trace statements in the page will be written to the TraceConsole TextArea.
In cases where you want to see the data contained within a JSON object you can use the Sys.Debug class's traceDump function. This function takes two parameters including the object that should be dumped to the trace console and a name that can be used to identify the object in the trace output. Listing 6 shows an example of using the traceDump function.
Listing 6. Using the Sys.Debug.traceDump function.
function UpdatePerson(person) {
    //Dump contents of the person object to the trace output
    Sys.Debug.traceDump(person, "Person Data");
    alert("Person updated! " + person.get_firstName() + " " + person.get_lastName());
}
Figure 11 shows the output from calling the Sys.Debug.traceDump function. Notice that in addition to writing out the Person object's data, it also writes out the Address sub-object's data.
In addition to tracing, the Sys.Debug class can also be used to perform code assertions. Assertions are used to test that specific conditions are met while an application is running. The debug versions of the ASP.NET AJAX library scripts contain several assert statements to test a variety of conditions.
Listing 7 shows an example of using the Sys.Debug.assert function to test a condition. The code tests whether or not the Address object is null before updating a Person object.
Figure 11: Output of the Sys.Debug.traceDump function. (Click to view full-size image)
Listing 7. Using the debug.assert function.
function UpdatePerson(person) {
    //Check if address is null
    Sys.Debug.assert(person.get_address() == null, "Address is null!", true);
    alert("Person updated! " + person.get_firstName() + " " + person.get_lastName());
}
Three parameters are passed including the condition to evaluate, the message to display if the assertion returns false and whether or not information about the caller should be displayed. In cases where an assertion fails, the message will be displayed as well as caller information if the third parameter was true. Figure 12 shows an example of the failure dialog that appears if the assertion shown in Listing 7 fails.
The final function to cover is Sys.Debug.fail. When you want to force code to fail on a particular line in a script you can add a Sys.Debug.fail call rather than the debugger statement typically used in JavaScript applications. The Sys.Debug.fail function accepts a single string parameter that represents the reason for the failure as shown next:
Sys.Debug.fail("My forced failure of script.");
Figure 12: A Sys.Debug.assert failure message. (Click to view full-size image)
When a Sys.Debug.fail statement is encountered while a script is executing, the value of the message parameter will be displayed in the console of a debug application such as Visual Studio 2008 and you'll be prompted to debug the application. One case where this can be quite useful is when you can't set a breakpoint with Visual Studio 2008 on an inline script but would like the code to stop on a particular line so you can inspect the value of variables.
Understanding the ScriptManager Control's ScriptMode Property
The ASP.NET AJAX library includes debug and release script versions that are installed at C:\Program Files\Microsoft ASP.NET\ASP.NET 2.0 AJAX Extensions\v1.0.61025\MicrosoftAjaxLibrary\System.Web.Extensions\1.0.61025.0 by default. The debug scripts are nicely formatted, easy to read and have several Sys.Debug.assert calls scattered throughout them while the release scripts have whitespace stripped out and use the Sys.Debug class sparingly to minimize their overall size.
The ScriptManager control added to ASP.NET AJAX pages reads the compilation element's debug attribute in web.config to determine which versions of library scripts to load. However, you can control if debug or release scripts are loaded (library scripts or your own custom scripts) by changing the ScriptMode property. ScriptMode accepts a ScriptMode enumeration whose members include Auto, Debug, Release and Inherit.
ScriptMode defaults to a value of Auto which means that the ScriptManager will check the debug attribute in web.config. When debug is false the ScriptManager will load the release version of ASP.NET AJAX library scripts. When debug is true the debug version of the scripts will be loaded. Changing the ScriptMode property to Release or Debug will force the ScriptManager to load the appropriate scripts regardless of what value the debug attribute has in web.config. Listing 8 shows an example of using the ScriptManager control to load debug scripts from the ASP.NET AJAX library.
Listing 8. Loading debug scripts using the ScriptManager.
<asp:ScriptManager runat="server" ScriptMode="Debug"></asp:ScriptManager>
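For reference, the debug switch that ScriptMode="Auto" consults lives on the compilation element in the site's web.config; a minimal fragment (unrelated configuration omitted):

```xml
<configuration>
  <system.web>
    <!-- debug="true" causes ScriptMode="Auto" to load the debug script versions -->
    <compilation debug="true" />
  </system.web>
</configuration>
```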
You can also load different versions (debug or release) of your own custom scripts by using the ScriptManager's Scripts property along with the ScriptReference component as shown in Listing 9.
Listing 9. Loading custom scripts using the ScriptManager.
<asp:ScriptManager runat="server">
    <Scripts>
        <asp:ScriptReference Path="Person.js" />
    </Scripts>
</asp:ScriptManager>
Note: If you're loading custom scripts using the ScriptReference component you must notify the ScriptManager when the script has finished loading by adding the following code at the bottom of the script:
if (typeof(Sys) !== 'undefined') Sys.Application.notifyScriptLoaded();
The code shown in Listing 9 tells the ScriptManager to look for a debug version of the Person script so it will automatically look for Person.debug.js instead of Person.js. If the Person.debug.js file is not found an error will be raised.
In cases where you want a debug or release version of a custom script to be loaded based upon the value of the ScriptMode property set on the ScriptManager control, you can set the ScriptReference control's ScriptMode property to Inherit. This will cause the proper version of the custom script to be loaded based upon the ScriptManager's ScriptMode property as shown in Listing 10. Because the ScriptMode property of the ScriptManager control is set to Debug, the Person.debug.js script will be loaded and used in the page.
Listing 10. Inheriting the ScriptMode from the ScriptManager for custom scripts.
<asp:ScriptManager runat="server" ScriptMode="Debug">
    <Scripts>
        <asp:ScriptReference Path="Person.js" ScriptMode="Inherit" />
    </Scripts>
</asp:ScriptManager>
By using the ScriptMode property appropriately you can more easily debug applications and simplify the overall process. The ASP.NET AJAX library's release scripts are rather difficult to step through and read since code formatting has been removed while the debug scripts are formatted specifically for debugging purposes.
Conclusion
Microsoft's ASP.NET AJAX technology provides a solid foundation for building AJAX-enabled applications that can enhance the end user's overall experience. However, as with any programming technology, bugs and other application issues will certainly arise. Knowing about the different debugging options available can save a lot of time and result in a more stable product.
In this article you've been introduced to several different techniques for debugging ASP.NET AJAX pages including Internet Explorer with Visual Studio 2008, Web Development Helper and Firebug. These tools can simplify the overall debugging process since you can access variable data, walk through code line by line and view trace statements. In addition to the different debugging tools discussed, you also saw how the ASP.NET AJAX library's Sys.Debug class can be used in an application and how the ScriptManager class can be used to load debug or release versions of scripts. | http://www.asp.net/web-forms/tutorials/aspnet-ajax/understanding-asp-net-ajax-debugging-capabilities | CC-MAIN-2014-15 | refinedweb | 5,634 | 55.64 |
Bug #13270
IRB hangs when printing "\e]"
Description
Steps to reproduce:
irb
print "\e]"
- Or:
puts "\e["
- try CMD+C, nothing happens
- try CMD+D, prints "30m"
Expected behavior:
- just prints "30m" (that's what pry does)
Ruby versions tried:
- ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-darwin16]
- ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-darwin16]
History
Updated by shevegen (Robert A. Heiler) over 2 years ago
Is this darwin-specific? It appears to work fine on my linux system here.
ruby 2.4.0p0 (2016-12-24 revision 57164) [i686-linux]
Updated by nobu (Nobuyoshi Nakada) over 2 years ago
- Status changed from Open to Feedback
I can't reproduce it on darwin15.
Does it happen without irb, just ruby -e 'print "\e]"'?
If only with irb, does it with irb -f?
Updated by snood1205 (Eli Sadoff) over 2 years ago
- Status changed from Feedback to Open
I can reproduce it on Darwin, so I'm switching it back to open.
My ruby -v is ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-darwin14]
Also it occurs with irb -f but not with ruby -e
Updated by snood1205 (Eli Sadoff) over 2 years ago
Even more information, this is reproducible on ruby -v ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-linux], but instead of printing out "30m" after CMD+D it prints out nothing. This seems to be a bug within IRB. Interesting, this behavior seems to be OS defined as well. The following program in C
#include <stdio.h>

int main() {
    puts("\e]");
}
has different outputs based on the OS. On macOS Sierra, it outputs 30m with gcc, cc, and clang, whereas on Fedora 22 it outputs nothing with both gcc and cc. I can't exactly figure out what is wrong, but it is quite odd.
Updated by nobu (Nobuyoshi Nakada) over 2 years ago
- Status changed from Open to Feedback
What terminal emulator are you using, the standard Terminal.app?
Updated by domaio (Dorian M) over 2 years ago
Nobuyoshi Nakada wrote:
What terminal emulator are you using, the standard Terminal.app?
I'm using iTerm 3.0.14.
And on Terminal.app (v2.7.1 (387)) I get:
>> puts "\e]"
il
>> puts "\e["
nil
(notice the first "n" missing)
Also, with pry (0.10.4), on iTerm:
> puts "\e["
nil
> puts "\e]"
1;36mnil
>
I'm on MacOS Sierra 10.12.1 (16B2555)
Updated by znz (Kazuhiro NISHIYAMA) over 2 years ago
I can reproduce print "\e]" and Ctrl+C, nothing happens. But I can't reproduce using puts "\e[". And I can't reproduce Ctrl+D printing "30m"; Ctrl+D simply causes an exit.
I think iTerm eats output from "\e]" (OSC) to "\a" (BEL) (or "\e" or something else).
pry outputs some "\e"s after evaluation, then it seems to be without hang.
I typed print "\e]", Enter, Ctrl+C, puts "\a" (can't see), Enter. Then it outputs => nil and prompt.
% rbenv exec irb -r irb/completion --simple-prompt >> print "\e]" => nil >>
Updated by nobu (Nobuyoshi Nakada) over 2 years ago
- Status changed from Feedback to Rejected
It is not ruby specific, and (probably) expected behavior of some terminal emulators.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/13270 | CC-MAIN-2019-35 | refinedweb | 542 | 73.98 |
$ cnpm install apptension-tools
What is apptension-tools?
Apptension-tools contains a set of gulp tasks that you can use in your project. Here you will find each of those briefly explained. Most of them behave a bit differently depending on environment in which they are run.
Launches browserSync server.
It will proxy all the calls to webpack dev server.
It serves files from dist directory. Useful to check your build.
Deletes .tmp and dist directories together with their content.
Compiles handlebars template to produce html file. It supports multiple entry points.
Injects webpack dev server script.
Removes development scripts.
Not applicable.
Copies whole backend directory to dist.
Not applicable.
Copies all files that contain *.production.* in their name from current working directory to dist. It removes the production part.
Copies app/public directory to output dir.
Output directory is
.tmp.
Output directory is
dist.
It lints all script files in project based on .eslintrc file.
Copies files from images directory to output dir.
Output directory is
.tmp.
Output directory is
dist. Also applies imagemin.
Launches karma server in test environment.
Not applicable.
Appends a random hash to styles and scripts filenames in order to bust the browser's cache on subsequent deploys. It generates a rev manifest file that is later used by the CompileIndex task. Outputs files to dist directory.
Not applicable.
Replaces all paths to assets in .html, .js and .css files with those produced by rev task.
Compiles sass files into css using node-sass. It is not a part of the webpack build process in order to support sprites generation. Files are written to .tmp directory.
Additionally css is minified.
Generates sprites from images located in app/images/sprites directory. It expects that retina images are present and are suffixed with -2x.png. It also generates a sass file that contains variables and mixins necessary to use images included in the sprite.
Example of usage:
.icon-example-filename {
    @include retina-sprite($example-group);
}
In the above example the task expects two matching files to be present in the sprites dir (following the naming convention above, e.g. example.png and example-2x.png), where the -2x variant is exactly twice as big. In case it is not, the task will fail.
By default it takes app/src/main.js as an entry point and produces a bundle out of it. It can also spawn webpack dev server when the watch argument passed to the factory function is true.
Primarily you should use npm as a dependency manager. You can require bower components by using a relative path. It is a predefined alias. You can put libraries downloaded from other sources into the vendor_modules directory residing in root. Those can later be used with:
import vendorExample from 'vendor_modules/vendorExample';
All css files required in javascript will be extracted to vendor-styles.css, which you can add to your index file template. Remember to use the assetPath helper.
Additionally includes source maps. By default webpack's devtool is set to eval as it is the fastest option.
Additionally uglifies the script.
Webpack task uses DefinePlugin to set __DEBUG__ variable. It allows you to include specific blocks of code only in development bundle.
if(__DEBUG__) {
  // this will be included in development bundle
}
if(__DEBUG__) {
  // this will not be included in production bundle
}
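Conceptually, DefinePlugin performs a compile-time textual substitution of __DEBUG__, so in the production bundle the guarded branch becomes dead code that the uglifier strips out. A rough sketch of that substitution step (illustrative only — not the plugin's actual implementation):

```javascript
// Hypothetical line of application source, as the developer wrote it
var source = "if (__DEBUG__) { setupDevTooling(); }";

// Production build: the identifier is replaced with a literal before minification
var defined = source.replace(/__DEBUG__/g, 'false');

console.log(defined); // if (false) { setupDevTooling(); }
// The minifier can then drop the `if (false)` branch entirely.
```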
Not applicable.
Compresses dist directory to dist.zip.
Tasks can be configured through Gulpfile.
var tasks = require('apptension-tools/gulp')({
  // insert config options here
});
type:
number
Default: '8000'
Port of webpack dev server.
type: 'string'
Default: '0.0.0.0'
Domain of webpack dev server.
type: 'Object'
webpack configuration object. Check webpack's documentation for the complete options list.
type: 'Object'
webpack-dev-server configuration object. Check webpack's documentation for the complete options list.
type: 'Object'
node-sass configuration object. Check github page for complete option's list.
gulp-karma configuration object. Check the GitHub page for the complete options list.
An async GeoJSON client library for GeoNet NZ Quakes feed.
Project description
python-aio-geojson-geonetnz-quakes
This library provides convenient async access to the GeoNet NZ Quakes feed.
Installation
pip install aio-geojson-geonetnz-quakes
Usage
See below for examples of how this library can be used. After instantiating a particular class - feed or feed manager - and supplying the required parameters, you can call update to retrieve the feed data. The return value will be a tuple of a status code and the actual data in the form of a list of feed entries specific to the selected feed.
Status Codes
- OK: Update went fine and data was retrieved. The library may still return empty data, for example because no entries fulfilled the filter criteria.
- OK_NO_DATA: Update went fine but no data was retrieved, for example because the server indicated that there was not update since the last request.
- ERROR: Something went wrong during the update.
Parameters
Supported Filters
Example
import asyncio
from aiohttp import ClientSession
from aio_geojson_geonetnz_quakes import GeonetnzQuakesFeed

async def main() -> None:
    async with ClientSession() as websession:
        # Home Coordinates: Latitude: -41.2, Longitude: 174.7
        # MMI: 2
        # Filter radius: 200 km
        # Filter minimum magnitude: 2.5
        feed = GeonetnzQuakesFeed(websession,
                                  (-41.2, 174.7),
                                  mmi=2,
                                  filter_radius=200,
                                  filter_minimum_magnitude=2.5)
        status, entries = await feed.update()
        print(status)
        print(entries)

asyncio.get_event_loop().run_until_complete(main())
Feed entry properties
Each feed entry is populated with the following properties:
Feed Manager
The Feed Manager helps manage feed updates over time, by notifying the consumer of the feed about new feed entries, updates and removed entries compared to the last feed update.
- If the current feed update is the first one, then all feed entries will be reported as new. The feed manager will keep track of all feed entries' external IDs that it has successfully processed.
- If the current feed update is not the first one, then the feed manager will produce three sets:
- Feed entries that were not in the previous feed update but are in the current feed update will be reported as new.
- Feed entries that were in the previous feed update and are still in the current feed update will be reported as to be updated.
- Feed entries that were in the previous feed update but are not in the current feed update will be reported to be removed.
- If the current update fails, then all feed entries processed in the previous feed update will be reported to be removed.
After a successful update from the feed, the feed manager provides two different dates:
last_updatewill be the timestamp of the last update from the feed irrespective of whether it was successful or not.
last_update_successfulwill be the timestamp of the last successful update from the feed. This date may be useful if the consumer of this library wants to treat intermittent errors from feed updates differently.
last_timestamp(optional, depends on the feed data) will be the latest timestamp extracted from the feed data. This requires that the underlying feed data actually contains a suitable date. This date may be useful if the consumer of this library wants to process feed entries differently if they haven't actually been updated.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/aio-geojson-geonetnz-quakes/ | CC-MAIN-2021-49 | refinedweb | 560 | 53.41 |
Introduction to Functional Programming
Content
- Recursion
- High order function
- Closure & lexical scope
- Partial application & currying
- Delay evaluation
Intro: Put an elephant into a fridge
Imperative paradigm:
print 'open the fridge' print 'put elephant into the fridge' print 'close the fridge'
Put an elephant into a fridge
OO paradigm:
class Fridge: def open(self): print 'open the fridge' def put(self, something): print 'put %s into the fridge' % something def close(self): print 'close the fridge' fridge = Fridge() fridge.open() fridge.put('elephant') fridge.close()
Put an elephant into a fridge
Functional paradigm:
def open(something): print 'open the %s' % something return something def put(object, container): print 'put %s into the %s' % (object, container) return container def close(something): print 'close the %s' % something return something
close(put('elephant', open('fridge')))
1. Recursion
In order to understand recursion,
you must first understand recursion.
Why recursion?
Wikipedia:
In computer science, functional programming is a programming paradigm, a style of building the structure and elements of computer programs, that:
- Treats computation as the evaluation of mathematical functions
- Avoid state and mutable data
Treats computation as the evaluation of mathematical functions
Functions: take in values and return values
Statements do something:
>>>i = i + 1 >>>
Use statements? No!
Expression produce value:
>>>i + 1 43 >>>
Use expression? Yes!
Avoid state and mutable data
Loops:
def fac(n): n = 1 #mutation for i in range(n): #states n = n * (i + 1) #mutation return n
Recursion:
def fac(n): if n == 0: return 1 else: return n * fac(n - 1)
So how to write recursion?
How to compute exponential of a given number?
b^n = b * b * b ... * b (b occurs n times)
So:
def exp(b, n): result = 1 for _ in range(n): result *= b return result
So how to write recursion?
How to compute exponential of a given number?
b^n = b * b^(n - 1)
b^0 = 1
So:
def exp(b, n): if n: return b * exp(b, n - 1) else: return 1
So how to write recursion?
How to compute exponential more efficiently?
b^n = b^(n/2) * b^(n/2) (if n is even) b^n = b * b^(n-1) (if n is odd) b^0 = 1
So
def exp(b, n): if n%2: return b * exp(b, n - 1) elif n: return exp(b, n/2) * exp(b, n/2) else: return 1
Another example
Fibonacci sequence
def fib(n): if n < 2: return n else: return fib(n - 1) + fib(n - 2)
However, this solution has a big problem
If we trace call chain of recursive fib(n)
fib(4) called [#1] --fib(3) called [#2] ----fib(2) called [#3] ------fib(1) called [#4] ------fib(1) returned 1 [#4] ------fib(0) called [#5] ------fib(0) returned 0 [#5] ----fib(2) returned 1 [#3] ----fib(1) called [#6] ----fib(1) returned 1 [#6] --fib(3) returned 2 [#2] --fib(2) called [#7] ----fib(1) called [#8] ----fib(1) returned 1 [#8] ----fib(0) called [#9] ----fib(0) returned 0 [#9] --fib(2) returned 1 [#7] fib(4) returned 3 [#1]
fib(4) called fib() 9 times
fib(5) called fib() 15 times
fib(6) called fib() 25 times
...
Tail Recursion
Convert recursive process to iterative process.
def fib(n): def fib_iter(a, b, n): if n: return fib_iter(b, a + b, n - 1) else: return a return fib_iter(0, 1, n)
- track state by function arguments
- reduce length of call stack and improve efficiency
(not supported in python yet)
2. First class
function
First class function
First class function means functions are treated as first class element by programming language. Like objects in OOP.
Privileges of first class elements includes
- They may be named by variables.
- They may be passed as arguments to functions.
- They may be returned as the results of procedures.
- They may be included in data structures.
Functions can be named by variables
- Anonymous function:
>>> (lambda x: x * x)(2) 4
- Function s can be assigned to variables:
>>> square = lambda x: x * x >>> square(2) 4
- Or make alias easily:
>>> sq = square >>> sq(2) 4
Function passed as arguments
- The well known build in functions
- map(), filter(), reduce()
- sorted()
>>> name = ['Leonardo DiCaprio', 'Johnny Depp', 'Tom Cruise'] >>> sorted(name) #sort by full name ['Johnny Depp', 'Leonardo DiCaprio', 'Tom Cruise'] >>> sorted(name, key = lambda x: x.split(' ')[1]) #sort by last name ['Tom Cruise', 'Johnny Depp', 'Leonardo DiCaprio'] >>> sorted(name, key = lambda x: len(x)) #sort by name length ['Tom Cruise', 'Johnny Depp', 'Leonardo DiCaprio']
- get_cursors_if
get_cursors_if(source, satisfy_func, transform_func)
Function as return value
- A tiny example:
>>> def addn(n): def add(m): return m + n return add >>> add2 = addn(2) >>> add2(3) 5
- Memoization
Memoization
def fib(n): if n < 2: return n else: return fib(n - 1) + fib(n - 2)
Recall the recursive version of fib(),it is in very low efficient because of redundant computation. What if it can store the computed result?
Memoization
def memoize(f): cache = {} def g(x): if x not in cache: cache[x] = f(x) return cache[x] return g fib = memoize(fib)
>>> fib(5) fib(5) called [#1] fib(4) called [#2] fib(3) called [#3] fib(2) called [#4] fib(1) called [#5] fib(1) returned 1 [#5] fib(0) called [#6] fib(0) returned 0 [#6] fib(2) returned 1 [#4] fib(3) returned 2 [#3] fib(4) returned 3 [#2] fib(5) returned 5 [#1]
Trace
High order function that can trace function call.
__report_indent = [0] def trace(fn): def wrap(*params,**kwargs): call = wrap.callcount = wrap.callcount + 1 indent = ' ' * __report_indent[0] fc = "%s(%s)" % (fn.__name__, ', '.join( [a.__repr__() for a in params] + ["%s = %s" % (a, repr(b)) for a,b in kwargs.items()] )) print "%s%s called [#%s]"\ % (indent, fc, call) __report_indent[0] += 1 ret = fn(*params,**kwargs) __report_indent[0] -= 1 print "%s%s returned %s [#%s]"\ % (indent, fc, repr(ret), call) return ret wrap.callcount = 0 return wrap
Decorator
Special syntax in Python:
fib = memorize(fib)
can be written as:
@memorize def fib(n): ...
3. Closure
and lexical scope
Closure
- A closure, like an object instance, is a way of carrying around a bundle of data and functionality, wrapped up together.
- Lexical scope is a nature way to implement closure.
Closure
def addx(x): def func(y): return x + y return func foo = addx(5) x = 3 >>>foo(10) 15 # in lexical scope, 5 is bind to x in foo
In lexical scope, the body of a function is evaluated in the environment where the function is defined, not the environment where the function is called. By working in this way, it binds data with functions.
Alternative to closure
#Argument passing
def addx(x): def func(a, b = x): return a + b return func >>> a = addx(2) >>> a(3) 5
#use object
class addx(): def __init__(self, x): self.x = x def __call__(self, y): return self.x + y >>> a = addx(2) >>> a(3) 5
Anything done with closure can be done without closure.
But closure provides a more clear and simple solution.
Alternative to closure
The venerable master was walking with his student. The student said "Master, I have heard that objects are a very good thing - is this true?" Master looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
On his next walk with master, student said "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Master responded by hitting student with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, the student became enlightened.
Special notice for python
Reassign closure variable of immutable type inside function will lead to unexpected result.
def counter(): count = 0 def func(): count += 1 return count return func >>> count = counter() >>> count() Traceback (most recent call last): File "
", line 1, in count() File "C:/Users/JC/Desktop/a.py", line 152, in func count += 1 UnboundLocalError: local variable 'count' referenced before assignment
Special notice for python
A common solution is to replace immutable objectwith mutable object
def counter(): count = [0] def func(): count[0] += 1 return count[0] return func >>> count = counter() >>> count() 1 >>> count() 2
4. Partial application and
Currying
Partial application
Partial function application is about fixing some arguments of a given function to yield another function with fewer arguments.
get_cursors_if
get_cursors_if(source, satisfy_func, transform_func)
get_cursor_names_if = partial(get_cursors_if,
transform_func = lambda c: c.displayname)
get_class_cursor = partial(get_cursors_if,
satisfy_func = lambda c: c.kind == CursorKind.CXX_CLASS,
transform_func = lambda c: c)
Partial application
Consider the following Ackermann function which computes hyperoperations of 2.
def Ackermann(x, y): if not y: return 0 elif not x: return 2 * y elif y == 1: return 2 else: return A(x - 1, A(x, y - 1))
Hyperoperations
Partial application
Ackermann(x, y) computes y times hyper(x - 2) of base 2
>>> Ackermann(0, 4) 8 #hyper 2 multipication 2 * 4 >>> Ackermann(1, 4) 16 #hyper 3 exponentiation 2 ^ 4
>>> Ackermann(2, 3) 16 #hyper 4 tetration 2 ^^ 3 >>> Ackermann(2, 4) 65536 # 2 ^^ 4
Partial application
- With partial application, we can generate functions to compute hyper n
def partial_Ack(n): def func(b): return Ackermann(n, b) return func
multiply2 = partial_Ack(0) exp2 = partial_Ack(1) teration2 = partial_Ack(2)
- partial() function provided by functools provides a move convenient way for partial application
from functools import partial multiply = partial(Ackermann, 0)
Partial application
Why partial application?
- Convenience
- Bind data with function to build more specific function
- Encapsulation and detail hidding
Currying
Transforming a function that takes multiple arguments (or a tuple of arguments) in such a way that it can be called as a chain of functions
def func(a, b, c): return a + b + c def curried_func(a): return lambda b: lambda c: a + b + c >>> print func('a', 'b', 'c') abc >>> print curried_func('a')('b')('c') abc
Currying
- Currying is close related to partial application
- Currying can transfer multiple argument function to a chain of single argument function which is very useful in old days when functions can only take in one argument.
- Nowadays, currying is generally considered as language support for partial application. It can be replaced by partial application, thus it is seldom used in languages that doesn't support currying.
- A Haskell example. Haskell has language syntax that support currying
computation a b c d = (a + b^2+ c^3 + d^4)
fillOne = computation 1 fillTwo = fillOne 2 fillThree = fillTwo 3 answer = fillThree 5 -- Result: answer == 657
5. Delay evaluation
Implementing my_if()
Python do not support ternary operator like ( ? : ) in C++, what if we want to write if statement in one line?
one solution: use "and" and "or"
#a ? b : c (a and b) or c
an other solution: implement an my_if function
#a ? b : c my_if(a, b, c)
Implementing my_if()
def my_if(a, b, c):
if a:
return b
else:
return c
This solution looks just like syntax sugar of if statement.
However, it does not work.
Implementing my_if()
Consider the following situation:
def bad_exp(): while True: pass
The If statement works well:
if True: good_exp() else: bad_exp()
But my_if() will run into infinity loop:
my_if(True, good_exp(), bad_exp())
Because function arguments will be evaluated first.
Delay evaluation
In order to delay evaluation of statements, we can wrap statements into functions. Functions are evaluated to themselves and let function call evaluated to statements.
Modified my_if():
def my_if(a, b, c): if a: return b() else: return c()
To call my_if():
my_if(a, lambda : b, lambda : c)
Delay evaluation
Delay the evaluation of parameter by putting it into a wrapper function. Then call the function to get the value of parameter. The wrapping process is called ‘thunk’.
However, if use delay evaluation, parameter will be computed every time when it is used.
Stream
If we need a sequence but we don't know exactly how long is needed. We can delay the evaluation of sequence, and evaluate it one by one when needed. This kind of object is called stream
def fib(): def stream(a, b): return (a, lambda : stream(a + b, a)) return stream(1, 0)
above if a stream than generates Fibonacci numbers. Every time when fib() is called, it generate a tuple of fib number and delayed fib() for next element.
Stream
To get first element of stream:
>>> fib()[0] 1
To get the second one:
>>> fib()[1]()[0] 1
To get the nth:
def get_nth(stream, n): val, next_str = stream() if n != 1: return get_nth(next_str, n - 1) else: return val
>>> get_nth(fib, 10) 55
Stream example: twin prime
Twin prime means two prime that one prime is different with the other by two, for example (3, 5), (17, 19). There is said to have infinity number of twin primes.
def isprime(n): for x in xrange(2, int(n**0.5)+1): if n % x == 0: return False return True def prime(): def stream(n): if isprime(n): return (n, lambda : stream(n + 1)) else: return stream(n + 1) return stream(2)
Stream example: twin prime
def twin_prime(): def stream(n): if isprime(n) and isprime(n + 2): return ((n, n + 2), lambda : stream(n + 1)) else: return stream(n + 1) return stream(2) >>> for i in range(1, 5): . . . print get_nth(twin_prime, i) (3, 5) (5, 7) (11, 13) (17, 19)
Generator
Generator object provided by python is also very suitable for generating sequence of infinity length.
def gen_prime(): n = 2 def gen(n): if isprime(n): return n else: return gen(n + 1) while True: n = gen(n) yield n n += 1
Generator
Generation version of twin_prime:
def gen_twin_prime(): prime = gen_prime() a = prime.next() def gen(a): b = prime.next() if b - a == 2: return (a, b) else: return gen(b) while True: tmp = gen(a) a = tmp[1] yield tmp
Ideas of stream and generator are almost the same.
Generator has more language support.
PF and OOP
Let's go back to the fridge example.
Suppose we want to put elephant into the zoo.
OOP:
class Zoo: def open(self): ... def put(self, object): ... def close(self): ...
PF and OOP
Let's go back to the fridge example.
Suppose we want to put elephant into the zoo.
FP:
def open(something): if something == 'fridge': ... elif something == 'zoo': ... def put if...
def close
if...
PF and OOP
Now, we want to add clean() method for fridge and zoo
OOP:
class Fridge: ... def clean(self): print 'clean the fridge' class Zoo: ... def clean(self): print 'clean the zoo'
PF and OOP
Now, we want to add clean() method for fridge and zoo
FP:
def clean(something): if something == 'fridge': print 'clean the fridge' elif something == 'zoo': print 'clean the zoo' return something
PF and OOP
Intro to FP
By Jingchuan Chen
Intro to FP | https://slides.com/jingchuanchen/intro-to-fp | CC-MAIN-2021-31 | refinedweb | 2,458 | 56.89 |
UnityScript versus JavaScript
Latest revision as of 20:49, 21 November 2018
Note: This page attempts to explain the differences between JavaScript (ECMAScript) and UnityScript as succinctly and clearly as possible. If you have any suggestions, feel free to add them to the author's talk page.
[edit] Overview
[edit] Terminology
It is not uncommon for Unity developers and even members of Unity Technologies to refer to Unity's JavaScript like language as simply "JavaScript", as if it was equivalent or interchangeable with what most people know of as JavaScript on the web. However, the two are actually very different languages. Although they do resemble each other syntactically they have very different semantics. While "JavaScript" is merely a generic name and could refer to any one of many implementations of the ECMAScript specification, Unity's "JavaScript" language doesn't even come close to conforming to that specification — nor does it try to. It's a proprietary language and it doesn't actually follow any concrete specification of standard JavaScript and is modified at will by the engine developers.
Because of this, the vast majority of JavaScript libraries you find will not work by default in Unity. Unity's "JavaScript" is most similar to Microsoft's JScript.NET, although it is not quite identical. Thus many developers prefer to call the language Unity uses "UnityScript" instead, to help differentiate it. Some people consider this to be "just semantics", but when people call both "JavaScript" it becomes a lot harder to search the nternet for solutions and help related to Unity. It's also quite a lot of trouble to continuously specify whether one is referring to "real" JavaScript or Unity's version. So it's best to just stick with "JavaScript" for real JavaScript and "UnityScript" for Unity's language.
[edit]."
[edit].
[edit] Syntax
[edit] (pretty much after everything):
- After
return,
continue, or
breakstatements.
- After expression statements.
transform.Translate(0, 0, 5);
- After variable declarations.
- After variable assignment.
- After a bodiless method declaration (such as in an interface).
- Between the parameters in a
forloop.
[edit] One variable declaration at a time
JavaScript supports multiple variable declarations in one
var statement.
var x = 3, y = 4;
UnityScript does not.
[edit]
[edit] No Global Variables
Every top-level variable in JavaScript is global. Additionally, any variable declaration not preceded by the
var statement is automatically scoped to be global. This is not the case in UnityScript; there are not really any global variables in UnityScript per sé.
[edit].
[edit]."
[edit] No Bling
Dollar signs ($) are not allowed in UnityScript identifiers as they are in JS identifiers. (In JS, the symbol is often used for c-style namespacing or as the name of a do-everything function.)
var lib$cosine = 3; // ERROR! in UnityScript
[edit] No
with statement
There is no
with statement in UnityScript. This is probably for the best, as JavaScript's
with statement causes the whole language to be slower, regardless of whether the statement is used or not. It is also considered harmful.
[edit] UnityScript has .NET's OOP features
UnityScript supports classes, as well as "protection levels" (public, private, protected) and "static" keyword options. Since it also supports explicit typing, it also has support for "generics" (runtime type enforcement), which JavaScript has no notion of.
[edit] No
delete in UnityScript
JavaScript allows a way for you to remove declared variables from the namespace. UnityScript doesn't.
[edit].
[edit]". | https://wiki.unity3d.com/index.php?title=UnityScript_versus_JavaScript&diff=cur&oldid=16607&printable=yes | CC-MAIN-2020-34 | refinedweb | 569 | 55.64 |
/* ** (c) COPYRIGHT MIT 1995. ** Please first read the full copyright statement in the file COPYRIGH. */
The SSL Reader Stream is an input stream which knows how to read from a SSL socket layer, for example provided by the OpenSSL Library. It is a libwww transport and may be registered using the Transport Manager. The application can initialize this stream together with the HTSSLWriter stream, for example. This module requires a SSL library in order to link/compile.
This module is implemented by HTSSLReader.c, and it is a part of the W3C Sample Code Library.
The module is contributed by Olga Antropova
#ifndef HTSSLREADER_H #define HTSSLSSLReader_new;
#endif /* HTSSLREADER_H */ | http://www.w3.org/Library/src/SSL/HTSSLReader.html | CC-MAIN-2014-15 | refinedweb | 108 | 67.96 |
dynamic dns
Hi guys, Running openstack havana in a multi node testing setup, instances are getting created and allocated ip address fine, and host names are being assigned to the nodes based on the instance name given in horizon when creating the instance. example of hostname on vm:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fa:16:3e:ae:00:4f brd ff:ff:ff:ff:ff:ff inet 192.168.0.11/24 brd 172.16.214.255 scope global eth0 inet6 fe80::f816:3eff:feae:4f/64 scope link tentative dadfailed valid_lft forever preferred_lft forever [root@vm-5 ~]# hostname vm-5
However I would like to be able to look up these host names dynamically without having to configure dnsmasq manually. I notice in my /var/lib/neutron/dhcp/<network-namespace-id>/host file the host names are listed but they are different to the ones set on the vm:
<mac>,host-192-168-0-5.testvm,192.168.0.5 <mac>,host-192-168-0-6.testvm,192.168.0.6 <mac>,host-192-168-0-7.testvm,192.168.0.7 <mac>,host-192-168-0-8.testvm,192.168.0.8 <mac>,host-192-168-0-11.testvm,192.168.0.11
this network name space has a dnsmasq instance running in it at ip 192.168.0.5 and i can nslookup the host names based on the ipaddress by querying it
nslookup 192.168.0.11 192.168.0.5 Server: 192.168.0.5 Address: 192.168.0.5#53 11.0.168.192.in-addr.arpa name = host-192-168-0-11.testvm.my.net.
Is there any way I can get dnsmasq to set the host name given to the vm in its host file? If I could do this then I could just point a forwarding dns server at 192.168.0.5 address and all vms would be able to be looked up by there actual host name. | https://ask.openstack.org/en/question/27443/dynamic-dns/ | CC-MAIN-2021-04 | refinedweb | 339 | 62.68 |
kldiv_loss¶
paddle.fluid.layers.
kldiv_loss(x, target, reduction='mean', name=None)[source]
This operator calculates the Kullback-Leibler divergence loss between Input(X) and Input(Target). Notes that Input(X) is the log-probability and Input(Target) is the probability.
KL divergence loss is calculated as follows:
$$l(x, y) = y * (log(y) - x)$$
While \(x\) is Input(X) and \(y\) is Input(Target).
While
reductionis
none, output loss is in the same shape as Input(X), loss in each point is calculated seperately and no reduction is applied.
While
reductionis
mean, output loss is in shape of [1] and loss value is the mean value of all losses.
While
reductionis
sum, output loss is in shape of [1] and loss value is the sum value of all losses.
While
reductionis
batchmean, output loss is in shape of [1] and loss value is the sum value of all losses divided by batch size.
- Parameters
x (Variable) – The input tensor of KL divergence loss operator. This is a tensor with shape of [N, *], where N is the batch size, * means any number of additional dimensions. The data type is float32 or flaot64
target (Variable) – The tensor of KL divergence loss operator. This is a tensor with shape of Input(X). The data type is same as Input(X)
reduction (Variable) – The reduction type to apply to the output, available types are ‘none’ | ‘batchmean’ | ‘mean’ | ‘sum’, ‘none’ for no reduction, ‘batchmean’ for the sum of output divided by batch size, ‘mean’ for the average value of all output, ‘sum’ for the sum of the output
name (str, optional) – For detailed information, please refer to Name. Usually name is no need to set and None by default.
- Returns
The KL divergence loss. The data type is same as input tensor
- Return type
Variable(Tensor)
Examples
import paddle.fluid as fluid x = fluid.data(name='x', shape=[None,4,2,2], dtype='float32') target = fluid.layers.data(name='target', shape=[4,2,2], dtype='float32') loss = fluid.layers.kldiv_loss(x=x, target=target, reduction='batchmean') | https://www.paddlepaddle.org.cn/documentation/docs/en/api/layers/kldiv_loss.html | CC-MAIN-2021-04 | refinedweb | 341 | 55.54 |
"Serge E. Hallyn" <serge@hallyn.com> writes:> Quoting Eric W. Biederman (ebiederm@xmission.com):>> >>. >> But given that namespaces are making it upstream, what else is to be> gained from the bsdail module? What exactly are you looking for?Good question. I keep tripping over the LSM hooks, and I have thedistinct impression that part of the current contention and lack ofagreement is simply the way things are current factored. So I'mputting for a constructive suggestion that has the possibility ofgoing somewhere.> 1. are you looking to cover all the corner cases - i.e. prevent killing> a process in another namespace through F_SETOWN or mqueue, etc?I'm looking towards this yes. There are times when we deliberatelyallow mixing of things by the definition of what namespaces are andthere are some use cases where people don't want this.> 2. are you looking for a potentially easier fix to the current absence> of isolation in the user namespace?No. I'm not even worrying about the user namespace until it resemblescomplete. Currently I just view it as a stub because as is, thesecurity namespace is pretty much useless for any case I think about.We still have way to many cases where the kernel treats differentnames as the same name.> 3. are you just generally looking to make lsm/selinux easier for> yourself to configure?Well. I'm trying to make the LSM more useful to hack on and configure,and much less contentions for ordinary people to use.There is one issue with sockets that has come up where there arepeople who really want to filter things at connect and bind time.The LSM is so inflexible the only sane suggestion at the time wasto duplicate the LSM hooks and add an new iptable style tablefor making that decision.Also I'm thinking towards what do we have to do isolate the securitymodule stuff in the context of a namespace. 
So that a person ina container can setup their own rules that further restrict thesystem.So far I'm not ready to do anything yet but I'm keeping a weather eyeon the situation so I have a clue what I'm go.> If 1, an selinux policy should cover you. So you can then skip to 3.> Or, alternatively, I do plan - as soon as my free time clears up a bit -> on demonstrating how to write some selinux policy to create a secure> container based on current -mm + your experimental network namespace> patches.Thanks that sounds interesting.> If 3, then selinux policy modules may actually help you, else either> a new LSM (maybe like LIDS) or a userspace tool which is a front-end to> selinux policy, emulating the iptables rules formats, may be what you> want?I don't want to have to choose my LSM at compile time. I want toadd support into the kernel at compile time and be able to configureit before I go multi-user. I know this kind of architecture isachievable because iptables allows it.When I conceive as the security modules as just a firewall betweenapplications on my own box I think, oh yeah this is no big deal,I might want to limit something that way some time. These are justsome additional rules on when to return -EPERM. So I ask myself whyis this situation much less flexible and much harder to use then ournetwork firewall code?>> My impression is that selinux is one monolithic blob that doesn't>> allow me to incrementally add matching or action features that I>> find interesting.>> Actually with policy modules it gets much much better. I have in fact> been able to pretty easily write a short policy module to, say, create> an selinux user which ran as root and had full access to the system to> do system setup for automated testing. There is a learning curve in> having to look at existing modules for maybe a few days to get started,> but once you get started the policy modules do make it very easy to> add to current policy.Ok. Interesting. 
Are these kernel modules?Still while I get the general impression that selinux seems to bevery close to a generic solution, and that selinux more or less hasthe architecture we might want. I don't get the impression thatselinux does this at a level that is open to other people doinginteresting things.So I still ask the question can we move this functionality down tothe LSM in a way that will solve the composition problem betweenmultiple security modules?It really seems to me that the LSM as currently structured createsa large barrier to entry for people who have just this little thingthey want to do that is not possible with any existing securitymodule.Eric-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2007/10/8/102 | CC-MAIN-2016-18 | refinedweb | 822 | 61.16 |
Concurrency Patterns
Overview
This page focuses on concurrency patterns applicable to embedded software, that is, software running on anything from an 8-bit microcontroller to embedded Linux, in either C or C++.
The terms thread 1 and thread 2 are used as imaginary threads that need to share data in the following examples. These threads are treated as persistent threads which each carry out sporadic, important tasks, rather than a master/worker thread arrangement.
Some people, when confronted with a problem, think, “I know, I’ll use threads!” Now they have 10 problems. –Bill Schindler
Message Queues
Example
```cpp
#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>
#include <thread>

#include "ThreadSafeQueue.hpp"

using namespace mn::CppUtils;

struct Cmd {
    std::string name;
    std::shared_ptr<void> data;
};

// We require a thread-safe queue, which is not part of the standard!
// See
ThreadSafeQueue<Cmd> queue_;

void ThreadFn() {
    Cmd cmd;
    while(true) {
        queue_.Pop(cmd);
        std::cout << "Received command. cmd.name = " << cmd.name << std::endl;
        if(cmd.name == "CMD_1") {
            auto data = std::static_pointer_cast<std::string>(cmd.data); // Cast back to exact data type
            std::cout << "Received data (as string) = \"" << *data << "\"" << std::endl;
        } else if(cmd.name == "CMD_2") {
            // Do stuff with cmd.data
        } else if(cmd.name == "QUIT") {
            // Break from infinite while loop, which will mean that
            // this function will return and then thread.join() will
            // return
            break;
        } else
            throw std::runtime_error("Command name not recognized.");
    }
}

int main() {
    std::thread t(ThreadFn);

    auto data = std::shared_ptr<std::string>(new std::string("hello"));
    Cmd cmd;
    cmd.name = "CMD_1";
    cmd.data = std::static_pointer_cast<void>(data); // Cast away the exact data type
    queue_.Push(cmd);

    cmd.name = "QUIT";
    cmd.data = nullptr; // Some commands may not need data!
    queue_.Push(cmd);

    t.join();
}
```
Run this example online at.
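The `ThreadSafeQueue` used above is not part of the C++ standard library. As a rough sketch of what such a class might look like (an illustrative assumption, not the actual implementation the example uses), a mutex plus a condition variable is enough for a blocking `Push()`/`Pop()` pair:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal sketch of a thread-safe queue. Pop() blocks until data arrives.
template <typename T>
class ThreadSafeQueue {
public:
    void Push(T item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(item));
        }
        cv_.notify_one(); // Wake one thread blocked in Pop()
    }

    void Pop(T& item) {
        std::unique_lock<std::mutex> lock(mutex_);
        // Predicate guards against spurious wakeups
        cv_.wait(lock, [this]{ return !queue_.empty(); });
        item = std::move(queue_.front());
        queue_.pop();
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};
```

Because `Pop()` blocks until a message arrives, the receiving thread's message-processing loop can sleep instead of polling.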
Pros/Cons
- Easier to prove that deadlocks do not exist
- Thread 1 cannot easily get data back from thread 2, as the only way to communicate is through the message queues. Thread 1 would have to send a message to thread 2 requesting data, and then thread 2 would have to send a message back to thread 1 with the data. This can break the “flow” of the code for thread 1.
- Difficulties in safely handling multiple “types” of data sent on the message queue. The example above creates the data on the heap, creates a shared pointer to it and then casts away the type to std::shared_ptr<void>. You then have to make sure the receiving thread casts back to the correct type depending on the message.
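One way to reduce the casting risk, if C++17 is available (an alternative sketch, not part of the original example), is to carry the payload in a `std::variant` so the compiler tracks which type is active:

```cpp
#include <string>
#include <variant>

// Each command carries a payload whose type is known to the compiler,
// instead of a std::shared_ptr<void> that must be cast back manually.
struct Cmd1 { std::string text; };
struct Cmd2 { int value; };
struct Quit {};

using Cmd = std::variant<Cmd1, Cmd2, Quit>;

// What the receiving thread might run for each popped message
std::string Describe(const Cmd& cmd) {
    if(auto* c = std::get_if<Cmd1>(&cmd)) return "CMD_1: " + c->text;
    if(auto* c = std::get_if<Cmd2>(&cmd)) return "CMD_2: " + std::to_string(c->value);
    return "QUIT";
}
```

A `std::get_if` on the wrong alternative returns `nullptr` rather than silently misinterpreting memory, which is the failure mode of casting a `std::shared_ptr<void>` back to the wrong type.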
Synchronization Objects
The most basic form of synchronization object is a mutex. When using C, popular operating systems such as FreeRTOS or Linux provide OS-specific mutexes. If you are using C++ and have the standard library available, you can use std::mutex (as of C++11).
Why do we have to use synchronization objects? Because if more than one thread happens to write to the same memory at the same time, we run into problems.
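As a minimal sketch of why the mutex matters (illustrative, not one of the page's own examples): two threads incrementing a shared counter will lose updates unless every read-modify-write happens under the lock.

```cpp
#include <mutex>
#include <thread>

// Shared state. Without the lock, the two threads below could interleave
// their read-modify-write sequences and lose increments.
int counter_ = 0;
std::mutex mutex_;

void Increment() {
    for(int i = 0; i < 100000; i++) {
        std::lock_guard<std::mutex> lock(mutex_); // Released automatically at end of scope
        counter_++;
    }
}

void RunBothThreads() {
    std::thread t1(Increment);
    std::thread t2(Increment);
    t1.join();
    t2.join();
    // With the mutex, counter_ is now exactly 200000. Comment out the
    // lock_guard line and the final count will usually come up short.
}
```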
Pros/Cons
- Enables thread 1 to call a standard public function belonging to thread 2, along with all the benefits that go along with this such as type-safe input arguments and return arguments.
- Threads still require some notification object to block on.
- It is harder for thread 1 to tell thread 2 to do some “work”. Whilst in a message queue system thread 1 can just send thread 2 a “do work” message, with synchronization objects alone there is no queue for thread 2 to receive such a request on, so some other notification mechanism is needed.
A Hybrid Approach
What if we used a message queue for the sending thread to tell the receiving thread to perform some work, and a synchronization object for when the sending thread just wants to access some data from the receiving thread?
This is possible with the use of a message queue for incoming messages and a synchronization object to synchronize the receiving message loop with the data accesses.
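A minimal sketch of this hybrid shape (the class and method names here are illustrative assumptions, not from the original page): the worker blocks on its queue for commands, while the same mutex lets other threads read the worker's data directly without a message round trip.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Worker {
public:
    Worker() : thread_(&Worker::Process, this) {}
    ~Worker() { Post("EXIT"); thread_.join(); }

    // Called by other threads to hand the worker some work (message queue path)
    void Post(std::string cmd) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(cmd));
        cv_.notify_one();
    }

    // Called by other threads to read data directly (synchronization object path)
    std::string GetData() {
        std::lock_guard<std::mutex> lock(mutex_);
        return data_;
    }

private:
    void Process() {
        while(true) {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this]{ return !queue_.empty(); });
            std::string cmd = queue_.front();
            queue_.pop();
            if(cmd == "EXIT") break;
            data_ = "processed: " + cmd; // The "work": mutate shared data under the lock
        }
    }

    std::mutex mutex_;               // Guards both queue_ and data_
    std::condition_variable cv_;
    std::queue<std::string> queue_;
    std::string data_;
    std::thread thread_;             // Declared last so it starts after the other members exist
};
```

The single mutex synchronizes the message loop with the direct data accesses, so a caller gets synchronous, type-safe reads via `GetData()` while "do work" requests still flow through the queue.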
A Message Queue That Can Wait For Return Data
One way to solve the “no return data” issue with message queues is for the sending thread to include a thread-safe data object along with the rest of the message data. When the message is processed in the receiving thread, it calculates the return data and hands it back to the sending thread via the thread-safe data object.
This can be implemented in C++ by using std::future and std::promise. These are synchronization objects that allow data to be transmitted between threads. The below code shows an example of this. Note how the main() function can call thread1.SetData() and then block and wait for return data by calling thread1.GetData(). Both of these calls result in messages arriving in thread 1’s message queue.
///
/// \file            MsgQueueTests.cpp
/// \author          Geoffrey Hunter <gbmhunter@gmail.com>
/// \edited          n/a
/// \created         2017-10-24
/// \last-modified   2017-10-25
/// \brief           Contains tests for the MsgQueue class.
/// \details
///

// System includes
#include <future>
#include <iostream>
#include <memory>
#include <thread>

// User includes
#include "MsgQueue.hpp"

using namespace mn::CppUtils::MsgQueue;

class Thread1 {
public:
    Thread1() {
        thread_ = std::thread(&Thread1::Process, this);
    }

    ~Thread1() {
        if(thread_.joinable()) {
            queue_.Push(TxMsg("EXIT"));
            thread_.join();
        }
    }

    void SetData(std::string data) {
        auto dataOnHeap = std::make_shared<std::string>(data);
        queue_.Push(TxMsg("SET_DATA", dataOnHeap));
    }

    std::string GetData() {
        TxMsg msg("GET_DATA", ReturnType::RETURN_DATA);
        queue_.Push(msg);
        auto retVal = msg.WaitForData();
        return *std::static_pointer_cast<std::string>(retVal);
    }

private:
    void Process() {
        RxMsg msg;
        // This loop can be broken by sending the "EXIT" msg!
        while(true) {
            queue_.Pop(msg);

            //==============================================//
            //============= MSG PROCESSING LOOP ============//
            //==============================================//
            if(msg.GetId() == "SET_DATA") {
                auto data = std::static_pointer_cast<std::string>(msg.GetData()); // Cast back to exact data type
                data_ = *data;
            } else if(msg.GetId() == "GET_DATA") {
                auto retData = std::make_shared<std::string>(data_);
                msg.ReturnData(retData);
                break;
            } else if(msg.GetId() == "EXIT") {
                // Break from infinite while loop, which will mean that
                // this function will return and then thread.join() will
                // return
                break;
            } else
                throw std::runtime_error("Command name not recognized.");
        }
    }

    std::thread thread_;
    MsgQueue queue_;
    std::string data_;
};

int main() {
    Thread1 thread1;
    std::cout << "Sending \"Hello\" data to thread1." << std::endl;
    thread1.SetData("Hello");
    auto returnedData = thread1.GetData();
    std::cout << "Returned data from thread1 = \"" << returnedData << "\"." << std::endl;
    return 0;
}
The above code can be run online at. | https://blog.mbedded.ninja/programming/design-patterns/concurrency-patterns/ | CC-MAIN-2021-31 | refinedweb | 1,012 | 57.67 |
The "House of Cards" video was the first music video to be premiered by Google. It launched on July 11, 2008. The Google site includes some of the video's data, so that you may create your own visualizations, as well as a 3-D data visualization tool. Google's Creative Lab developed the site.
The visualization tool was written in Flash by myself and my friend Aaron Meyers. It allows the viewer to rotate the point cloud in real time while the video is playing. To me, this is where the data becomes truly beautiful. The Flash application allows you to look at parts of the video from any angle you want in real time, something traditional video recording will never allow. You may even turn Thom Yorke's face so that it faces away from you, effectively holding his face as a mask up to yours and allowing you to look through his eyes. This effect is very powerful, in my opinion. It makes the music video tangible in a way I doubt many people have experienced before.
We also released some of the data itself—making it open source—along with a video creation tool written in the Processing programming language. We then encouraged people to download the data and create their own videos.
I want to share the source code for the video creation tool to show you how easy it is to create your own version of the video in Processing. This is the code that outputs frames of Thom Yorke singing:
import processing.opengl.*;

int frameCounter = 1; //Declare a variable to store which frame we're ...
Bit Attack
Overview
Simply put, a bit attack exploits the relationships between individual bits to attack a scheme.
2018 Plaid CTF transducipher
The challenge is as follows:
#!/usr/bin/env python3.6
import os

BLOCK_SIZE = 64

T = [
    ((2, 1), 1),
    ((5, 0), 0),
    ((3, 4), 0),
    ((1, 5), 1),
    ((0, 3), 1),
    ((4, 2), 0),
]


def block2bin(b, length=BLOCK_SIZE):
    return list(map(int, bin(b)[2:].rjust(length, '0')))


def bin2block(b):
    return int("".join(map(str, b)), 2)


def transduce(b, s=0):
    if len(b) == 0:
        return b
    d, t = T[s]
    b0, bp = b[0], b[1:]
    return [b0 ^ t] + transduce(bp, s=d[b0])


def transduceblock(b):
    return bin2block(transduce(block2bin(b)))


def swap(b):
    l = BLOCK_SIZE // 2
    m = (1 << l) - 1
    return (b >> l) | ((b & m) << l)


class Transducipher:
    def __init__(self, k):
        self.k = [k]
        for i in range(1, len(T)):
            k = swap(transduceblock(k))
            self.k.append(k)

    def encrypt(self, b):
        for i in range(len(T)):
            b ^= self.k[i]
            b = transduceblock(b)
            b = swap(b)
        return b


if __name__ == "__main__":
    flag = bytes.hex(os.urandom(BLOCK_SIZE // 8))
    k = int(flag, 16)
    C = Transducipher(k)
    print("Your flag is PCTF{%s}" % flag)
    with open("data1.txt", "w") as f:
        for i in range(16):
            pt = int(bytes.hex(os.urandom(BLOCK_SIZE // 8)), 16)
            ct = C.encrypt(pt)
            f.write(str((pt, ct)) + "\n")
The challenge provides 16 plaintext/ciphertext pairs.
- The plaintext block size is 8 bytes.
- The ciphertext block size is 8 bytes.
- The key size is also 8 bytes.
What we need to recover is the key.
It can be seen that there are two main operations here.
- swap
def swap(b):
    l = BLOCK_SIZE // 2
    m = (1 << l) - 1
    return (b >> l) | ((b & m) << l)
Swaps the upper 32 bits of the given data with the lower 32 bits.
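Since swap only exchanges the two 32-bit halves, applying it twice must return the original value. A quick standalone check (repeating the definition from above):

```python
BLOCK_SIZE = 64

def swap(b):
    l = BLOCK_SIZE // 2
    m = (1 << l) - 1
    return (b >> l) | ((b & m) << l)

value = 0x1122334455667788
swapped = swap(value)

assert swapped == 0x5566778811223344  # the two halves are exchanged
assert swap(swapped) == value         # swap is its own inverse
```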
- transduce
T = [
    ((2, 1), 1),
    ((5, 0), 0),
    ((3, 4), 0),
    ((1, 5), 1),
    ((0, 3), 1),
    ((4, 2), 0),
]

def transduce(b, s=0):
    if len(b) == 0:
        return b
    d, t = T[s]
    b0, bp = b[0], b[1:]
    return [b0 ^ t] + transduce(bp, s=d[b0])
where:
- b is an array of 0/1 bits, with an initial length of 64.
- s is an index into T (the transducer state).
The basic process is as follows:
- Select which element of T to use based on s, and split it into d and t.
- Split b into two parts: the head element, and the rest of the elements.
- XOR the head element with t to produce the current output bit, then transduce the rest starting from the new state d[b0].
In fact, we can rewrite this function in an equivalent iterative form:
def transduce_iter(b, s=0):
    ans = []
    for c in b:
        d, t = T[s]
        ans += [c ^ t]
        s = d[c]
    return ans
And since each output bit is just the corresponding input bit XORed with t, the input bit (and hence the next state) can always be recovered, so the function is invertible:
def invtransduce(b, s=0):
    if len(b) == 0:
        return b
    d, t = T[s]
    b0, bp = b[0], b[1:]
    return [b0 ^ t] + invtransduce(bp, s=d[b0 ^ t])
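We can sanity-check both claims here, that the iterative version matches the recursive one and that transduce round-trips through its inverse, with a short standalone script (definitions repeated so it runs on its own; note the inverse must recurse into itself and thread the state off the recovered input bit):

```python
import random

T = [
    ((2, 1), 1),
    ((5, 0), 0),
    ((3, 4), 0),
    ((1, 5), 1),
    ((0, 3), 1),
    ((4, 2), 0),
]

def transduce(b, s=0):
    if len(b) == 0:
        return b
    d, t = T[s]
    b0, bp = b[0], b[1:]
    return [b0 ^ t] + transduce(bp, s=d[b0])

def transduce_iter(b, s=0):
    ans = []
    for c in b:
        d, t = T[s]
        ans += [c ^ t]
        s = d[c]   # state transition depends on the input bit
    return ans

def invtransduce(b, s=0):
    if len(b) == 0:
        return b
    d, t = T[s]
    b0, bp = b[0], b[1:]
    # b0 ^ t recovers the original input bit, which drives the state
    return [b0 ^ t] + invtransduce(bp, s=d[b0 ^ t])

random.seed(0)
bits = [random.randint(0, 1) for _ in range(64)]

assert transduce_iter(bits) == transduce(bits)  # iterative == recursive
assert invtransduce(transduce(bits)) == bits    # the inverse round-trips
```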
Now for the core flow of the program. First comes key generation: the cipher derives 6 round keys, where each new key is produced from the previous one by
- applying transduce to the previous key to get an intermediate value t,
- applying swap to t,
and this step is iterated 5 times in total.
def __init__(self, k):
    self.k = [k]
    for i in range(1, len(T)):
        k = swap(transduceblock(k))
        self.k.append(k)
The encryption algorithm is as follows: 6 rounds in total, where each round
1. XORs in the round key
2. applies transduce
3. applies swap
def encrypt(self, b):
    for i in range(len(T)):
        b ^= self.k[i]
        b = transduceblock(b)
        b = swap(b)
    return b
From this analysis we can see that the encryption algorithm is a block cipher with the following basic properties:
- Block size: 8 bytes
- Number of rounds: 6
- The basic operations of each round are transduce and swap.
- The key schedule is also built from transduce and swap.
More specifically:
- swap exchanges the upper 32 bits of the 8-byte block with the lower 32 bits.
- transduce XORs each bit of the 8-byte block with a bit determined by the transducer table T and its evolving state.
Through further analysis, we can find that both of these functions are invertible. That is to say, if we know the final ciphertext, we can effectively shorten the cipher to about 5 rounds, because the final round's transduce and swap can simply be undone.
We can define the following variables
| Symbol | Meaning |
| --- | --- |
| k_{i,0} | The upper 32 bits of the key used in round i |
| k_{i,1} | The lower 32 bits of the key used in round i |
| d_{i,0} | The upper 32 bits of the input to round i |
| d_{i,1} | The lower 32 bits of the input to round i |
Since swap only exchanges the upper and lower 32 bits, we can track the two halves separately. To simplify notation:
- transduce is abbreviated as T (this clashes with the transducer table in the source code, but the meaning is clear from context).
- swap is abbreviated as S.
The round keys and intermediate data then evolve as follows:
| Round i | k_{i,0} | d_{i+1,0} | k_{i,1} | d_{i+1,1} |
| --- | --- | --- | --- | --- |
| 0 | k_{0,0} | d_{1,0}=T(k_{0,1} \oplus d_{0,1} ,s) | k_{0,1} | d_{1,1}=T(k_{0,0} \oplus d_{0,0}) |
| 1 | k_{1,0}=T(k_{0,1},s) | d_{2,0}=T(k_{1,1} \oplus d_{1,1} ,s) | k_{1,1}=T(k_{0,0}) | d_{2,1}=T(k_{1,0} \oplus d_{1,0}) |
| 2 | k_{2,0}=T(k_{1,1},s) | d_{3,0}=T(k_{2,1} \oplus d_{2,1} ,s) | k_{2,1}=T(k_{1,0}) | d_{3,1}=T(k_{2,0} \oplus d_{2,0}) |
| 3 | k_{3,0}=T(k_{2,1},s) | d_{4,0}=T(k_{3,1} \oplus d_{3,1} ,s) | k_{3,1}=T(k_{2,0}) | d_{4,1}=T(k_{3,0} \oplus d_{3,0}) |
| 4 | k_{4,0}=T(k_{3,1},s) | d_{5,0}=T(k_{4,1} \oplus d_{4,1} ,s) | k_{4,1}=T(k_{3,0}) | d_{5,1}=T(k_{4,0} \oplus d_{4,0}) |
| 5 | k_{5,0}=T(k_{4,1},s) | d_{6,0}=T(k_{5,1} \oplus d_{5,1} ,s) | k_{5,1}=T(k_{4,0}) | d_{6,1}=T(k_{5,0} \oplus d_{5,0}) |
We can then brute-force the upper 32 bits of k bit by bit, enumerating the possible transducer states s at each T operation along the way, which recovers the high 32 bits of the key. This bit-by-bit search yields two candidates:
[2659900894, 2659900895]
From the high-half candidates we can then derive the matching low halves. Using 2659900894, the possibilities are:

# The first ciphertext pair leaves too many candidate keys, so it is skipped.
# The second pair leaves a total of 6:
[2764038144, 2764038145, 2764038152, 2764038153, 2764038154, 2764038155]
# The third pair:
[2764038144, 2764038145]
We can then simply try each remaining candidate key: encrypt all the known plaintexts and discard any key that produces a wrong ciphertext. This filters the candidates very quickly. Finally, you can find that the key is
2659900894|2764038145
That is, 11624187353095200769, which also gives us the flag.
Of course, this problem can also be solved with a meet-in-the-middle attack: enumerate the key used in round 0 and the key used in the last round separately, and look for a collision in the third round.
Below are three functions that calculate a user's holiday cost. The user is prompted to enter details of his holiday, which are then passed into the functions as arguments.
def hotel_cost(days):
    days = 140*days
    return days

"""This function returns the cost of the hotel. It takes a user-inputted argument, multiplies it by 140 and returns it as the total cost of the hotel"""
def plane_ride_cost(city):
    if city=="Charlotte":
        return 183
    elif city=="Tampa":
        return 220
    elif city=="Pittsburgh":
        return 222
    elif city=="Los Angeles":
        return 475

"""this function returns the cost of a plane ticket to the user's selected city"""
def rental_car_cost(days):
    rental_car_cost=40*days
    if days >=7:
        rental_car_cost -= 50
    elif days >=3:
        rental_car_cost -= 20
    return rental_car_cost

"""this function calculates car rental cost"""
user_days=raw_input("how many days would you be staying in the hotel?")
"""user to enter number of days intended for holiday"""

user_city=raw_input("what city would you be visiting?")
"""user to enter a city from one of the above choices"""
print hotel_cost(user_days)
print plane_ride_cost(user_city)
print rental_car_cost(user_days)
You need to convert the output of raw_input to int. This should work:
user_days=int(raw_input("how many days would you be staying in the hotel?"))
Note that if user enters anything but a number, this will raise an error. | https://codedump.io/share/iErdccGklO2E/1/why-can39t-i-pass-this-int-type-variable-as-an-argument-in-python | CC-MAIN-2017-34 | refinedweb | 228 | 53.24 |
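If you would rather re-prompt than crash, one option is to wrap the conversion in a try/except. The helper name below is ours, just for illustration; the parsing step is the same whether the text comes from raw_input in Python 2 or input in Python 3:

```python
def parse_days(text):
    """Return the entered number of days as an int, or None if it isn't a number."""
    try:
        return int(text)
    except ValueError:
        return None

print(parse_days("7"))      # 7
print(parse_days("seven"))  # None, so the caller can re-prompt instead of crashing
```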
Opened 9 years ago
Closed 2 years ago
#7074 closed Bug (worksforme)
MySQL error/warning when 'gt' field lookup with a datetime field and fulltext search.
Description (last modified by )
This query produces the following traceback:
Keyword.objects.filter(keyword__search=term, keyworddata__updated__gt=datetime.datetime.now(), keyworddata__source="1")
term = 'test' In [28]: Keyword.objects.filter(keyword__search=term, keyworddata__updated__gt=datetime.datetime.now(), keyworddata__source="1" ).select_related() Out[29]: --------------------------------------------------------------------------- Warning Traceback (most recent call last) /home/lybp/dev/lybp/<ipython console> in <module>() /usr/local/lib/python2.5/site-packages/ipython-0.8.2-py2.5.egg/IPython/Prompts.py in __call__(self, arg) 533 534 # and now call a possibly user-defined print mechanism --> 535 manipulated_val = self.display(arg) 536 537 # user display hooks can change the variable to be stored in /usr/local/lib/python2.5/site-packages/ipython-0.8.2-py2.5.egg/IPython/Prompts.py in _display(self, arg) 559 return IPython.generics.result_display(arg) 560 except TryNext: --> 561 return self.shell.hooks.result_display(arg) 562 563 # Assign the default display method: /usr/local/lib/python2.5/site-packages/ipython-0.8.2-py2.5.egg/IPython/hooks.py in __call__(self, *args, **kw) 132 #print "prio",prio,"cmd",cmd #dbg 133 try: --> 134 ret = cmd(*args, **kw) 135 return ret 136 except ipapi.TryNext, exc: /usr/local/lib/python2.5/site-packages/ipython-0.8.2-py2.5.egg/IPython/hooks.py in result_display(self, arg) 160 161 if self.rc.pprint: --> 162 out = pformat(arg) 163 if '\n' in out: 164 # So that multi-line strings line up with the left column of /usr/local/lib/python2.5/pprint.py in pformat(self, object) 109 def pformat(self, object): 110 sio = _StringIO() --> 111 self._format(object, sio, 0, 0, {}, 0) 112 return sio.getvalue() 113 /usr/local/lib/python2.5/pprint.py in _format(self, object, stream, indent, allowance, context, level) 127 self._readable = False 128 return --> 129 rep = self._repr(object, context, level - 1) 130 typ = _type(object) 131 sepLines = _len(rep) > (self._width - 1 - indent - allowance) /usr/local/lib/python2.5/pprint.py in _repr(self, object, context, level) 193 def _repr(self, object, context, level): 194 repr, readable, recursive = 
self.format(object, context.copy(), --> 195 self._depth, level) 196 if not readable: 197 self._readable = False /usr/local/lib/python2.5/pprint.py in format(self, object, context, maxlevels, level) 205 and whether the object represents a recursive construct. 206 """ --> 207 return _safe_repr(object, context, maxlevels, level) 208 209 /usr/local/lib/python2.5/pprint.py in _safe_repr(object, context, maxlevels, level) 290 return format % _commajoin(components), readable, recursive 291 --> 292 rep = repr(object) 293 return rep, (rep and not rep.startswith('<')), False 294 /usr/local/lib/python2.5/site-packages/django/db/models/query.py in __repr__(self) 106 107 def __repr__(self): --> 108 return repr(self._get_data()) 109 110 def __len__(self): /usr/local/lib/python2.5/site-packages/django/db/models/query.py in _get_data(self) 484 def _get_data(self): 485 if self._result_cache is None: --> 486 self._result_cache = list(self.iterator()) 487 return self._result_cache 488 /usr/local/lib/python2.5/site-packages/django/db/models/query.py in iterator(self) 187 188 cursor = connection.cursor() --> 189 cursor.execute("SELECT " + (self._distinct and "DISTINCT " or "") + ",".join(select) + sql, params) 190 191 fill_cache = self._select_related /usr/local/lib/python2.5/site-packages/django/db/backends/util.py in execute(self, sql, params) 16 start = time() 17 try: ---> 18 return self.cursor.execute(sql, params) 19 finally: 20 stop = time() /home/lybp/dev/lybp/build/bdist.linux-i686/egg/MySQLdb/cursors.py in execute(self, query, args) /home/lybp/dev/lybp/build/bdist.linux-i686/egg/MySQLdb/cursors.py in _warning_check(self) /usr/local/lib/python2.5/warnings.py in warn(message, category, stacklevel) 60 registry = globals.setdefault("__warningregistry__", {}) 61 warn_explicit(message, category, filename, lineno, module, registry, ---> 62 globals) 63 64 def warn_explicit(message, category, filename, lineno, /usr/local/lib/python2.5/warnings.py in warn_explicit(message, category, 
filename, lineno, module, registry, module_globals) 100 101 if action == "error": --> 102 raise message 103 # Other actions 104 if action == "once": Warning: Truncated incorrect DOUBLE value: '2008-04-23 14:39:36.133203' In [30]: Keyword.objects.filter(keyword__search=term, keyworddata__updated=datetime.datetime.now(), keyworddata__source="1" ).select_related() Out[31]: [] In [12]: Keyword.objects.filter(keyword=term, keyworddata__updated__gt=datetime.datetime.now(), keyworddata__source="1" ).select_related() Out[13]: []
If I change keyword__search= to keyword=, I don't get the error. Also if I use keyworddata__updated= instead of keyworddata__updated__gt I don't get the error.
KeywordData is a model with a foreign key pointing to keyword - so I guess it's a reverse lookup.
I'm using MySQL 5.0.45.
Change History (23)
comment:1 Changed 9 years ago by
comment:2 Changed 8 years ago by
Fixed description.
comment:3 Changed 8 years ago by
comment:4 Changed 8 years ago by
comment:5 Changed 8 years ago by
This seems to be trying to report two bugs in the one report, which is a no-no. Let's have one ticket about the __gt comparison and one about the full text searching, each with a small model (as small as possible) demonstrating the problem.
Note that Erik's comment isn't the source of the problem, since connection.queries shows the query prior to it being passed to the Python DB wrapper, which is where any quoting takes place.
This needs confirmation and a simpler example and I'm in a triage mode at the moment, so I'm not going to do that right now. If somebody else has time and inclination to reproduce this, it would be appreciated.
comment:6 Changed 8 years ago by
I think it's only one (MySQL) bug, some interaction with fulltext search and greater than (or gte) comparison involving dates. I can recreate in just plain mysql, but if I change the gte to lte or the fulltext search to a like search, the warning goes away.
mysql> select count(Clue) from Clues natural join Puzzles where match (Clue) against ('Marceau') and `Date` >= '2008-06-26'; +-------------+ | count(Clue) | +-------------+ | 1 | +-------------+ 1 row in set, 1 warning (0.00 sec) mysql> show warnings; +---------+------+------------------------------------------------+ | Level | Code | Message | +---------+------+------------------------------------------------+ | Warning | 1292 | Truncated incorrect DOUBLE value: '2008-06-26' | +---------+------+------------------------------------------------+ 1 row in set (0.00 sec) mysql> select count(Clue) from Clues natural join Puzzles where match (Clue) against ('Marceau') and `Date` <= '2008-06-26'; +-------------+ | count(Clue) | +-------------+ | 23 | +-------------+ 1 row in set (0.01 sec) mysql> select count(Clue) from Clues natural join Puzzles where Clue like '%Marceau%' and `Date` >= '2008-06-26'; +-------------+ | count(Clue) | +-------------+ | 1 | +-------------+ 1 row in set (0.00 sec)
That's just bizarre, so I searched for MySQL bugs involving this message and found this:
which looks like a pretty good match. They've verified it's a bug but have no fix yet. They list a workaround to use CAST(date_val AS DATE)...somehow I'm thinking that wouldn't be trivial for the ORM to start doing?
comment:7 Changed 8 years ago by
Impressive debugging (and searching), again, Karen. Thanks. But... good grief! :-(
Since we've also got integers needing to be cast to text for PostgreSQL 8.3 in another ticket, it looks like it's time to crank out some general code to explicitly cast to the right type in a bunch of cases. Maybe it's not too horrible, since we really only convert Python values to SQL lookup values in one place. I'll look into it.
comment:8 Changed 8 years ago by
comment:9 Changed 8 years ago by
I took a look at this. The cast needs to be added on the right-hand-side, not the left as is done by the use of lookup_cast in [8242]. There doesn't seem to be any general backend hook for adding casting sql to the right-hand-side.
What there is is datetime_cast_sql (currently only implemented by the Oracle backend) that is called from source:django/trunk/django/db/models/sql/where.py in make_atom in the case where value_annot is datetime.datetime:
if value_annot is datetime.datetime:
    cast_sql = connection.ops.datetime_cast_sql()
else:
    cast_sql = '%s'
Implementing this for mysql as:
def datetime_cast_sql(self):
    return 'CAST(%s AS DATETIME)'
will actually fix the problem exactly as reported, but not the identical problem that exists for just plain DATE fields, nor the case where the values are passed in as strings not date[time] objects. Keeping the above datetime_cast_sql, plus defining a new general right-hand-side casting hook and implementing it for mysql like so:
def rhs_cast_sql(self, db_type):
    if db_type in ['date', 'datetime']:
        return 'CAST(%%s AS %s)' % db_type
    return '%s'
plus changing the code in where.py make_atom to call it:
if value_annot is datetime.datetime:
    cast_sql = connection.ops.datetime_cast_sql()
else:
    cast_sql = connection.ops.rhs_cast_sql(db_type)
works to fix the reported problem for both DATE and DATETIME fields (and covers the case where values are passed as strings, not date[time] objects).
But that puts the mysql fix in two different places and that if value_annot...else is a bit mysterious looking. I'd think it would be better to have a single general hook that covers both what the existing Oracle datetime_cast_sql function does and what's needed for mysql for this bug. Unfortunately I have zero knowledge of Oracle so I'm not sure what the value of db_type is for which the current datetime_cast_sql function is being called. (And value_annot is apparently not useful for datetime.date values -- it's set to 'True' for them? So value_annot can't be used in the general hook case.)
So, in summary I think a fix for this would be to implement a general right-hand-side casting hook for the database backends but doing that correctly requires some Oracle knowledge I don't have. I'll investigate a little more if anyone cares to provides clues for me and thinks this approach is worthwhile.
[Also, I don't know how to write a test for this. The MySQL bug only manifests itself when you've got a full-text index involved in the WHERE clause. So it would have to be a test that ran only when the backend was MySQL with the MyISAM storage engine, that created a full-text index (via custom sql?) and used it in a lookup in combination with a date/datetime gt/gte lookup. Not sure it's worth figuring out how to do all that in a test? Actually I'm not sure this problem is worth fixing in Django since it's really a MySQL bug, but as of today the MySQL bug is still open with no fix in sight other than the casting workaround, so it's still there for Django users to hit.]
comment:10 Changed 8 years ago by
In my patch for [8242], I initially looked into creating a cast function for values, in addition to the field cast function there currently. A couple notes:
- The
db_typevalues are defined per Field type in the backend's creation module (
DATEand
TIMESTAMPfor Oracle).
- I tried to implement a generic value casting method, whilst doing away with
datetime_cast_sql. The problem is then you need at least the value's type to be passed through (as
value_annotdoes now) to the cast method.
- It seems like
value_annotshould be split to avoid it's current dual function of knowing when a value evaluates to True/False, and specifying the value's type for datetimes. However, that tuple that gets passed around is also used by GIS, so it's not completely internal.
So a generic
rhs_cast_sql function for Oracle may look something like this:
def rhs_cast_sql(self, db_type, value_type): if db_type == 'TIMESTAMP' and value_type is datetime.datetime: return "TO_TIMESTAMP(%s, 'YYYY-MM-DD HH24:MI:SS.FF')" return "%s"
comment:11 Changed 8 years ago by
This needs to wait on a proper rhs-cast solution, which won't be in 1.0. So, for anyone reading this far down using 1.0: sorry -- try not to do this kind of lookup, and we'll fix this bug in the next release!
comment:12 Changed 8 years ago by
comment:13 Changed 8 years ago by
You can get around the bug by using .extra(). Instead of task_set.filter(field__gt=time), use task_set.extra(where=['field > DATE(%s)'], params=[time]).
comment:14 Changed 8 years ago by
Milestone post-1.0 deleted
comment:15 Changed 6 years ago by
I have just also experienced this problem, I am using software with below versions:
- Django 1.2.1
- MySQL 5.1.41
- python-mysqldb 1.2.2
Full traceback below:
In [20]: MessageSearch.objects.using('search').all().count() Out[20]: 826 In [21]: MessageSearch.objects.using('search').filter(search_full__search='newsy').filter(created__gt='2010-10-05').count() ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in multi-line statement', (215, 0)) ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in multi-line statement', (21, 0)) ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in multi-line statement', (70, 0)) --------------------------------------------------------------------------- Warning Traceback (most recent call last) /home/bluszcz/projekty/property/rynek_pierwotny/<ipython console> in <module>() /home/bluszcz/projekty/property/rynek_pierwotny/django/db/models/query.pyc in count(self) 324 return len(self._result_cache) 325 --> 326 return self.query.get_count(using=self.db) 327 328 def get(self, *args, **kwargs): /home/bluszcz/projekty/property/rynek_pierwotny/django/db/models/sql/query.pyc in get_count(self, using) 392 393 obj.add_count_column() --> 394 number = obj.get_aggregation(using=using)[None] 395 396 # Apply offset and limit constraints manually, since using LIMIT/OFFSET /home/bluszcz/projekty/property/rynek_pierwotny/django/db/models/sql/query.pyc in get_aggregation(self, using) 364 query.related_select_fields = [] 365 --> 366 result = query.get_compiler(using).execute_sql(SINGLE) 367 if result is None: 368 result = [None for q in query.aggregate_select.items()] /home/bluszcz/projekty/property/rynek_pierwotny/django/db/models/sql/compiler.pyc in execute_sql(self, result_type) 725 726 cursor = self.connection.cursor() --> 727 cursor.execute(sql, params) 728 729 if not result_type: 
/home/bluszcz/projekty/property/rynek_pierwotny/django/db/backends/util.pyc in execute(self, sql, params) 13 start = time() 14 try: ---> 15 return self.cursor.execute(sql, params) 16 finally: 17 stop = time() /home/bluszcz/projekty/property/rynek_pierwotny/django/db/backends/mysql/base.pyc in execute(self, query, args) 84 def execute(self, query, args=None): 85 try: ---> 86 return self.cursor.execute(query, args) 87 except Database.IntegrityError, e: 88 raise utils.IntegrityError, utils.IntegrityError(*tuple(e)), sys.exc_info()[2] /usr/lib/pymodules/python2.6/MySQLdb/cursors.pyc in execute(self, query, args) 166 self.errorhandler(self, exc, value) 167 self._executed = query --> 168 if not self._defer_warnings: self._warning_check() 169 return r 170 /usr/lib/pymodules/python2.6/MySQLdb/cursors.pyc in _warning_check(self) 80 self.messages.append((self.Warning, w)) 81 for w in warnings: ---> 82 warn(w[-1], self.Warning, 3) 83 elif self._info: 84 self.messages.append((self.Warning, self._info)) Warning: Truncated incorrect DOUBLE value: '2010-10-05 00:00:00' In [22]:
Someone mentioned above, that it could related with MySQL bug:
and looks like that it has been fixed there for versions 5.5+. Anyone maybe checked this?
comment:16 Changed 6 years ago by
comment:17 Changed 5 years ago by
Change UI/UX from NULL to False.
comment:18 Changed 5 years ago by
Change Easy pickings from NULL to False.
comment:19 Changed 4 years ago by
comment:20 Changed 2 years ago by
Working on this in my GSoC project.
comment:21 Changed 2 years ago by
Can someone help me in reproducing this now? Because __gt and __gte work for both DateTimeField and DateField. If that's not the only issue here, kindly point me to what I have been missing.
I did:
s = "2014-06-12 11:57"
Comment.objects.filter(article__pub_date__gte=s)
comment:22 Changed 2 years ago by
models.py
from django.db import models

class Test(models.Model):
    name = models.CharField(max_length=40)
    created = models.DateField()
    created2 = models.DateTimeField()
shell:
In [1]: from t19508.models import *
In [2]: import datetime
In [3]: d = datetime.datetime.now()
In [4]: d2 = d.date()
In [5]: t=Test.objects.using('mysql').create(name='anubhav', created=d.date(), created2=d)
In [6]: t=Test.objects.using('mysql').filter(created2__gte="2008-04-23 14:39:36.133203")
In [7]: t[0].created
Out[7]: datetime.date(2014, 7, 28)
In [8]: t[0].created2
Out[8]: datetime.datetime(2014, 7, 28, 23, 36, 13, tzinfo=<UTC>)
In [9]: t=Test.objects.using('mysql').filter(name__search="anubhav").filter(created2__gte="2008-04-23 14:39:36.133203")
In [10]: t[0].created
Out[10]: datetime.date(2014, 7, 28)
In [11]: t[0].created2
Out[11]: datetime.datetime(2014, 7, 28, 23, 36, 13, tzinfo=<UTC>)
I think that the problem no longer exists.
It looks to me like the date isn't being quoted - or at least that's what connection.queries says... | https://code.djangoproject.com/ticket/7074 | CC-MAIN-2016-44 | refinedweb | 2,878 | 50.73 |
iThingState Struct Reference

This is the state interface to access the internals of a thing mesh object.
[Mesh plugins]
#include <imesh/thing.h>
Detailed Description

This is the state interface to access the internals of a thing mesh object.
Main creators of instances implementing this interface:
- Thing mesh object plugin (crystalspace.mesh.object.thing)
- iMeshObjectFactory::NewInstance()
Main ways to get pointers to this interface:
Main users of this interface:
- Thing Loader plugin (crystalspace.mesh.loader.thing)
Definition at line 694 of file thing.h.
Member Function Documentation
Clear all replaced materials (i.e. reset to default materials from factory).
Create a polygon handle that can be used to refer to some polygon.
This can be useful in situations where an SCF handle is required to be able to reference a polygon. The thing will not keep a reference to this handle so you are fully responsible for it after calling this function.
Get mix mode.
Get the moving option.
Get the lightmap for a specific polygon.
Query for pseudo-static lightmaps.
Get world space plane of the specified polygon.
Return the material oldMat was replaced with (or 0 if it wasn't).
Get the given vertex coordinates in world space.
Get the vertex coordinates in world space.
Prepare the thing to be ready for use.
Normally this doesn't have to be called as the engine will call this function automatically as soon as the object is rendered. However, to avoid the (sometimes long) setup time for an object while walking around an application can choose to call this function manually in order to increase load time but decrease the time need to setup things later.
Scan all polygons and replace the given material with a new material.
Note that the new material MUST have the same size as the old material! If 'newmat' == 0 then the default from the factory will be used again. Note that 'oldmat' will always be compared from the factory and not from the current material the polygon has!
Set mix mode.
Control how this thing will be moved.
There are currently two options.
- CS_THING_MOVE_NEVER: this option is set for a thing that cannot move at all. In this case the movable will be ignored and only hard transforms can be used to move a thing with this flag. This setting is both efficient for memory (object space coordinates are equal to world space coordinates so only one array is kept) and render speed (only the camera transform is needed). This option is very useful for static geometry like walls. This option is default.
- CS_THING_MOVE_OCCASIONAL: this option is set for a thing that is movable but doesn't move all the time usually. Setting this option means that the world space vertices will be cached (taking up more memory that way) but the coordinates will be recalculated only at rendertime (and cached at that time). This option has the same speed efficiency as MOVE_NEVER when the object doesn't move but more memory is used as all the vertices are duplicated. Use this option for geometry that is not too big (in number of vertices) and only moves occasionally like doors of elevators.
Note: it is no longer needed to manually set this option. By default things will use CS_THING_MOVE_NEVER and they will automatically switch to the slightly less efficient CS_THING_MOVE_OCCASIONAL if needed.
Reset the prepare flag so that this Thing can be re-prepared.
Among other things this will allow cached lightmaps to be recalculated.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.2.1 by doxygen 1.5.3 | http://www.crystalspace3d.org/docs/online/api-1.2/structiThingState.html | CC-MAIN-2016-50 | refinedweb | 605 | 66.44 |
Inspired by excellent CobaltStrike training, I set out to work out an easy way to inject into processes in Linux. There’s been quite a lot of experimentation with this already, usually using
ptrace(2) or
LD_PRELOAD, but I wanted something a little simpler and less error-prone, perhaps trading ease-of-use for flexibility and works-everywhere. Enter GDB and shared object files (i.e. libraries).
GDB, for those who’ve never found themselves with a bug unsolvable with lots of well-placed
printf("Here\n") statements, is the GNU debugger. It’s typical use is to poke at a runnnig process for debugging, but it has one interesting feature: it can have the debugged process call library functions. There are two functions which we can use to load a library into to the program:
dlopen(3)from libdl, and
__libc_dlopen_mode, libc’s implementation. We’ll use
__libc_dlopen_mode because it doesn’t require the host process to have libdl linked in.
In principle, we could load our library and have GDB call one of its functions. Easier than that is to have the library’s constructor function do whatever we would have done manually in another thread, to keep the amount of time the process is stopped to a minimum. More below.
Caveats
Trading flexibility for ease-of-use puts a few restrictions on where and how we can inject our own code. In practice, this isn’t a problem, but there are a few gotchas to consider.
ptrace(2)
We’ll need to be able to attach to the process with
ptrace(2), which GDB uses under the hood. Root can usually do this, but as a user, we can only attach to our own processes. To make it harder, some systems only allow processes to attach to their children, which can be changed via a sysctl. Changing the sysctl requires root, so it’s not very useful in practice. Just in case:
sysctl kernel.yama.ptrace_scope=0 # or echo 0 > /proc/sys/kernel/yama/ptrace_scope
Generally, it’s better to do this as root.
Stopped Processes
When GDB attaches to a process, the process is stopped. It’s best to script GDB’s actions beforehand, either with
-x and
--batch or
echoing commands to GDB minimize the amount of time the process isn’t doing whatever it should be doing. If, for whatever reason, GDB doesn’t restart the process when it exits, sending the process
SIGCONT should do the trick.
kill -CONT <PID>
Process Death
Once our library’s loaded and running, anything that goes wrong with it (e.g. segfaults) affects the entire process. Likewise, if it writes output or sends messages to syslog, they’ll show up as coming from the process. It’s not a bad idea to use the injected library as a loader to spawn actual malware in new proceses.
On Target
With all of that in mind, let’s look at how to do it. We’ll assume ssh access to a target, though in principle this can (should) all be scripted and can be run with shell/sql/file injection or whatever other method.
Process Selection
First step is to find a process into which to inject. Let’s look at a process listing, less kernel threads:
root@ubuntu-s-1vcpu-1gb-nyc1-01:~# ps -fxo pid,user,args | egrep -v ' \[\S+\]$' PID USER COMMAND 1 root /sbin/init 625 root /lib/systemd/systemd-journald 664 root /sbin/lvmetad -f 696 root /lib/systemd/systemd-udevd 1266 root /sbin/iscsid 1267 root /sbin/iscsid 1273 root /usr/lib/accountsservice/accounts-daemon 1278 root /usr/sbin/sshd -D 1447 root \_ sshd: root@pts/1 1520 root \_ -bash 1538 root \_ ps -fxo pid,user,args 1539 root \_ grep -E --color=auto -v \[\S+\]$ 1282 root /lib/systemd/systemd-logind 1295 root /usr/bin/lxcfs /var/lib/lxcfs/ 1298 root /usr/sbin/acpid 1312 root /usr/sbin/cron -f 1316 root /usr/lib/snapd/snapd 1356 root /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog 1358 root /usr/lib/policykit-1/polkitd --no-debug 1413 root /sbin/agetty --keep-baud 115200 38400 9600 ttyS0 vt220 1415 root /sbin/agetty --noclear tty1 linux 1449 root /lib/systemd/systemd --user 1451 root \_ (sd-pam)
Some good choices in there. Ideally we’ll use a long-running process which nobody’s going to want to kill. Processes with low pids tend to work nicely, as they’re started early and nobody wants to find out what happens when they die. It’s helpful to inject into something running as root to avoid having to worry about permissions. Even better is a process that nobody wants to kill but which isn’t doing anything useful anyway.
In some cases, something short-lived, killable, and running as a user is good if the injected code only needs to run for a short time (e.g. something to survey the box, grab creds, and leave) or if there’s a good chance it’ll need to be stopped the hard way. It’s a judgement call.
We’ll use
664 root /sbin/lvmetad -f. It should be able to do anything we’d like and if something goes wrong we can restart it, probably without too much fuss.
Malware
More or less any linux shared object file can be injected. We’ll make a small one for demonstration purposes, but I’ve injected multi-megabyte backdoors written in Go as well. A lot of the fiddling that went into making this blog post was done using pcapknock.
For the sake of simplicity, we’ll use the following. Note that a lot of error handling has been elided for brevity. In practice, getting meaningful error output from injected libraries’ constructor functions isn’t as straightforward as a simple
warn("something"); return; unless you really trust the standard error of your victim process.
#include <pthread.h> #include <stdlib.h> #include <unistd.h> #define SLEEP 120 /* Time to sleep between callbacks */ #define CBADDR "<REDACTED>" /* Callback address */ #define CBPORT "4444" /* Callback port */ /* Reverse shell command */ #define CMD "echo 'exec >&/dev/tcp/"\ CBADDR "/" CBPORT "; exec 0>&1' | /bin/bash" void *callback(void *a); __attribute__((constructor)) /* Run this function on library load */ void start_callbacks(){ pthread_t tid; pthread_attr_t attr; /* Start thread detached */ if (-1 == pthread_attr_init(&attr)) { return; } if (-1 == pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED)) { return; } /* Spawn a thread to do the real work */ pthread_create(&tid, &attr, callback, NULL); } /* callback tries to spawn a reverse shell every so often. */ void * callback(void *a) { for (;;) { /* Try to spawn a reverse shell */ system(CMD); /* Wait until next shell */ sleep(SLEEP); } return NULL; }
In a nutshell, this will spawn an unencrypted, unauthenticated reverse shell to a hardcoded address and port every couple of minutes. The
__attribute__((constructor)) applied to
start_callbacks() causes it to run when the library is loaded. All
start_callbacks() does is spawn a thread to make reverse shells.
Building a library is similar to building any C program, except that
-fPIC and
-shared must be given to the compiler.
cc -O2 -fPIC -o libcallback.so ./callback.c -lpthread -shared
It’s not a bad idea to optimize the output with
-O2 to maybe consume less CPU time. Of course, on a real engagement the injected library will be significantly more complex than this example.
Injection
Now that we have the injectable library created, we can do the deed. First thing to do is start a listener to catch the callbacks:
nc -nvl 4444 #OpenBSD netcat ftw!
__libc_dlopen_mode takes two arguments, the path to the library and flags as an integer. The path to the library will be visible, so it’s best to put it somewhere inconspicuous, like
/usr/lib. We’ll use
2 for the flags, which corresponds to
dlopen(3)’s
RTLD_NOW. To get GDB to cause the process to run the function, we’ll use GDB’s
quitcommand.
root@ubuntu-s-1vcpu-1gb-nyc1-01:~# echo 'print __libc_dlopen_mode("/root/libcallback.so", 2)' | gdb -p 664 GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1 Copyright (C) 2016 Free Software Foundation, Inc. ...snip... 0x00007f6ca1cf75d3 in select () at ../sysdeps/unix/syscall-template.S:84 84 ../sysdeps/unix/syscall-template.S: No such file or directory. (gdb) [New Thread 0x7f6c9bfff700 (LWP 1590)] $1 = 312536496 (gdb) quit A debugging session is active. Inferior 1 [process 664] will be detached. Quit anyway? (y or n) [answered Y; input not from terminal] Detaching from program: /sbin/lvmetad, process 664
Checking netcat, we’ve caught the callback:
[stuart@c2server:/home/stuart] $ nc -nvl 4444 Connection from <REDACTED> 50184 received! ps -fxo pid,user,args ...snip... 664 root /sbin/lvmetad -f 1591 root \_ sh -c echo 'exec >&/dev/tcp/<REDACTED>/4444; exec 0>&1' | /bin/bash 1593 root \_ /bin/bash 1620 root \_ ps -fxo pid,user,args ...snip...
That’s it, we’ve got execution in another process.
If the injection had failed, we’d have seen
$1 = 0, indicating
__libc_dlopen_mode returned
NULL.
Artifacts
There are several places defenders might catch us. The risk of detection can be minimized to a certain extent, but without a rootkit, there’s always some way to see we’ve done something. Of course, the best way to hide is to not raise suspicions in the first place.
Process listing
A process listing like the one above will show that the process into which we’ve injected malware has funny child processes. This can be avoided by either having the library doule-fork a child process to do the actual work or having the injected library do everything from within the victim process.
Files on disk
The loaded library has to start on disk, which leaves disk artifacts, and the original path to the library is visible in
/proc/pid/maps:
root@ubuntu-s-1vcpu-1gb-nyc1-01:~# cat /proc/664/maps ...snip... 7f6ca0650000-7f6ca0651000 r-xp 00000000 fd:01 61077 /root/libcallback.so 7f6ca0651000-7f6ca0850000 ---p 00001000 fd:01 61077 /root/libcallback.so 7f6ca0850000-7f6ca0851000 r--p 00000000 fd:01 61077 /root/libcallback.so 7f6ca0851000-7f6ca0852000 rw-p 00001000 fd:01 61077 /root/libcallback.so ...snip...
If we delete the library,
(deleted) is appended to the filename (i.e.
/root/libcallback.so (deleted)), which looks even weirder. This is somewhat mitigated by putting the library somewhere libraries normally live, like
/usr/lib, and naming it something normal-looking.
Service disruption
Loading the library stops the running process for a short amount of time, and if the library causes process instability, it may crash the process or at least cause it to log warning messages (on a related note, don’t inject into
systemd(1), it causes segfaults and makes
shutdown(8) hang the box).
Process injection on Linux is reasonably easy:
- Write a library (shared object file) with a constructor.
- Load it with
echo 'print __libc_dlopen_mode("/path/to/library.so", 2)' | gdb -p <PID> | https://movaxbx.ru/2018/04/02/process-injection-with-gdb/ | CC-MAIN-2019-26 | refinedweb | 1,822 | 61.87 |
"Richard W.M. Jones" <rjones redhat com> wrote: > With the attached patch you can get all the way through a compile of > libvirt using the MinGW cross-compiler. ... > Index: configure.in > =================================================================== ... > -AC_CHECK_FUNCS([cfmakeraw regexec uname sched_getaffinity]) > +AC_CHECK_FUNCS([cfmakeraw regexec uname sched_getaffinity ntohl htonl ntohs htons]) Hi Rich, That looks fine. Two suggestions and a question: If you add those on a separate line, not only will these related checks be all by themselves (less risk of eventual conflict, however small), you'll also avoid going over the 80-col line-length limit ;-) AC_CHECK_FUNCS([ntohl htonl ntohs htons]) ... > Index: include/byteswap.h > =================================================================== ... > +#ifndef _PORTABLEXDR_BYTESWAP_H > +#define _PORTABLEXDR_BYTESWAP_H 1 A different name file name might be nice, so this file is not confused (by people) with the system-provided <byteswap.h>. Maybe byteswap-pxdr.h or something similar. ... > +#if BYTE_ORDER == BIG_ENDIAN > + return x; > +#elif BYTE_ORDER == LITTLE_ENDIAN > + return __bswap_32 (x); > +#else > +# error "What kind of system is this?" > +#endif Where is BYTE_ORDER defined? normally in endian.h. More curiosity than anything, since I'm sure it works everywhere you built it. I sort of expected to see an "#include <endian.h>" somewhere. | https://www.redhat.com/archives/libvir-list/2008-July/msg00108.html | CC-MAIN-2016-40 | refinedweb | 188 | 54.22 |
wcstod - convert a wide-character string to a double-precision number
#include <wchar.h> double wcstod(const wchar_t *nptr, wchar_t **endptr);
The wcstod() function converts the initial portion of the wide-character string pointed to by nptr to double representation. First it decomposes the input wide-character string into three parts: an initial, possibly empty, sequence of white-space wide-character codes (as specified by iswspace()); a subject sequence interpreted as a floating-point constant; and a final wide-character string of one or more unrecognised wide-character codes, including the terminating null wide-character code of the input wide-character string. Then it attempts to convert the subject sequence to a floating-point number, and returns the result.
The expected form of the subject sequence is an optional + or - sign, then a non-empty sequence of digits optionally containing a radix, then an optional exponent part. An exponent part consists of e or E, followed by an optional sign, followed by one or more decimal digits. codes, or if the first wide-character code that is not white space other than a sign, a digit or a radix.
If the subject sequence has the expected form, the sequence of wide-character codes starting with the first digit or the radix (whichever occurs first) is interpreted as a floating constant as defined in the C language, except that the radix is used in place of a period, and that if neither an exponent part nor a radix appears, a radix is assumed to follow the last digit in the wide-character string. If the subject sequence begins with a minus sign, the value resulting from the conversion is negated. A pointer to the final wide-character string is stored in the object pointed to by endptr, provided that endptr is not a null pointer.
The radix is defined in the program's locale (category LC_NUMERIC). In the POSIX locale, or in a locale where the radix is not defined, the radix defaults to a period (.).
In other than the POSIX locale, other wcstod() function will not change the setting of errno if successful.
Because 0 is returned on error and is also a valid return on success, an application wishing to check for error situations should set errno to 0, then call wcstod(), then check errno.
The wcstod() function returns the converted value, if any. underflow, 0 is returned and errno is set to [ERANGE] .
The wcstod() function will fail if:
- [ERANGE]
- The value to be returned would cause overflow or underflow.
The wcstod() function may fail if:
- [EINVAL]
- No conversion could be performed.
None.
None.
None.
iswspace(), localeconv(), scanf(), setlocale(), wcstol(), <wchar.h>, the XBD specification, Locale .
Derived from the MSE working draft. | http://pubs.opengroup.org/onlinepubs/007908799/xsh/wcstod.html | CC-MAIN-2019-22 | refinedweb | 454 | 50.57 |
Here is a complete simple working example
import multiprocessing as mp
import time
import random
class Foo:
def __init__(self):
# some expensive set up function in the real code
self.x = 2
print('initializing')
def run(self, y):
time.sleep(random.random() / 10.)
return self.x + y
def f(y):
foo = Foo()
return foo.run(y)
def main():
pool = mp.Pool(4)
for result in pool.map(f, range(10)):
print(result)
pool.close()
pool.join()
if __name__ == '__main__':
main()
The intended way to deal with things like this is via the optional
initializer and
initargs arguments to the
Pool() constructor. They exist precisely to give you a way to do stuff exactly once when a worker process is created. So, e.g., add:
def init(): global foo foo = Foo()
and change the
Pool creation to:
pool = mp.Pool(4, initializer=init)
If you needed to pass arguments to your per-process initialization function, then you'd also add an appropriate
initargs=... argument.
Note: of course you should also remove the
foo = Foo()
line from
f(), so that your function uses the global
foo created by
init(). | https://codedump.io/share/1eptCTGTknlA/1/python-multiprocessing-pool-with-expensive-initialization | CC-MAIN-2016-44 | refinedweb | 187 | 65.01 |
Quick start
Here’s a quick example of how to use HOM to accomplish a simple task in Houdini. Don’t worry if you don’t understand the details of this example – it will give you a favor of what scripting Houdini is like.
Choose Windows > Python Shell to open an interactive Python Shell window.
# Print out a tree of all the nodes in the scene: >>> def print_tree(node, indent=0): ... for child in node.children(): ... print " " * indent + child.name() ... print_tree(child, indent + 3) ... # Press Enter to finish the definition >>> print_tree(hou.node('/')) obj cam1 file1 properties standard out part ch shop img img1 vex
Getting Started
When you open Houdini’s Python shell, you’ll notice it greets you with
the
>>> prompt and waits for you to enter Python expressions or
statements. Even if you don’t plan on writing large Python scripts, the
Python shell is invaluable as a handy calculator:
>>> 2 + 2 4 >>> 0.03 * 25.1 0.753 >>> min(hou.frame(), 7) * 3 3.0
What is hou.frame(), you might ask? Houdini’s Python API is implemented in
a module named
hou, short for Houdini. Just like
os.getcwd is a function
in the
os module,
hou.frame is a function in the
hou module, and it
returns the current frame number. Note that you don’t need to write
import
hou to use the hou module, since Houdini automatically imports the hou module
when it starts up.
Press Ctrl+D to close a floating Python shell window. See the main menu for the shortcut to open the floating Python shell window.
Python shells can be inside panes if you don’t want to use a floating window.
In the Python shell, Home and Ctrl+A will move to the beginning of the line, End and Ctrl+E will move to the end, and up and down will navigate through the history.
You can’t use Ctrl+C to copy from the Python shell, since Ctrl+C will send a KeyboardInterrupt exception. To copy text from a Python shell, right-click and select Copy.
Use the hou.hipFile submodule to save/load the current session to/from hip files. Note that hou.hipFile.load will throw a hou.LoadWarning exception if there were warnings, even though the file was loaded successfully. The following code will print out warnings and continue the rest of the script.
# Print out load warnings, but continue on a successful load. try: hou.hipFile.load("myfile.hip") except hou.LoadWarning, e: print e
Accessing Nodes
Because Houdini is designed around nodes (e.g. SOPs, DOPs, Object nodes, etc.), you're likely to manipulate them in scripts. Here’s a brief primer to get started.
The hou.node function takes a path to a node and returns a hou.Node object, or None if the path is invalid.
# Empty out the current session. >>> hou.hipFile.clear() >>> hou.node('/obj') <hou.Node at /obj> >>> # hou.node returned a hou.Node object corresponding to the /obj node >>> n = hou.node('/asdfasdf') >>> # The node path was invalid, so n will be the None object. >>> print n None >>> g = hou.node('/obj').createNode('geo') >>> g <hou.ObjNode of type geo at /obj/geo1> >>> # g is hou.Node object corresponding to the newly created /obj/geo1 node. >>> # Note that g is actually a hou.ObjNode instance, which is a subclass of >>> # hou.Node. >>> # The parm method on hou.Node objects returns a hou.Parm object (or None >>> # if the parameter name is invalid). >>> tx = g.parm('tx') >>> tx <hou.Parm tx in /obj/geo1> >>> # Evaluate the parameter and change its value. >>> tx.eval() 0.0 >>> tx.set(3.5) >>> tx.eval() 3.5 >>> hou.node('/obj/geo1').parm('tx').eval() 3.5 >>> # hou.parm is a shortcut to access a parm directly. >>> hou.parm('/obj/geo1/tx').eval() 3.5 >>> # hou.evalParm is a shortcut to evaluate a parameter. >>> hou.evalParm('/obj/geo1/tx') 3.5 >>> # hou.ch is exactly the same as hou.evalParm. >>> hou.ch('/obj/geo1/tx') 3.5 >>> # hou.Parm.name returns the name of the parameter, and hou.Node.parms >>> # Returns a tuple of all the Node's parameters. 
>>> [p.name() for p in g.parms()] ['stdswitcher1', 'stdswitcher2', 'stdswitcher3', 'stdswitcher4', 'keeppos', 'pre_xform', 'xOrd', 'rOrd', 'tx', 'ty', 'tz', 'rx', 'ry', 'rz', 'sx', 'sy', 'sz', 'px', 'py', 'pz', 'scale', 'lookatpath', 'lookup', 'pathobjpath', 'roll', 'pos', 'uparmtype', 'pathorient', 'upx', 'upy', 'upz', 'bank', 'shop_materialpath', 'shop_materialopts', 'tdisplay', 'display', 'use_dcolor', 'dcolorr', 'dcolorg', 'dcolorb', 'picking', 'pickscript', 'caching', 'vport_shadeopen', 'vport_displayassubdiv', 'vm_phantom', 'vm_renderable', 'folder01', 'folder02', 'folder03', 'folder04', 'categories', 'reflectmask', 'lightmask', 'geo_velocityblur', 'vm_shadingquality', 'vm_rayshadingquality', 'vm_rmbackface', 'shop_geometrypath', 'vm_rendersubd', 'vm_renderpoints', 'vm_metavolume', 'vm_coving', 'vm_computeN'] >>> # hou.Parm tuples correspond to parameter groupings: >>> t = g.parmTuple('t') >>> t <hou.ParmTuple t in /obj/geo1> >>> tuple(t) (<hou.Parm tx in /obj/geo1>, <hou.Parm ty in /obj/geo1>, <hou.Parm tz in /obj/geo1>) >>> t.eval() (3.5, 0.0, 0.0) >>> t.set((1.0, 2.0, 3.0)) >>> t.eval() (1.0, 2.0, 3.0) >>> # Build a simple sop network. >>> hou.hipFile.clear() >>> geo = hou.node('/obj').createNode('geo') >>> box = geo.createNode('box') >>> subd = geo.createNode('subdivide') >>> subd.parm('iterations').set(3) >>> subd.setFirstInput(box) >>> subd.moveToGoodPosition() # Move the node tiles to avoid overlaps. >>> subd.setDisplayFlag(True) >>> subd.setRenderFlag(True) >>> subd.setCurrent(True, clear_all_selected=True)
Working with Animated Parameters and Keyframes
When you hear the term “animated parameter”, you typically think of keyframed
values and bezier curves and the animation graph editor. Recall from earlier,
though, that parameters with expressions are also considered animated
parameters. All animated parameters have at least one keyframe, and each
keyframe has an expression. Typical parameters with expressions simply have
one keyframe whose expression is something like
sin($F) or
cos(time()),
while typical animation curves have multiple keyframes whose expressions are
something like
bezier().
So how does a function like
bezier() evaluate to different values at
different times? Clearly there are no parameters passed to bezier that
vary from time to time, and there are no keyframe or slope values passed
in. The answer is that keyframes store more than just an expression.
A keyframe stores those values, slopes, and accelerations, and certain
functions, like bezier, access those values for the current keyframe and
the next one. For keyframes with expressions like
sin($F), those
extra values are not set and are not used.
Each keyframe has an associated time. Using that time and the number of frames per second, you can derive the keyframe’s frame. You can think of the expression as being active between keyframes: Houdini evaluates the expression between its keyframe and the next keyframe. If there is no next keyframe, most animation functions (e.g. bezier, cubic, etc.) simply evaluate to their keyframe’s value. For the times before the first keyframe, the parameter evaluates to the value at the first keyframe’s time.
hou.Parm.keyframes values, slopes, and accelerations
If you set the in value and the (out) value is not set, it will be set to the same value. Setting the in value breaks the tie between the values. If neither of the in or (out) values are set, they are considered tied.
for example, to set a keyframe with the current value and slope, do not set the value or slope in the keyframe
or, to automatically determine the slopes, set a keyframe with the slope not set
times and expressions
in and out/values
tied values
asCode()
same syntax between Hscript expressions and Python
Working with Objects and Transformations
worldTransform(), setWorldTransform()
matrices, exploding
column vectors for transforms (p T1 T2), not (T2 T1 p)
see the object_xform cookbook example
Tips
Drag a node from the network editor into the Python shell to paste a hou.node expression. You may find this easier if the Python shell is inside a pane.
Use variables to store hou.Node, hou.Parm, and hou.ParmTuple objects instead of calling hou.node and hou.parm over and over again.
Use the output from hou.Node.asCode to help learn the parts of the HOM API that create nodes and set parameters and keyframes. | http://www.sidefx.com/docs/houdini12.0/hom/intro | CC-MAIN-2013-48 | refinedweb | 1,349 | 68.06 |
Key Takeaways
- Languages that run in the browser should precompile to JavaScript isomorphically for compactness, execution speed and development speed
- For efficient cooperation on a large web application, module boundaries should coincide with team boundaries
- Modules may have dynamic typing on the inside, but should have static typing on the outside
- Having the same technology on the client and on the server promotes scalability
- The future of Python in the browser is tied to the future of Python in general, not so much to a particular implementation
Featuring a diversity of programming languages, backend technology offers the right tool for any kind of job. At the frontend, however, it's one size fits all: JavaScript. Someone with only a hammer will have to treat anything like a nail. One attempt to break open this restricted world is represented by the growing set of source to source compilers that target JavaScript. Such compilers are available for languages as diverse as Scala, C++, Ruby, and Python. The Transcrypt Python to JavaScript compiler is a relatively new open source project, aiming at executing Python 3.6 at JavaScript speed, with comparable file sizes.
For a tool like this to offer an attractive alternative to everyday web development in JavaScript, at least the following three demands have to be met:
- From a user point of view, web sites and web applications created with it should be indistinguishable with regard to look and feel, page load time, page startup time and sustained speed
- From a developer point of view, it should allow seamless access to any JavaScript library, efficient debugging and the opportunity to capitalize on existing skills
- From a business point of view, it should offer continuity, availability of a large pool of professionally trained developers, a good ratio of created functionality to invested hours and a resulting application open to changing needs
To be successful, all aspects of these three requirements have to be met. Different compilers strike a different balance between them, but no viable compiler for every day production use can neglect any of them. For Transcrypt, each of the above three points has led to certain design decisions.
Demand 1:
Look and feel of web sites and web applications are directly connected to the underlying JavaScript libraries used, so to have exactly the same look and feel, a site or application should use exactly the same libraries.
Although fast connections may hide the differences, achieving the same page load time, even on mobile devices running on public networks, mandates having roughly the same code size. This rules out downloading a compiler, virtual machine or large runtime at each new page load.
Achieving the same startup time as pages utilizing native JavaScript is only possible if the code is statically precompiled to JavaScript on the server. The larger the amount of code needed for a certain page, the more obvious the difference becomes.
To have the same sustained speed, the generated JavaScript must be efficient. Since JavaScript virtual machines are highly optimized for common coding patterns, the generated JavaScript should be similar to handwritten JavaScript, rather than emulating a stack machine or any other low level abstraction.
Demand 2:
To allow seamless access to any JavaScript library, Python and JavaScript have to use unified data formats, a unified calling model, and a unified object model. The latter requires the JavaScript prototype based single inheritance mechanism to somehow gel with Python’s class based multiple inheritance. Note that the recent addition of the keyword 'class' to JavaScript has no impact on the need to bridge this fundamental difference.
To enable efficient debugging, things like setting breakpoints and single-stepping through code have to be done at the source level. In other words: source maps are necessary. Whenever a problem is encountered, it must be possible to inspect and comprehend the generated JavaScript to pinpoint exactly what's going on. To this end, the generated JavaScript should be isomorphic to the Python source code.
The ability to capitalize on existing skills means that the source code has to be pure Python, not some syntactic variation. A robust way to achieve this is to use Python's native parser. The same holds for semantics, a requirement that poses practical problems and requires introduction of compiler directives to maintain runtime efficiency.
Demand 3:
Continuity is needed to protect investments in client side Python code, requiring continued availability of client side Python compilers with both good conformance and good performance. Striking the right balance between these two is the most critical part of designing a compiler.
Continued availability of trained Python developers is sufficiently warranted by the fact that Python has been the number 1 language taught in introductory computer science courses for three consecutive years now. On the backend it is used for every conceivable branch of computing. All these developers, used to designing large, long lived systems rather than insulated, short lived pieces of frontend script code, become available to browser programming if it is done in Python.
With regard to productivity, many developers that have made the switch from a different programming language to Python agree that it has significantly increased their output while retaining runtime performance. The latter is due to the fact that libraries used by Python applications for time critical operations like numerical processing and 3D graphics usually compile to native machine code.
The last point – openness to changed needs – means that modularity and flexibility have to be supported at every level. The presence, right from the start, of class-based OO with multiple inheritance and a sophisticated module and package mechanism has contributed to this. In addition, the ability to use named and default parameters allows developers to change call signatures at a late stage without breaking existing code.
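The point about named and default parameters can be made concrete with a small hypothetical API: a parameter added later, with a default value, leaves every existing call site intact.

```python
# Version 1 of a hypothetical API shipped as: def draw(shape): ...
# Version 2 adds parameters without breaking version-1 callers:
def draw(shape, color='black', line_width=1):
    return '{} in {}, width {}'.format(shape, color, line_width)

# Old call sites keep working unchanged:
print(draw('circle'))                             # circle in black, width 1

# New call sites pass keywords, so argument order cannot silently break:
print(draw('square', line_width=3, color='red'))  # square in red, width 3
```

Keyword arguments also document intent at the call site, which matters in long-lived systems where signatures drift over time.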
Conformance versus performance: language convergence to the rescue
Many Python constructs closely match JavaScript constructs, especially when translating to newer versions of JavaScript. There's a clear convergence between both languages. Specifically, more and more elements of Python make their way into JavaScript: for ... of ..., classes (in a limited form), modules, destructuring assignment and argument spreading. Since constructs like for ... of ... are highly optimized on modern JavaScript virtual machines, it's advantageous to translate such Python constructs to closely matching JavaScript constructs. Such isomorphic translation will result in code that can benefit from optimizations in the target language. It will also result in JavaScript code that is easy to read and debug.
Although with Transcrypt, through the presence of source maps, most debugging will take place stepping through Python rather than JavaScript code, a tool should not conceal but rather reveal the underlying technology, granting the developer full access to 'what's actually going on'. This is even more desirable since native JavaScript code can be inserted at any point in the Python source, using a compiler directive.
The isomorphism between Python and the JavaScript code generated by Transcrypt is illustrated by the following fragment using multiple inheritance.
class C (A, B):
    def __init__ (self, x, y):
        A.__init__ (self, x)
        B.__init__ (self, y)

    def show (self, label):
        A.show (self, label)
        B.show (self, label)
translates to:
var C = __class__ ('C', [A, B], {
    get __init__ () {return __get__ (this, function (self, x, y) {
        A.__init__ (self, x);
        B.__init__ (self, y);
    });},
    get show () {return __get__ (this, function (self, label) {
        A.show (self, label);
        B.show (self, label);
    });}
});
Striving for isomorphic translation has limitations, rooted in subtle but sometimes hard to overcome differences between the two languages. Whereas Python allows lists to be concatenated with the + operator, isomorphic use of this operator in JavaScript results in both lists being converted to strings and then glued together. Of course a + b could be translated to __add__ (a, b), but since the types of a and b are determined at runtime, this would result in a function call and dynamic type inspection code being generated for something as simple as 1 + 1, resulting in bad performance for computations in inner loops. Another example is Python's interpretation of 'truthyness'. The boolean value of an empty list is True (or rather: true) in JavaScript and False in Python. Dealing with this globally in an application would require every if-statement to feature a conversion, since in the Python construct if a: it cannot be predicted whether a holds a boolean or something else like a list. So if a: would have to be translated to if (__istrue__ (a)), again resulting in slow performance if used in inner loops.
In Transcrypt, compiler directives embedded in the code (pragmas) are used to control compilation of such constructs locally. This enables writing matrix computations using standard mathematical notation like M4 = (M1 + M2) * M3, while at the same time not generating any overhead for something like perimeter = 2 * pi * radius. Syntactically, pragmas are just calls to the __pragma__ function, executed at compile time rather than at run time. Importing a stub module containing def __pragma__ (directive, parameters): pass allows this code to run on CPython as well, without modification. Alternatively, pragmas can be placed in comments.
Unifying the type system while avoiding name clashes
Another fundamental design choice for Transcrypt was to unify the Python and the JavaScript type system, rather than have them live next to each other, converting between them on the fly. Data conversion costs time and increases target code size as well as memory use. It burdens the garbage collector and makes interaction between Python code and JavaScript libraries cumbersome.
So the decision was made to embrace the JavaScript world, rather than to create a parallel universe. A simple example of this is the following code using the Plotly.js library:
__pragma__ ('jskeys')  # For convenience, allow JS style unquoted string literals as dictionary keys

import random
import math
import itertools

xValues = [2 * math.pi * step / 200 for step in range (201)]
yValuesList = [
    [math.sin (xValue) + 0.5 * math.sin (xValue * 3 + 0.25 * math.sin (xValue * 5)) for xValue in xValues],
    [1 if xValue <= math.pi else -1 for xValue in xValues]
]

kind = 'linear'

Plotly.plot (
    kind,
    [{x: xValues, y: yValues} for yValues in yValuesList],
    {
        title: kind,
        xaxis: {title: 'U (t) [V]'},
        yaxis: {title: 't [s]'}
    }
)
Apart from the pragma that allows leaving out the quotes from dictionary keys, which is optional and only used for convenience, the code looks a lot like comparable JavaScript code. Note the (optional) use of list comprehensions, a facility JavaScript still lacks. The fact that Python dictionary literals are mapped to JavaScript object literals is of no concern to the developer; they can use the Plotly JavaScript documentation while writing Python code. No conversion is done behind the scenes. A Transcrypt dict IS a JavaScript object, in all cases.
In unifying the type systems, name clashes occur. Python and JavaScript strings both have a split (), but their semantics have important differences. There are many cases of such clashes and, since both Python and JavaScript are evolving, future clashes are to be expected.
To deal with these, Transcrypt supports the notion of aliases. Whenever <string>.split is used in Python, it is translated to <string>.py_split, a JavaScript function having Python split semantics. In native JavaScript code, split will refer to the native JavaScript split function, as it should. The native JavaScript split method can also be called from Python, where it is named js_split. While many aliases like these are predefined in Transcrypt, the developer can define new aliases and undefine existing ones. In this way any name clash resulting from the unified type system can be resolved without runtime penalty, since aliases do their work at compile time.
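The split () clash can be demonstrated from the Python side alone. Plain CPython shows the two behaviors that py_split and js_split must keep apart; calling str.split with an explicit single-space separator is close to what JavaScript's split (' ') does:

```python
s = '  a  b '
print (s.split ())      # ['a', 'b'] – Python semantics: runs of whitespace collapsed
print (s.split (' '))   # ['', '', 'a', '', 'b', ''] – empty pieces kept, as JS split (' ') would
```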
Aliases also allow generation of any JavaScript identifier from a Python identifier. An example is the $ character, which is allowed as part of a name in JavaScript but forbidden in Python. Transcrypt strictly conforms to Python syntax: sources are parsed by the native CPython parser, so the syntax is identical. A piece of code using JQuery may look as follows:
__pragma__ ('alias', 'S', '$')

def start ():
    def changeColors ():
        for div in S__divs:
            S (div) .css ({
                'color': 'rgb({},{},{})'.format (* [int (256 * Math.random ()) for i in range (3)]),
            })

    S__divs = S ('div')
    changeColors ()
    window.setInterval (changeColors, 500)
Since Transcrypt uses compilation rather than interpretation, imports have to be decided upon at compile time, to allow joint minification and shipment of all modules involved. To this end C-style conditional compilation is supported, as can be seen in the following code fragment:
__pragma__ ('ifdef', '__py3.6__')
import dashed_numbers_test  # Import only for Python 3.6, that supports them
__pragma__ ('endif')
The same mechanism is used in the Transcrypt runtime to switch between JavaScript 5 and JavaScript 6 code:
__pragma__ ('ifdef', '__esv6__')
for (let aClass of classinfo) {
__pragma__ ('else')
for (var index = 0; index < classinfo.length; index++) {
    var aClass = classinfo [index];
__pragma__ ('endif')
In this way optimizations in newer JavaScript versions are taken into account, retaining backward compatibility. In some cases, the possibility for optimization is preferred over isomorphism:
# Translate i += 1 to i++ and i -= 1 to i--
if type (node.value) == ast.Num and node.value.n == 1:
    if type (node.op) == ast.Add:
        self.emit ('++')
        return
    elif type (node.op) == ast.Sub:
        self.emit ('--')
        return
Some optimizations are optional, such as the possibility to activate call caching, resulting in repeated calls to inherited methods being done directly, rather than through the prototype chain.
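Call caching can be sketched in plain Python. The idea is the same – bind an inherited method once instead of resolving it on every call – though Transcrypt applies it to the JavaScript prototype chain; this fragment is an illustration, not Transcrypt's generated code:

```python
class Base:
    def work (self):
        return 42

class Derived (Base):
    pass

d = Derived ()
cached_work = d.work    # resolve the inherited method through the hierarchy once
print (cached_work ())  # subsequent calls skip the repeated lookup
```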
Static versus dynamic typing: Scripting languages growing mature
There has been a resurgence in appreciation of the benefits of static typing, with TypeScript being the best known example. In Python, as opposed to JavaScript, static typing syntax is an integral part of the language and supported by the native parser. Type checking itself, however, is left to third party tools, most notably mypy, a project by Jukka Lehtosalo with regular contributions from Python initiator Guido van Rossum. To enable efficient use of mypy in Transcrypt, the Transcrypt team contributed a lightweight API to the project that makes it possible to activate mypy from another Python application without going through the operating system. Although mypy is still under development, it already catches an impressive amount of typing errors at compile time. Static type checking is optional and can be activated locally by inserting standard type annotations. A trivial example of the use of such annotations is the mypy in-process API itself:
def run(params: List[str]) -> Tuple[str, str, int]:
    sys.argv = [''] + params
    old_stdout = sys.stdout
    new_stdout = StringIO()
    sys.stdout = new_stdout
    old_stderr = sys.stderr
    new_stderr = StringIO()
    sys.stderr = new_stderr
    try:
        main(None)
        exit_status = 0
    except SystemExit as system_exit:
        exit_status = system_exit.code
    sys.stdout = old_stdout
    sys.stderr = old_stderr
    return new_stdout.getvalue(), new_stderr.getvalue(), exit_status
As illustrated by the example, static typing can be applied where appropriate, in this case in the signature of the run function, since that is the part of the API module that can be seen from the outside by other developers. If anyone misinterprets the parameter types or the return type of the API, mypy will generate a clear error message, referring to the file and line number where the mismatch occurs.
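A hypothetical annotated function of our own shows the kind of contract mypy verifies; the function is invented for illustration:

```python
from typing import List, Tuple

def mean_and_count (xs: List [float]) -> Tuple [float, int]:
    # mypy checks that every caller passes a list of floats and uses
    # the (mean, count) result consistently – at compile time, not run time
    return sum (xs) / len (xs), len (xs)

print (mean_and_count ([1.0, 2.0, 3.0]))    # (2.0, 3)
```

A caller passing, say, a string would be flagged by mypy before the program ever runs, while CPython itself would only fail at run time.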
The concept of dynamic typing remains central to languages like Python and JavaScript, because it allows for flexible data structures and helps to reduce the amount of source code needed to perform a certain task. Source code size is important, because to understand and maintain source code, the first thing that has to happen is to read through it. In that sense, 100 kB of Python source code offers a direct advantage over 300 kB of C++ source that has the same functionality, but without the hard to read type definitions using templates, explicit type inspection and conversion code, overloaded constructors and other overloaded methods, abstract base classes to deal with polymorphic data structures and type dependent branching.
For small scripts well below 100kB source code and written by one person, dynamic typing seems to only have advantages. Very little planning and design are needed; everything just falls into place while programming. But when applications grow larger and are no longer built by individuals but by teams, the balance changes. For such applications, featuring more than roughly 200kB source code, the lack of compile time type checking has the following consequences:
- Many errors are only caught at runtime, often late in the process, making remedies more expensive, since they influence more code already written.
- Module interfaces tend to be open to several interpretations, due to the lack of type information they carry. This means that more development time is consumed by consultation between team members to establish the correct use of a module API.
- Especially when working with a large team, dynamically typed interfaces can lead to unwanted coupling of design decisions taken in distinct modules. Thin, well specified interfaces become a necessity.
An interface featuring even one parameter that may refer to a complex, dynamically typed object structure cannot be considered sufficiently stable to warrant separation of concerns. While this type of 'who did what, why and when' programming accounts for tremendous flexibility, it also accounts for design decisions being postponed to the very last moment, impacting large amounts of already written code and requiring extensive modifications.
The 'coupling and cohesion' paradigm applies. It's OK for modules to have strong coupling of design decisions on the inside. But between modules there should preferably be loose coupling: a design decision to change the inner workings of one module should not influence the others. In general, this leads to the following rules of thumb for the choice between dynamic and static typing.
- Inside a particular module design decisions are allowed to be coupled. Designing it as a cohesive entity will result in less source code to read through and ease experimentation with different implementations. Dynamic typing is an effective means to this end, imposing minimum design time overhead at maximum flexibility.
- On the boundaries between modules, developers will have to draw up stable 'contracts' on exactly what information to exchange. In this way they can work in parallel without constant deliberation and aim for a fixed, rather than a moving, target. Static typing fits the bill here, allowing formal, machine-validated agreement upon which information crosses the API.
So while the current surge in static typing may seem like a regression, it isn't. Dynamic typing has earned its place and it won't go away. The opposite is also true: even a traditionally statically typed language like C# has absorbed dynamic typing concepts. But with the complexity of applications written in languages like JavaScript and Python growing, effective modularization, cooperation and unit validation strategies gain importance. Scripting languages are coming of age.
Why choose Python over JavaScript on the client?
Due to the immense popularity of programming for the web, JavaScript has drawn lots of attention and investment. There are clear advantages in having the same language on the client and on the server. An important advantage is that it becomes possible to move code from server to client in a late stage, when an application is upscaled.
Another advantage is unity of concept, allowing developers to work both on the front end and the back end without constantly switching between technologies. The desirability of decreasing the conceptual distance between the client and server parts of an application has resulted in the popularity of a platform like Node.js. But at the same time, it carries the risk of expanding the 'one size fits all' reality of current web client programming to the server. JavaScript is considered a good enough language by many. Recent versions finally start to support features like class based OO (albeit in the form of a thin varnish over its prototyping guts), modules and namespaces. With the advent of TypeScript, the use of strict typing is possible, though incorporating it in the language standard is probably some years away.
But even with these features, JavaScript isn't going to be the one language to end all languages. A camel may resemble a horse designed by a committee, but it never becomes one. What the browser language market needs, in fact what any free market needs, is diversity. It means that the right tool can be picked for the job at hand. Hammers for nails, and screwdrivers for screws. Python was designed with clean, concise readability in mind right from the start. The value of that shouldn't be underestimated.
JavaScript will probably be the choice of the masses in programming the client for a long time to come. But for those who consider the alternative, what matters to continuity is the momentum behind a language, as opposed to an implementation of that language. So the most important choice is not which implementation to use, but which language to choose. In that light Python is an effective and safe choice. Python has a huge mindshare, and there's a growing number of browser implementations for it, approaching the gold standard of CPython ever more closely while retaining performance.
While new implementations may supersede existing ones, this process is guided by a centrally guarded consensus over what the Python language should entail. Switching to another implementation will always be easier than switching to the next JavaScript library hype or preprocessor with proprietary syntax to deal with its shortcomings. Looking at the situation in the well-established server world, it is to be expected that multiple client side Python implementations will continue to exist side by side in healthy competition. The winner here is the language itself: Python in the browser is there to stay.
About the Author
Jacques de Hooge MSc is a C++/Python developer living in Rotterdam, the Netherlands. After graduating from the Delft University of Technology, department of Information Theory, he started his own company, GEATEC engineering, specializing in Realtime Controls, Scientific Computation, Oil and Gas Prospecting and Medical Imaging. He is a part-time teacher at the Rotterdam University of Applied Sciences, where he teaches C++, Python, Image Processing, Artificial Intelligence, Robotics, Realtime Embedded Systems and Linear Algebra. Currently he's developing cardiological research software for the Erasmus University in Rotterdam. Also he is the initiator and the lead designer of the Transcrypt open source project.
Hey, World. Recently I've cracked open a book dealing with java for beginners. Each page my fingers turned, there was a problem to be solved, it led to brain busters, teasers, and much more. Unfortunately for me today, I've crossed a puzzle that I cannot crack. Here's my code:
class Movie { String title; String genre; int rating; void playIt() { System.out.println("Playing the movie"); } } public class MovieTestDrive { public static void main (String[] args){ Movie one = new Movie (); one.title = "Gone with the Stock"; one.genre = "Tragic"; one.rating = -2; Movie two = new Movie (); two.title = "Lost in Cubicle Space"; two.genre = "Comedy"; two.rating = 5; two.playIt(); Movie three = new Movie (); three.title = "Btye Club"; three.genre = "Tragic but ultimately uplifting"; three.rating = 127; } }
The problem is that I don't think it's working correctly, what exactly is this example in this book is trying to make me understand? The only result that I'm getting is "playing the movie." This is my first time working with a main class and sub class. Anyway, please explain this to me as you would a uneducated 10 year old. | http://www.javaprogrammingforums.com/whats-wrong-my-code/29149-help-testing-objects.html | CC-MAIN-2015-06 | refinedweb | 192 | 78.65 |
I’m going to take a break from my last couple of posts which have been about time-series analysis in python. I’m switching gears this week to answer a question my brother asked me today at a party. I couldn’t really give him a satisfactory answer, because we were at a party. He asked me, “I’ve been studying some linear algebra. It’s a really cool subject, but it seems very abstract. I’m just having a hard time seeing how it can be applied. How does it get used in data science?” First I struggled to not do a spit take, because linear algebra is probably the most useful mathematics that I have learned to date. After that urge had passed, I fumbled around trying to verbally explain how you can use linear algebra.
Okay, so for anyone that has studied any data science, and/or machine learning, knows that linear algebra is crazy useful. But I can see where somebody who does not have the background that I do might be confused. So in this brief tutorial I’m going to give a couple of examples of how linear algebra can be useful. And we’ll write some python to make things abundantly clear.
Writing things down that humans can read
Alright, so this really isn’t an application to data science per se, but I think that it is an important point. Linear algebra let’s us talk about really complicated things. It gives us a compact way of expressing ideas that otherwise would be tedious to try to figure out what was going on. Let me give you an example. We can write a set of equations like this:
This is the formula for a linear regression, assuming that N is the set of indexes in your dataset. It looks pretty terrible. It is readable, but it hurts your eyes, and it makes it hard to really get a sense of what is going on, unless you are really familiar with this type of notation. Linear algebra can step in and simplify this notation.
Isn’t that just cleaner and easier to read? Y is equal to X times b. Where Y is the vector of response variables, X is the matrix of covariates and b is the vector of parameters. We now have linear algebra depiction of the same equation. But this notation just makes things simpler to read, as long as you know what is going on under the hood. But it simplifies things.
Machine Learning is Mostly Mucking About With Linear Algebra
So OLS is probably the most simplistic machine learning model, and it amounts to finding the beta values. Okay, so anyone that has seen how to derive parameter estimates for OLS is going to know that I am cheating a little bit in this section, because I am not going to do any calculus. If you don't do the calculus, you can't guarantee that this will work, but OLS is simple enough that I can get away with what I am going to do in this section. And I know that it is technically the incorrect process, so no hate mail please.
The way that this works is that we are simply going to take the equation for a linear regression in matrix form and rearrange it to solve for the beta coefficients. It turns out that if we do this, we will get the formula that we would get by doing the calculus. That happens mainly because the model is linear. But in general it wouldn't work. We just get lucky that doing so gives us the right answer. My point is that linear algebra is what machine learning is doing under the hood.
Here’s the formula again:
Now we want to solve for b. In ordinary algebra we would just divide by X and be done with it. But in linear algebra, for lots of reasons, you can only divide by a square matrix. In the case of OLS, the X matrix is almost never square. But there is a neat trick that you can apply to transform any matrix into a square matrix: multiplying it by its own transpose. Depending on whether you pre- or post-multiply by the transpose, you get a square matrix whose size is the number of columns or the number of rows. Fortunately, for us this decision is made in advance, because the vector b is in the way and post-multiplying won't work. One more thing to remember is that this is an equation, so anything that we do to one side we have to do to the other side as well. So pre-multiplying both sides by the transpose gives us this equation:

X^T Y = X^T X b
Now X^T X is a square matrix, so we can divide both sides by it. Which amounts to pre-multiplying both sides by its inverse. So let's go ahead and do that:

(X^T X)^-1 X^T Y = (X^T X)^-1 X^T X b
And since (X^T X)^-1 X^T X is just the identity matrix, we can simplify the right-hand side to just b:

b = (X^T X)^-1 X^T Y
And there you go. There is the linear algebra version of the OLS equation. For those savvy with statistics in linear algebra, it says that b is equal to the covariance of X and Y divided by the variance of X.
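As a quick numerical sanity check on the transpose trick used above (the shapes here are invented for illustration, using a plain NumPy array):

```python
import numpy as np

X = np.arange(12.0).reshape(6, 2)   # 6 observations, 2 covariates: not square
print((X.T @ X).shape)              # (2, 2): pre-multiplying by the transpose gives a square matrix
print((X @ X.T).shape)              # (6, 6): post-multiplying gives the other square size
```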
So let’s write some python code so that you can see this in action. We’ll run a regression, and then we’ll write our own regression algorithm to give you the parameter values using the formula that we derived above, and then we can compare the two methods to see that statsmodels is doing this type of linear algebra under the hood.
So first we’ll import some useful libraries, namely numpy and statsmodels. Statsmodels is just for comparison purposes, to check the algorithm that we develop.

import numpy as np
import statsmodels.api as sm
Okay, so now we just need to generate some data. For our intercept we’ll make that a 5 and for our slope we’ll make that 15, we’ll also add in a little bit of noise to make things more realistic.
x = np.matrix([range(1000), [1]*1000]).T
# Note: np.random.normal() is a single draw here, so the same offset is added
# to every observation; it shifts the intercept rather than adding scatter,
# which is why the regression below reports an R-squared of 1.000.
y = 15*x[:, 0] + 5 + np.random.normal()
You may be wondering what the column of 1’s is. That is our intercept column. And the numbers 0 to 999 are our X variable. So let’s see if statsmodels can recover our true coefficients of 15 and 5. Here’s the code to run OLS in statsmodels:
model = sm.OLS(endog=y, exog=x)
results = model.fit()
print(results.summary())
And here are the results of that regression:
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 1.000
Model: OLS Adj. R-squared: 1.000
Method: Least Squares F-statistic: 1.178e+33
Date: Sun, 09 Jul 2017 Prob (F-statistic): 0.00
Time: 08:02:37 Log-Likelihood: 24829.
No. Observations: 1000 AIC: -4.965e+04
Df Residuals: 998 BIC: -4.964e+04
Df Model: 1
Covariance Type: nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
x1 15.0000 4.37e-16 3.43e+16 0.000 15.000 15.000
const 5.0842 2.52e-13 2.02e+13 0.000 5.084 5.084
==============================================================================
Omnibus: 793.671 Durbin-Watson: 0.001
Prob(Omnibus): 0.000 Jarque-Bera (JB): 79.767
Skew: -0.339 Prob(JB): 4.77e-18
Kurtosis: 1.793 Cond. No. 1.15e+03
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.15e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
Statsmodels didn’t do too bad, it has a whole bunch of extra information (which is just more linear algebra, but we won’t get into that today). Now let’s apply the formula we derived above, and see what it comes up with for parameter values. Here’s the code:
print((x.T*x).I*x.T*y)
Well, that was relatively painless. Remember .T means transpose and .I means inverse, so this is just a straight application of the formula that we derived above. Here is the result:
[[ 15.        ]
 [  5.08421641]]
No big surprises here, since it is exactly the same as what statsmodels gave you. I hope that is sufficient evidence that machine learning is just (slightly more complicated than this example) linear algebra under the hood.
Markov the Grandfather of Google
That’s Andrey Markov. Without him, we wouldn’t have google. He came up with a really nifty idea which bears his name: the Markov chain. And if you are into Bayesian statistics, without him we wouldn’t have the Markov Chain Monte Carlo algorithm, and Bayesian statistics would be inaccessible too.
I want to finish up with just one more application of linear algebra which is really cool. It is a Markov Chain. Markov Chains are cool because they will reach an equilibrium value. It turns out that in its most simplistic form google is just calculating a bunch of equilibrium values for Markov Chains. Thank you google! If it didn’t do this we would still be stuck in a world where the internet was indexed alphabetically (which is why, when Amazon started, Jeff Bezos chose the name “Amazon”. He wanted it to show up at the top of the list!)
So let’s start off with what is a markov chain. Let’s suppose that you have a bunch of states that you can be in. To keep things concrete let’s say web pages that you are viewing. And depending on which page you are on, you are more likely to go to some pages than others next. For example, if you have a link on a page, you are more likely to click on that link and go to that page than say jump to a random page.
We can build a giant square matrix with these probabilities where the rows represent the page you are on, the columns represent the page that you are going to go to, and the elements of the matrix are the probability that the next time I look you will be at the page corresponding to the column if you were on the page corresponding to the row. That means that every row should add up to 1, which is important so that we converge to an equilibrium value. I won’t go into detail on how to derive these probabilities, just know that google obviously figured out how to do it, by collecting gobs of data when the internet was shiny and new. Let’s just assume that we straight up know the probabilities instead, and for simplicity that the internet contains only 3 pages.
Here’s some code to generate this matrix:
M = np.matrix([[0.25, 0,  .75],
               [.2,   .6, .2],
               [0,    .9, .1]])
So if I’m on page 1 there is a 25% chance that I will stay there, and a 75% chance I will go to page 3.
If I am on page 2 there’s a 20% chance of going to page 1 or to page 3, a 60% chance of staying on page 2.
If I am on page 3, there is a 90% chance of going to page 2 and a 10% chance of staying on page 3.
By multiplying this matrix with itself, you can “walk forward” in time. If I start off randomly on 1 of the pages and walk forward through time, where will I end up (most likely)? Here’s why it is called a chain: a square matrix multiplied by itself is again a square matrix with the same dimensions.
We can do something like this: M*M*M*M*M*M*M*M*M*M, or more compactly M^10, to see where we would end up 10 steps into the future. And if you do enough of these, you will end up with a stable probability distribution of where you will be at any given time. Let’s see how our small internet will progress over time; we’ll only take 3 steps into the future:
def progression(n):
    for i in range(n):
        print(M**(i+1))
        print('')

progression(3)
And we’ll get the following output:
[[ 0.25    0.      0.75  ]
 [ 0.2     0.6     0.2   ]
 [ 0.      0.9     0.1   ]]

[[ 0.0625  0.675   0.2625]
 [ 0.17    0.54    0.29  ]
 [ 0.18    0.63    0.19  ]]

[[ 0.150625  0.64125   0.208125]
 [ 0.1505    0.585     0.2645  ]
 [ 0.171     0.549     0.28    ]]
You can see that the probabilities are already starting to converge on some values. It looks like we’ll be spending about 15% of our time on page 1, about 58% of our time on page 2 and something like 27% of our time on page 3. Indeed the final values 100s of steps in the future are pretty close to these numbers.
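Those limiting values can be cross-checked without stepping forward at all: the equilibrium is the stationary distribution of the chain, i.e. the left eigenvector of M for eigenvalue 1, rescaled to sum to 1. (This check is my addition, using a plain NumPy array rather than np.matrix.)

```python
import numpy as np

M = np.array([[0.25, 0.00, 0.75],
              [0.20, 0.60, 0.20],
              [0.00, 0.90, 0.10]])

vals, vecs = np.linalg.eig(M.T)                  # left eigenvectors of M
pi = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvector for eigenvalue 1
pi = pi / pi.sum()                               # normalize to a probability distribution
print(pi)   # approximately [0.156, 0.584, 0.260], i.e. [12/77, 45/77, 20/77]
```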
Google’s thought is then to display the page that everyone will end up on at the top and then the next one, and then the one after that, and so on. In our example they will give us page 2, then page 3, then page 1, in that order. Google will then let you restrict the universe of pages on the internet that you are looking at to just pages with certain keywords. In a nutshell, if I say to google show me pages with the words “bananas that turn purple”, they will do this Markov chain analysis only on pages that contain the phrase “bananas that turn purple”, and return the results in order of relevance. This is how all modern search engines work, with some tweaks here and there for efficiency.
Some Last Words on Linear Algebra
I barely scratched the surface on what linear algebra can do. But I think that this is a better answer than what I fumbled through with my brother. So hopefully, you got an appreciation for how powerful this subject in mathematics can be.
How useful is linear algebra? Very. It is the math that I make use of the most. So yeah. Now that you have some linear algebra skills, let me know in the comments how you are using them. I’d really like to know what other applications people are working on.
Oh and here is the full code for everything that we have done in python.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Jul 9 07:44:14 2017

@author: ryan
"""
import numpy as np
import statsmodels.api as sm

x = np.matrix([range(1000), [1]*1000]).T
y = 15*x[:, 0] + 5 + np.random.normal()

model = sm.OLS(endog=y, exog=x)
results = model.fit()
print(results.summary())

print((x.T*x).I*x.T*y)

#%%
M = np.matrix([[0.25, 0,  .75],
               [.2,   .6, .2],
               [0,    .9, .1]])

def progression(n):
    for i in range(n):
        print(M**(i+1))

progression(10)
javadoc: No public or protected classes found to document.
Robin Clark
Ranch Hand
Posts: 81
posted 12 years ago
I have 5 classes in my directory which is named "vss". Each of these classes is in the vss directory.
When I run the command:
javadoc -d ..\..\..\docs -private -windowtitle "VSS Classes" -subpackages *.java -linksource
All of the classes are documented except for one. When I try to run javadoc on just that class:
javadoc -d ..\..\..\docs -private -windowtitle "VSS Classes" -subpackages Session.java -linksource
I get the error message:
Constructing Javadoc information... Standard Doclet version 1.4.1 Generating ..\..\..\docs\constant-values.html... javadoc: No public or protected classes found to document. 1 error
This is the second time I've had this problem in the past two days, so I guess I need to figure it out instead of re-organizing my classes like I did the last time!
Here is the first portion of Session.java:
package vss; import java.io.*; import java.text.*; import java.util.*; /** * Keeps a list of audio files for a single Session for a single Speaker. * Each audio file is represented by a String (its filename) and is stored in a Vector. * * @author rclark */ public class Session implements Serializable { /** * The full pathname of the directory that contains the utterances that * will be processed during this test run */ private String uttDestination; private String trueSpeakerStr; private String imposterStr; /** * A <code>String</code> that represents the session ID, e.g., S01, S02, S03... */ private String sessionID; /** * The first thru 15th characters of the filename, e.g. F5137231111S01C */ private String audioFileRoot; private static HashMap promptMap;
Layne Lund
Ranch Hand
Posts: 3061
posted 12 years ago
It's been a while since I ran javadoc by hand. More recently I have an
ant
build script that automates many such tasks for me. I suspect that you don't need the .java extension. Try removing it and see if that fixes your problem.
Layne
Java API Documentation
The Java Tutorial
Post Reply
Bookmark Topic
Watch Topic
New Topic
Similar Threads
Running unit test cases automatically with Ant
JAVA DOC
javadoc
Java Docs
NX: javadoc problem with assert keyword | http://www.coderanch.com/t/395604/java/java/javadoc-public-protected-classes-document | CC-MAIN-2016-26 | refinedweb | 359 | 65.52 |
The Datomic Information Model
- |
-
-
-
-
-
-
-
Read later
My Reading List
Datomic considers a database to be an information system, where information is a set of facts, and facts are things that have happened. Since one can't change the past, this implies that the databaseaccum. In a previous article, I covered the Datomic architecture. This article will focus on the information model and programming experience.
Traditional databases (and many new ones!) focus on 'now' - the set of facts currently true, but, in doing so, they lose information. Businesses are finding increasing value in historical information, and there are very few reasons not to preserve it. This is not merely a matter of keeping history around, as with backups or logs, but making it available to support the active decision making process. It is necessary for a business to know your current address in order to ship you something, but 'which customers move frequently, and where from?' might be very interesting to their marketing or product development teams. Ditto supplier price histories etc. They don't want to have to restore backups or replay logs in order to find out.
It's interesting to consider why keeping active history is even in question. After all, before computers we kept records by accretion, and, as the adage goes, 'accountants don't use erasers'. I'll conjecture that early computing systems simply didn't have the capacity (or no one could afford it). But that presumption deserves rethinking, given the million-fold increase in capacity during the past 25 years. What developers eschew revision control systems like git because their codebases will no longer fit on a floppy?
A database is a database in large part due to the leverage it provides over the data. Otherwise, it is just a storage system. This leverage usually comes from a combination of organizing the data (e.g. via indexes) and query systems which leverage that organization. Developers are getting interesting and ever more capacious distributed and redundant storage systems at their disposal, but often with decreasing leverage. Datomic seeks to work atop these storage systems to take advantage of their scalability, storing organized information in them and putting leverage back in the hands of developers.
Structure and Representation
Every database has a fundamental unit at the bottom of its model, e.g. a relation, row or document. For Datomic, that unit is the atomic fact, something we call a Datom.
A Datom has the following components:
- Entity
- Attribute
- Value
- Transaction (database time)
- Add/Retract
This representation has obvious similarities to the Subject/Predicate/Object data model of RDF statements. However, without a temporal notion or proper representation of retraction, RDF statements are insufficient for representing historical information. Being oriented toward business information systems, Datomic adopts the closed-world assumption, avoiding the challenges of universal naming, open-world, shared semantics etc of the semantic web. A Datom is a minimal and sufficient representation of a fact.
Having an atomic unit at the bottom of the model ensures that representations of novelty (e.g. transactions) are only as big as the new facts themselves. Contrast this with resubmitting an entire document in order to update part of it, or the brittleness of delta schemes which attempt to avoid that.
Datoms constitute a single, flat, universal relation, and there is no other structural component to Datomic. This is important, as the more structural components you have in your model the more rigidity you get in your applications. For instance, in a traditional relational database, each relation must be named, and you need to know those names in order to locate your data. Worse, arbitrary join tables need to be created in order to model, e.g. many-to-many relations, and the names for these fabrications must be known as well. Extreme effort must be applied to provide a set of logical views in order to isolate applications from the physical structural decisions, but those views are no less numerous or specific. Document stores are even more structurally rigid, as the hierarchy within your documents is hard-coded throughout your applications, with few if any view-like tools to provide indirection from the structure.
Schemas
All databases have schemas. The only differences are how much they support (or require) schemas being explicit. In the case of Datomic, attributes must be defined before they are used.
Attributes are entities themselves, with attributes for the following (among others):
- name
- data type of values
- cardinality (attributes can be many-valued)
- uniqueness
- indexing properties
- component nature (your foot is a component of you, but your mother is not)
- documentation
There are no constraints on the attributes that can be applied to entities, thus entities are open and sparse. Attributes can be shared across entities, and namespaces can be used to avoid collisions. The following specifies a
:person/name
attribute:
{:db/ident :person/name, :db/valueType :db.type/string, :db/cardinality :db.cardinality/one, :db/doc "A person's name"}
Schema, like all interaction with Datomic, is represented by data, the above being a representation of a map in edn format. There is no DDL.
With these simple primitives of datoms and sparse, (possibly) multi-valued attributes, one can represent row-like tuples, hierarchical document-like entities, column-store-like columns, graphs etc.
Transactions
At their most basic level, transactions in Datomic are simply lists of assertions and retractions submitted and accepted into the database atomically. A basic transaction is just a list of datoms:
[[:db/add entity-id attribute value] [:db/add entity-id attribute value]...]
Again, all interaction with Datomic is represented by data, the above being a representation of a list of lists in edn format, each inner list representing a datom in
[op entity attribute value]
order. If you want to submit several facts about the same entity you can use a map instead:
[{:db/id entity-id, attribute value, attribute value} ...]
While it is necessary to express them as text in an article, it is quite important to the design of Datomic that transactions are actually ordinary data structures (i.e. j.u.Lists, j.u.Maps, arrays etc) you can build in your language. The primary interface to Datomic is data, not strings, not DML.
Notice how you do not specify the transaction part of the datoms. It will be filled in by the transactor. That said, transactions are themselves entities and a transaction can assert facts about the transaction itself, such as metadata about provenance, external time, the originating process etc.
Of course, not every transformation can be expressed merely as assertions or retractions without devolving into last-one-wins races and conflicts. Thus Datomic supports the notion of database functions. These are functions written in an ordinary programming language (e.g. Java or Clojure) that get installed into the database (submitted as data via transactions, of course). Once installed, a database function 'call' can be part of a transaction:
[[:db/add entity-id attribute value] [:my/giveRaise sally-id 100] ...]
When used as part of a transaction, a database function is considered a transaction function, and gets passed an additional first argument which is the in-transaction value of the database itself. Thus the function can issue queries etc. A transaction function must return transaction data. Whatever data it returns replaces it in the transaction. This process is repeated until all transaction functions have returned simple add/retracts. Thus in the transaction above, the giveRaise function might look up Sally's current salary, find it to be 45000, and return an assertion about the new value, making the resulting transaction data look like this:
[[:db/add entity-id attribute value] [:db/add sally-id :employee/salary 45100] ...]
Since :employee/salary is cardinality one, adding this fact about Sally's salary implicitly retracts the prior fact. Because transaction functions run atomically and serially within transactions, they can be used to perform arbitrary, conflict-free transformations. You can read more about database functions in the documentation.
Connections and Database Values
On the write side, things seem pretty ordinary. You obtain a connection to a database using a URI that includes information about how to reach storage, and, via storage, how to talk to the current transactor. Transactions are issued by calling the transact function on the connection, passing transaction data as described above.
On the read side, things are quite different. In a traditional database, reading and querying is also a function of the connection. You pass a query over the connection, it reaches the server where it is run in the (usually unreproducible) context of the current database state, subject to the limits of the query language embedded in the server, competing for resources and synchronization with all other users, including writers.
By contrast, in Datomic the only read operation of connection is db(), and it doesn't actually reach out over the wire at all. Instead, the connection is continually being fed enough information such that it can immediately deliver the value of the database for use as an immutable object in your application. Thus all consumption of the data, querying etc happens locally (the engine will transparently reach out to storage to obtain data as needed). Note that the entire database is not kept on each application server peer, just the most recent novelty and pointers to the rest in storage. Nor does any 'snapshotting' operation occur. While it feels to the application and query engine that the database is in hand, the realization is quite lightweight, just a few references to persistent data structures, in memory and in storage. Extensive caching happens under the hood.
Query
In Datomic, query is not a function of a connection, and is not even a function of a database. Instead, query is a stand-alone function that takes one or more data sources as arguments. These data sources can be database values or ordinary data collections, or any combination thereof. This is a big benefit of freeing query from running within the context of a database.
The Datomic peer library comes with a query engine based upon Datalog. Datalog is a declarative query language based upon logic, with a pattern-matching flavor well suited to querying datoms and in-memory collections.
The basic form of query is:
{:find [variables...] :where [clauses...]}
Or, this alternative (easier to type) list form:
[:find variables... :where clauses...]
Again, these are just text representations of data structures that you could build programmatically - queries are data, not strings, although strings are accepted and turned into data when supplied.
If you had a database containing these datoms (where sally, fred and ethel are stand-ins for their entity ids):
[[sally :age 21] [fred :age 42] [ethel :age 42] [fred :likes pizza] [sally :likes opera] [ethel :likes sushi]]
We could ask a query like this:
;;who is 42? [:find ?e :where [?e :age 42]]
And get this result:
[[fred], [ethel]]
:where clauses match positionally, and for database sources, each datom matches as if a tuple of
[entity attribute value transaction].
You can elide any portions on the right (transaction in this case). Symbols beginning with ? are variables, and the result will contain tuples of values of the variables for any source tuple that matches.
Joins are implicit, and occur whenever you use a variable more than once:
;;which 42-year-olds like what? [:find ?e ?x :where [?e :age 42] [?e :likes ?x]
which returns:
[[fred pizza], [ethel sushi]]
The API for query is a function called q:
Peer.q(query, inputs...);
where inputs can be databases, collections, scalars etc. Queries can also utilize (recursive) rules, and call your own code. You can find more information about query in the documentation.
Putting it all together:
//connect Connection conn = Peer.connect("a-db-URI"); //grab the current value of the database Database db = conn.db(); //a string for now, because Java doesn't have collection literals String query = "[:find ?e :where [?e :likes pizza]]"; //who likes pizza? Collection result = Peer.q(query, db);
Same query, different basis
Things start to get interesting when we leverage the fact that the db has all the historical information:
//who liked pizza last week? Peer.q(query, db.asOf(lastTuesday));
The asOf method of a database returns a view of that database as of a prior point in time, specified by date-time or transaction. Note how we haven't gone back to the connection, nor changed the query. If you've ever rolled your own timestamps, you know a temporally-qualified query is usually much different than one for 'now'. There is a corresponding since method as well.
//what if we added everyone from Brooklyn? Peer.q(query, db.with(everyoneFromBrooklyn));
The with method takes transaction data and returns a local value of the database with that data added. No transaction is issued over the connection. Thus you can do speculative, what-if queries, or check transaction data before issuing it. There is also a filter method which returns the database filtered by some predicate. Again, we haven't touched the connection, db or query.
What if we want to test the query without setting up a database? We can simply supply data in the same shape:
//test the query without a database Peer.q(query, aCollectionOfListsWithTestData);
Again, the query is unchanged, but actually runs. Contrast that with mocking a database connection.
So far all of the techniques have worked with a specific point in past or future time. But many interesting analyses will want to look across time:
//who has ever liked pizza? Peer.q(query, db.history());
The history method will return all datoms across time. This can be combined with, e.g. asOf etc. This query happens to work as-is, but often time-crossing queries will be different, do aggregation etc.
Queries can take more than one data source, and thus can easily cross databases, or use different views of the same database. Being able to pass collections to queries is like parameterized statements on steroids.
Different queries (or participants), same basis
Database values are immutable, so you can do a non-transactional, multi-step calculation and know nothing has changed. Similarly, the basis point of a database can be obtained and passed to another process, which can then get a database value in the same state. Thus different queries, separated by process or time, can work with the exact same basis.
Direct index access
Finally, the database values offer a high-performance API for iterative access to the underlying sorted datoms from the (immutable) indexes. This is the raw material from which other query approaches can be built. For instance, via this API you can query Datomic databases using Clojure's Prolog-like core.logic library.
Summary
I hope this has given you a feel for the nature of Datomic's information model and some of its details. Treating the database as a value is very different and powerful, and I think we are all still discovering the possibilities!/West -!
Question
by
peter lin
When the application needs the entire customer object, the cost of reconstructing it from all the pieces adds overhead. Take it a bit further. Say I'm building a temporal database for auto insurance policy. A commercial auto policy might have 200-1000 cars for a taxi service. That means the object graph could have 5000 objects. Using traditional ORM approach, the number of rows a system would need to load would be 5000 objects x 10 version = 50,000 objects. Once it has those objects it has to reconstruct each version. Obviously, this is slow and CPU intensive.
If I want to load the last 10 versions of a policy with Datomic, how many queries would it take to reconstruct those 10 policy records? If I understand Datomic correctly it should be something like
sum(datom for each object) + d = rows returned
where d is the number of changes from a starting time.
Using ACORD schema as an example, the number of datoms might be as low as 80 or as high as several hundred. Usually, the bulk of the data is vehicle, coverage and endorsement. If I use 40 as the base number of datoms, 200 cars, 50 datoms for vehicle/coverage/endorsement fields and 30 changes in those 10 versions, I get this:
40 + (200 x 50) + 30 = 10,070 rows of data. As the number of vehicles grows, the number of queries grows rapidly. How does Datomic address this challenge?
How long would it take Datomic to reconstruct those 10 versions? I was planning on doing this experiment later this summer.
thanks.
Re: Question
by
Alexander Kiel
I think it's better to ask such questions in the Datomic Google Group.
Alex
Re: Question
by
peter lin | http://www.infoq.com/articles/Datomic-Information-Model | CC-MAIN-2016-18 | refinedweb | 2,790 | 54.63 |
Subject: Re: [boost] [review][mp11] Reminder of ongoing formal review
From: Jason Rice (ricejasonf_at_[hidden])
Date: 2017-07-22 19:02:13
Hello, my name is Jason Rice, and this is my first Boost review so please
take that into consideration. I'm also new to this mailing list thing so
let me know if I did something wrong here.
Please consider the following review for the proposed Boost.Mp11 library:
1. Should Mp11 be accepted into Boost? Please state all conditions
for acceptance explicity.
ACCEPT. No conditions
2. What is your evaluation of the design?
The interface is very straight forward.
It is mostly templates but the user is not burdened with accessing a
nested "type" with every metafunction.
The list algorithms are flexible in that they take any kind of list which
is very handy.
I did notice that the user is burdened with ensuring that a list is not
const or they will just get a "implicit instantiation of undefined
template" error which doesn't really point to the problem.
(e.g. when using mp_append with many lists)
The `mp_` prefix is welcome if this is destined for the `std` namespace.
The `tuple_apply` could probably just be called `apply` since appears to
be a shim of `std::apply`.
The interface of map functions accepting any list of list is very nice.
3. What is your evaluation of the implementation?
The implementation is very clean. I looked at `mp_map_find_impl` and it
is very concise and easy to understand.
I did find it annoying though that some of the public facing functions
are defined inside detail files which makes the source code a little more
difficult to traverse.
I'd like to see mp_product perform on par with the other libs on
metaben.ch
4. What is your evaluation of the documentation?
For the interface, it gets the job done.
The table of contents has some text wrapping that could be fixed.
There is no instructions for installation that I could find anywhere.
It fails to mention that Boost.Config is a dependency. (AFAICT)
5. What is your evaluation of the potential usefulness of the library?
I use Boost.Hana quite a bit. I'm currently using a library like this to
help where optimization for compile-time performance is needed so it is
definitely useful.
6. Did you try to use the library? With what compiler? Did you have
any problems?
I used it in my library Nbdl to augment my use of Boost.Hana in a couple
of places.
The compiler is a recent pull of the Emscripten fork of clang targeting
X86.
I did get stuck for a while on a weird error I mentioned before because a
used a const list.
I didn't like having to also install Boost.Config. (I'm not using the
Boost monolith build)
7. How much effort did you put into your evaluation? A glance? A quick
reading? In-depth study?
I spent a few hours on it.
I used mp_append and mp_with_index in my lib with great results.
I just looked at the implementation of mp_map_find.
I scoured the documentation looking for stuff and seeing what else was
available.
8. Are you knowledgeable about the problem domain?
Yes, I have done TMP and I am also a contributor to Boost.Hana.
Jason Rice
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2017/07/237563.php | CC-MAIN-2022-21 | refinedweb | 577 | 69.07 |
Very nice, thanks for your great work, and also honored to Tonic-_-
I have one problem, since VAS save duplicate TFAR radio, so our clan make each serial number for each player, but when we brought from load, our distributed radio just gone. it's a bit sad to happen like this, can you provide something for this? it will be really great and honored :).
VAS load the radio with correct serial number, but yours just make it gone :(
Yeah, I'll put an option in the next version. You can also edit fn_presetSelected to fix it yourself. Just remove lines 65-68 and 37-40. The reason for this is with our clan, loading task force radio radios caused problems because people ended up with duplicates of the same radio IDs which screwed up the channels and stuff.
Thanks for your help, but still it seems can't detect some helmet items,
When i change load out, it didn't change the helmet gears, rather it stay before gear that already put on.
and it would be much better if we have some transfer function.
again thanks for fast help :).
@kgino1045: No worries, thanks for your testing.
Do you have any examples of helmets that don't work properly? It seems to work for the ones I use.
I'll look into the 'transfer' function. I'm not sure what it does to behonest.
For the Transfer it literally transfer people own loadout to other people.
We are using many cloth
here is what we are using massis's marsoc Swedish force TFA NSW
and there is few more but i can't remember it right now, i assume that it could happened because by addon confilct
all this list is not working properly, ofcourse we can wear in here but can't load it properly
Seems like only this one working
+ things would be great for future stuff
show slots rather then just save it randomly.
: seems like it save from bottom slot, not from top, in my case i number every single load out and i have like 200 loadouts.
but yours make it save from bottom when i make new loadouts.
Forexample, when i numbernig 006 it didn't line up with number but just spotted and formed from bottom slots (as far as i checked in vas)
for this side i can just make 200 slot and numbering each of em so i can figured my self but maybe some people need a order and sequence from very first time
Thanks for your suggestions. I will make it save from the first slot (in VAS). This is a silly bug that I should have noticed. In the meantime, if you want, you can change line 9 of fn_SavePressed.sqf to:
for[{_i = 0}, {(_i <= ASORGS_SaveSlots) && (_slot == -1)}, {_i = _i + 1}] do {
I think that will make it save from the start.
I'm not so sure about the Transfer thing.. it sounds difficult (and I'm not quite sure of the point). Note that you can use VAS along side ASORGS. If I get a lot of requests I'll work it in somehow.
Can you let me know any specific helmets that don't work? I tried a bunch out of massis's marsoc and TFA and haven't found any problems with saving/loading yet. The other mods are still downloading.
Thanks for reply to help,
The only working (partially) helmets are
and aram3 vanila
I'm highly doubt it caused by this
Sorry that i don't have much time to test with this but i'll test eventually so wait for it, i'll find what is making this stupid happening =_=;;
Very nice work! But i'm not able to save my loadouts becuase there is no "safe button" in the safe menu. Is that a general problem or do i have a special bug?
@SeeRocky: You should be able to click Save next to the preset list, and then Save again down the bottom under where it has Preset Name. Are you in very low res? Could you link a screenshot?
@kgino1045: I tried a selection of helmets from the mods @mas, @nsw_th, @sfp and @tfa. I haven't had any problems. Could you tell me what helmets are not appearing? For example, the ones I tried (all worked) were:
UN Helmet
TFA ECH (GRA)
OPSCORE FAST Helmet (PATCH)
AOR2 Cap
USMC Pro Helmet Camo2
Marsoc LCH Helmet Balaclava
USMC Booniehat Wood
MARSOC ECH Black
Wamako Militia Helmet
Marsoc LCH Helmet
Fur Hat
Soviet Helmet
As i test it seems like The NSW's and AV IndUS's all helmet is not working
NSW link
AV IndUS link
And i found two more problem that, when i try to save it, the scroll bar is invisible so i can't click scroll bar correctly and drag it, ofcourse i can just scroll down with mouse btn 3 but some people prefer scroll bar.
and when i click the count box, i should click infront of the number so i have to relocate my cursor eventually. it would be better if it just work when i click in count box.
@kgino1045: Thank you for your help. I think/hope I've found the problem. If you don't have a backpack or enough space for a helmet in your inventory when it tries to load, it won't load. Hopefully this will fix both the helmet and the TFAR problem (only 1 line added to config.sqf for TFAR option, which for you should be changed to false, but I think for most people left at true). Please let me know if it doesn't help.
To be honest, I never had enough presets to scroll down. I'll work on that next.
Unfortunately, to my knowledge, there's no way to set where the cursor is in a text box with script. The only other option would be to give a blank text box instead of the current number when you click it, but I'm not sure if that's better or worse.
Very nice, i like!! would it be possible to also enable all uniforms available in game with something like forceAddUniform? also just to be clear whats your take on using this in the MANW comp, wasnt sure about the "Non-commercial" part in the disclaimer? regardless very cool :D
Total comments : 39, displayed on page: 15
#include "ASORGS\menu.hpp"
class CfgFunctions
{
#include "ASORGS\cfgfunctions.hpp"
};
this addAction["<t color='#111111'>Gear Select</t>", "execvm 'ASORGS\open! | https://www.armaholic.com/page.php?id=26181 | CC-MAIN-2020-50 | refinedweb | 1,103 | 78.79 |
From: heels@lcp.com (Erik J. Heels)
Newsgroups: misc.legal, misc.legal.computing
Subject: Law-Related Resources on the Internet and Elsewhere (02 of 12)
Date: 26 Dec 1996 11:11:58 GMT
Message-ID: <law/net-resources/part02_851598585@rtfm.mit.edu>
Reply-To: heels@lcp.com (Erik J. Heels)
X-Last-Updated: 1995/06/12
Archive-name: law/net-resources/part02
Version: 6.0

Chapter 0. Introduction to The Legal List.

This chapter gives an overview of The Legal List and of the Internet.

0.1. About This Book - What is The Legal List?

The Legal List is the short, historical name of this book, The Legal List: Internet Desk Reference. (The history of my self-published version is briefly described below.) The purpose of The Legal List is to provide a consolidated list of all of the law-related resources available on the Internet and elsewhere. There are only two requirements for a resource to be listed in The Legal List: 1) it must be law-related, and 2) it must be on the Internet.

Of course, there are exceptions to this rule. First, since The Legal List itself is a law-related resource on the Internet, I list a few resources that do not contain any Internet resource (e.g. only a USPS mailing address may be provided). Second, a few bulletin board systems (BBSs) are included. Most BBSs are accessible only via telephone, but more and more are becoming accessible via the Internet as well. Third, most of the commercial online services (such as Prodigy and America Online) have law-related resources that are only accessible to service subscribers.

The Legal List was originally created in the summer of 1992 as I was preparing to enter the University of Maine School of Law. Before I started law school, I wanted to compile a list of law-related resources that I could use as a legal research guide.
I've been on the Internet since 1984, when I was a freshman at the Massachusetts Institute of Technology (MIT), and, through the years, I have made a habit of jotting down noteworthy Internet-accessible resources. In the summer of 1992, there were few law-related resources on the Internet, and there was no comprehensive listing of these resources. With my personal list of noteworthy Internet-accessible resources as a starting point, I started to compile a separate list of law-related Internet-accessible resources. I called this list my legal list. As I discussed with others what I had been doing, they began to request copies of my list. In August, 1992, I sent the first version of The Legal List via electronic mail (e-mail) to those who had requested it. Since then, The Legal List has been updated approximately every six months. What started as a relatively short list for my own use has grown into the relatively large book you are now reading.

Today, The Legal List--or often TLL for short--is available as a paperback book and as an ASCII text-only file. Details of how to get The Legal List are included in Section 0.1.3. As the print-and-pay portion of the copyright notice indicates, The Legal List is free on the Internet, but it costs if you print it. I believe that this arrangement is consistent with the spirit of providing free information on the Internet, while at the same time allowing for a reasonable compensation from those who want the value-added benefit of having a paper copy of The Legal List.

I use both the paperback version and the ASCII text-only version of The Legal List. If I want to find something in the ASCII text-only version, I open the file with my word-processing software and do a key-word search. With the paperback version, I look in the index.

0.1.1. Disclaimer.

I am committed to providing high-quality information, and as such, I have tried to verify all of the information in The Legal List.
If I have not been able to verify a resource, I have indicated so. The appearance of any resource in The Legal List does not constitute endorsement or approval of the resource by the author, editors, and publisher of The Legal List. The author, editor, and publisher of The Legal List have made reasonable efforts to provide correct information, but the author, editor, and publisher cannot guarantee the accuracy of the information in The Legal List. Updates, additions, and corrections to The Legal List should be sent to legal-list@lcp.com.

0.1.2. Organization of The Legal List.

The Legal List is primarily organized by the sponsoring organization of the law-related resource. There are three main categories of sponsors: government organizations (Chapter 2), educational institutions (Chapter 3), and commercial organizations (Chapter 4). Resources that are sponsored primarily by an individual, and not by the organization for which the individual works, are included in Chapter 4, because it is often difficult to distinguish the sole proprietor from the hobbyist.

Certain typographical conventions should also be pointed out. Items that should be interpreted are listed in italics. For example, if I were instructed to type your name, I would type Erik J. Heels. Uniform Resource Locators (URLs) are listed for each Internet resource. I have followed the draft RFC standard dated 03/94, which is available via anonymous FTP from internic.net as /ftp/internet-drafts/draft-ietf-uri-url-03.txt. The URL for the URL draft standard is URL:

In general, the URL will be in the format of connection-method://machine/path. In the above example, the connection-method is FTP, the machine is internic.net, and the path is /ftp/internet-drafts/draft-ietf-uri-url-03.txt. In this example, the final part of the path name contains the file name, draft-ietf-uri-url-03.txt, but not all URLs contain file names.

The following is a chapter summary of The Legal List:

Chapter 1. Talk, Talk, Talk.
This chapter describes law-related listserv lists, Usenet newsgroups, BBSs, and online services. Listserv lists are like magazines in that one can subscribe and unsubscribe. There are lists for a wide range of law-related interests such as intellectual property (CNI-Copyright), fathers' rights (FREE-L), and issues of interest to law students (LawSch-L). Usenet is the news network that is intertwined with, but independent from, the Internet.

Chapter 2. Government Organizations.

This chapter describes law-related resources made available by US government organizations. An organization in this chapter would most likely have a domain name ending in .gov (government). This chapter is divided into two sub-sections: 1) US Federal Government Organizations and 2) US State Government Organizations.

Chapter 3. Educational Institutions.

This chapter describes law-related resources made available by US educational institutions. An organization in this chapter would most likely have a domain name ending in .edu (education). This chapter is divided into two sub-sections: 1) US law schools, 2) other US educational institutions.

Chapter 4. Corporations and Organizations.

This chapter describes law-related resources made available by for-profit, nonprofit, and not-for-profit corporations and organizations. An organization in this chapter would most likely have a domain name ending in .com (commercial) or .org (organization). Law firms are listed separately--sorted by the state (or country) of their main office. This chapter also includes resources primarily made available by individuals rather than by organizations, governments, or educational institutions.

Chapter 5. Non-US Resources.

This chapter describes law-related resources made available by non-US organizations, governments, and educational institutions including those made available by the United Nations.

Appendix A. More About the Internet.
This appendix contains, for example, information about Internet account and domain providers.

0.1.3. How to Get Paperback and Electronic Copies of The Legal List.

Listserv Lists

There are two listserv lists available:

1) Full text delivery of The Legal List - legal-list.

The Legal List is available via e-mail via the listserv list legal-list@lcp.com. To subscribe to legal-list, send a message with subscribe legal-list your name in the body of the message to the following address.

URL: mailto:listserv@lcp.com

The next version of The Legal List (as well as other announcements) will be mailed to those who subscribe. I always like to hear where you learned about The Legal List, so if you also include this information in the body of the message, I would greatly appreciate it!

To cancel your subscription to legal-list, send a message with unsubscribe legal-list in the body of the message to the following address.

URL: mailto:listserv@lcp.com

2) Announcements only - TLL-announce.

If you wish to receive only announcements about the next version of The Legal List, send a message with subscribe TLL-announce your name in the body of the message to the following address.

URL: mailto:listserv@lcp.com

TLL-announce subscribers will receive all of the announcements that legal-list subscribers receive, but TLL-announce subscribers will not receive the next version of The Legal List via e-mail. I always like to hear where you learned about The Legal List, so if you also include this information in the body of the message, I would greatly appreciate it!

To cancel your subscription to TLL-announce, send a message with unsubscribe TLL-announce in the body of the message to the following address.

URL: mailto:listserv@lcp.com

Internet Servers (FTP, Gopher, and WWW).

The Legal List is available via anonymous FTP, Gopher, and WWW:

URL:
URL: gopher://gopher.lcp.com
URL:

The InterNIC.
The Legal List is one of many resources officially documented by the InterNIC Directory and Database Services maintained by the NSF Network Systems Center (NNSC) under a contract with AT&T. The Internet Resource Guide (IRG) (formerly compiled and maintained by BBN, Inc., for the NNSC) has been moved to the Directory of Directories provided by the InterNIC Directory and Database Services.

In previous versions of The Legal List, I wrote "[t]he [IRG] is invaluable, and everyone with a serious interest in the Internet should maintain a copy." The NNSC's stated goal is "to expose users to those facilities that will help them do their work better." (Internet Resource Guide, Introduction, dated 16 Apr 90.) I wholeheartedly agree with this goal. Although the IRG in its 1990 form is being discontinued, the entries have been incorporated into the NNSC's new Directory of Directories. The Directory of Directories should prove to be an invaluable resource.

For more information, contact:

The InterNIC Directory and Database Services Administrator
AT&T
5000 Hadley Road
Room 1B13
South Plainfield, NJ 07080
Phone: 1-800-862-0677
E-mail: admin@ds.internic.net

URL: mailto:admin@ds.internic.net
URL: gopher://gopher.internic.net/
URL:

Usenet FAQ.

The Legal List is periodically posted as a FAQ (a file of Frequently-Asked Questions) to misc.legal, misc.legal.computing, misc.answers, and news.answers. It is also available (in about 10 parts) via e-mail and anonymous FTP from MIT's Usenet archives.
To obtain a copy via e-mail from MIT, send a message with the following lines in it (there may be more than 10 parts) to mail-server@rtfm.mit.edu:

send usenet-by-group/news.answers/law/net-resources/part1
send usenet-by-group/news.answers/law/net-resources/part2
send usenet-by-group/news.answers/law/net-resources/part3
send usenet-by-group/news.answers/law/net-resources/part4
send usenet-by-group/news.answers/law/net-resources/part5
send usenet-by-group/news.answers/law/net-resources/part6
send usenet-by-group/news.answers/law/net-resources/part7
send usenet-by-group/news.answers/law/net-resources/part8
send usenet-by-group/news.answers/law/net-resources/part9
send usenet-by-group/news.answers/law/net-resources/part10
quit

URL: mailto:mail-server@rtfm.mit.edu
URL: resources/

Paperback Copies.

Paperback copies of The Legal List are available from Lawyers Cooperative Publishing. The paperback copies are superior in quality to the text-only versions distributed on the Internet (e.g. multiple fonts are used). The price for each copy is $29.95. The shipping and handling for each copy is $3.00 US, $4.00 Canada or Mexico, and $10.00 for all other countries. To receive a paperback copy of The Legal List, please send, e-mail, or fax a purchase order; or send a check or money order payable to Lawyers Cooperative Publishing to:

Lawyers Cooperative Publishing
Attn: The Legal List
Aqueduct Building
Rochester, NY 14694 USA
Phone: 1-800-254-5274
Fax: 1-800-741-1414
E-mail: TLL-orders@lcp.com

Please allow one to two weeks for delivery via United States Postal Service mail.

Updates, Additions, and Corrections.

Updates, additions, and corrections to The Legal List should be sent to legal-list@lcp.com.

URL: mailto:legal-list@lcp.com

0.2. About the Internet - A Brief Primer on the Internet.

In the last few years, the Internet has become more user-friendly. Today, it can be a practical tool for the legal professional.

0.2.1. What Is the Internet?
A computer network is simply two or more computers connected by wires. Computer networks allow interconnected users to share printers and files. When one network is connected with another, an internet (lowercase i) is formed. The Internet (uppercase I) is the international network of interconnected computer networks. Buzzwords like the information superhighway, cyberspace, and the national information infrastructure, which may be nicknames for the Internet or planned government or industry initiatives, are not helpful to understanding what the Internet is.

Estimates of the number of individuals on the Internet vary widely, but it is safe to say that there are probably 50 million users worldwide. This makes the Internet the world's second-largest communication network, after the telephone network. The Internet and the telephone network are not mutually exclusive--many of the computers on the Internet are connected by various types of phone lines. Like the telephone network, it matters less to the end user how the technology works, and more how to use the technology.

A notable difference between the Internet and the telephone network is that electronic mail (e-mail) sent to users outside of one's home country typically costs the same (at least for the end user) as e-mail sent to users within one's home country. As a result, individuals from all over the world can meet on the Internet in virtual communities, communities whose existence is fueled by low-cost Internet access.

Like any other community, the Internet has rules of etiquette called netiquette. A quick summary of the rules of etiquette: Never say anything in an e-mail message (or a news posting) that you wouldn't say to the recipient's face or that you wouldn't say in a long-distance phone call (i.e. realize that some users pay for incoming e-mail). The power to send e-mail--essentially instantly--to anyone in the world is great, and it should be understood.

0.2.2. Internet History - From Research to Prime Time.
The Internet grew out of ARPAnet (formed in 1969 as a product of the Advanced Research Project Agency), a network of government computers connected so that they could exchange information and use each other's programs. ARPAnet was later discontinued, but other networks (primarily government and educational) had been formed and interconnected, and the resulting network of networks has come to be known as the Internet.

The networks that are part of the Internet speak the same language, the TCP/IP (Transmission Control Protocol/Internet Protocol) protocols. Some of the computers on these networks themselves use the TCP/IP protocols (most notably UNIX-based computers) while others (for example, the computers that comprise the commercial online services such as CompuServe, America Online, and Delphi; as well as those computers on BITNET and UUCP networks) do not but are still able to use some TCP/IP protocols via gateways.

In 1992, two significant events occurred. First, many of the restrictions on commercial use of the Internet were relaxed. Much of the Internet's traffic shifted from the National Science Foundation's NSFNet backbone to commercial networks (such as the Commercial Internet Exchange, CIX). Second, and perhaps more significantly, we had a vice presidential candidate who had heard of the Internet--and who was interested in its potential. These two events resulted in a tremendous amount of coverage of the Internet in the popular press. In fact, in 1993, there were more references to the Internet in The New York Times than in all previous years combined! And the trend is continuing.

0.2.3. How to Get On the Internet.

As more people get on the Internet, fewer people will be able to ignore the Internet. Do you remember when you added your fax number to your business card? It may not be long until you add your Internet e-mail address as well.
For those lawyers who want to communicate with their clients via the Internet (because there surely will be clients who want to do so) or who want to shape the future of the law of the Internet, now is the time to get on. Here's how.

0.2.3.1. Commercial Online Services.

The quickest way to get on the Internet is to get an account on one of the commercial online services. Currently, the five largest national commercial online services are Prodigy, CompuServe, America Online, GEnie, and Delphi. Also, there are online services tailored specifically for the legal professional (such as Lexis Counsel Connect and Law Journal Extra). All of these services offer Internet e-mail, and several offer other Internet tools (discussed further below). Also, many offer free trial periods and home-access software (much like the Lexis and Westlaw software that you may already have). Call and ask for details (see the Appendix for addresses and phone numbers of commercial online services). For about $10-20 per month, you can ask questions and electronically look over people's shoulders to learn about the Internet.

0.2.3.2. Reading about the Internet.

Once you are on the Internet, it is relatively easy to find out more about the Internet itself. Your Internet provider most likely has Internet-related information available online. One source of information about the Internet available from numerous sites on the Internet is the Request For Comments (RFCs). The RFCs were originally electronic documents that were circulated for comments and that described a new protocol that was needed to help the computers connected to the Internet work together more effectively. Today, these documents are still referred to as RFCs because each is open for comment and subject to change as the Internet evolves. Certain RFCs have remained unchanged for long periods of time and have become Internet standards.
In addition to documenting standard protocols, the RFCs document the history of the Internet since 1969 and provide help and information for new Internet users. To receive introductory information on the Internet via e-mail, send a message with document-by-name rfc1594 in the body of the message to mailserv@ds.internic.net. You will receive RFC number 1594, Questions and Answers for New Internet Users. To receive an index of RFCs (there are about 1,800), include document-by-name rfc-index in the text of your message. The RFCs can be a road map (or a treasure map) for you if you enjoy exploring in this manner.

If you'd rather have books by your side before you get on the Internet, you might want to get Brendan P. Kehoe's Zen and the Art of the Internet: A Beginner's Guide to the Internet (Prentice-Hall, Englewood Cliffs, NJ), which is a brief, well-written, easy-to-read overview of the Internet. Also, you might want to pick up a copy of Ed Krol's The Whole Internet User's Guide and Catalog, Second Edition (O'Reilly & Associates, Inc., Sebastopol, CA), which is a comprehensive and clear guide to the Internet and is considered essential for new Internet users. Finally, to learn more about netiquette, read Virginia Shea's Netiquette (Albion Books, San Francisco, CA), which documents the formerly unwritten rules of Internet etiquette.

0.2.3.3. Beyond Dial-In Accounts.

Consider registering your own Internet domain name (the part of an e-mail address to the right of the @ sign), rather than just having an individual account (the part of an e-mail address to the left of the @ sign) on somebody else's machine. This is more expensive than simply purchasing an account with a commercial online service, but there are inexpensive options (such as asynchronous dial-up PPP (Point to Point Protocol) and UUCP (Unix to Unix Copy Protocol) accounts), and you will gain flexibility and control.
For example, you could set up your own FTP server, and your e-mail address would be yourname@your-company.com rather than yourname@somewhere-else.com. See the Appendix for a listing of some Internet domain providers.

0.2.4. A Brief Primer on Some Internet Tools.

There are five Internet tools that you may want to use in your research: e-mail, FTP, Gopher, WWW, and WAIS. (Also, you may want to try a local BBS.) There is nothing magic about these tools--they are simply computer programs (like WordPerfect) that implement standard sets of rules, called protocols. (For example, using control-V for paste is a protocol on Macintosh computer systems.) No matter what computer you use (whether a Macintosh, a DOS-based computer, minicomputer, or mainframe computer) these tools should all work essentially the same way.

0.2.4.1. Electronic Mail (E-mail) Overview.

E-mail is a tool that allows one user on the Internet to send a message to another user on the Internet. An e-mail message may contain text or pictures and sound encoded as text, but most often it is plain text. The various e-mail programs are the most widely used of the Internet tools, since the Internet is primarily used for communication between users. Users can be human or can be automated e-mail programs. Some of these automated programs can send your e-mail message to a group of individuals interested in the same type of information. By redistributing your e-mail message in this way, the automated e-mail program creates a virtual community--a discussion group.

The listserv family of automated programs allows individuals to subscribe to various lists (or discussion groups). The listserv program handles all the administrative tasks (adding/deleting individuals from the subscription list; redistributing e-mail to all of the list's subscribers), leaving individual subscribers free to discuss substantive issues. I'll discuss some notable law-related listserv lists in Chapter 1.
When people write a letter and send it from Maine to Finland via the United States Postal Service (USPS), they know that the to and from addresses must be written in a certain place, that mail may be returned if there is a problem, and that mail may be disposed of after sitting idly on the shelf of the post office (if, for example, both addresses are illegible). Internet e-mail works in much the same way. Some of the TCP/IP protocols deal with how to send, return, and dispose of e-mail.

The advantages of Internet e-mail over USPS mail and telephone calls are numerous. Unlike with USPS mail, you do not have to find a stamp and drive to the nearest mailbox to send Internet e-mail. And unlike the telephone, Internet e-mail is never (well, almost never) busy. One winter, I planned a ski trip in Maine entirely by e-mail. I was able to make sure that each person got the same information, I could keep track of RSVPs, and I did not have to worry about making phone calls.

0.2.4.2. File Transfer Protocol (FTP) Overview.

FTP is a tool that allows users on one computer (the local computer) to connect to another computer (the remote computer) for the limited purpose of copying files from (and sometimes to) the remote computer. A computer that is set up to accept incoming FTP requests from another computer is called an FTP server. Usually, the administrators of an FTP server will copy certain files to a public directory on the FTP server. In this way, information is made available to the Internet community. An FTP server is like a bulletin board. The owner of the FTP server can add and delete files from the public directory on the server just as notices can be physically tacked to (and removed from) a bulletin board.

0.2.4.2.1. FTPMail (FTP via E-mail).

Many resources are available via anonymous FTP. If you do not have access to FTP, but you do have access to e-mail, send a message with help in the body of the message to the following address.
URL: mailto:ftpmail@decwrl.dec.com

0.2.4.2.2. FTPMail Example.

For example, to get The Legal List via e-mail from the FTPMail service, send a message with the following text in the body of the message to the following address. The files will be e-mailed to you in a day or so.

connect
ascii
get /pub/LegalList/legallist.txt
quit

URL: mailto:ftpmail@decwrl.dec.com

0.2.4.3. Gopher Overview.

Gopher is named for the mascot of the University of Minnesota, where it was developed. It's a menu-driven program, much like an ATM machine at a bank. The Gopher server--a computer set up to run the program--is set up with a main menu and a series of submenus. When you select a particular menu item, you can view documents, run other Internet programs, or connect to another Gopher server. (By allowing one Gopher server to connect to another, Gopher allows users to look at menus and submenus from Gopher servers all over the world--so once you have connected to one Gopher server, you can connect to them all.)

When you connect to another Gopher server, the Gopher program on your local computer connects to the Gopher program on the remote computer just long enough to copy the menu from the remote computer. This allows many Internet users to look at a particular Gopher menu at a given time. In this way, using the Gopher program is much like signing a book out of the library one page at a time--rather than tying up the pages that others may be waiting for. A well-organized Gopher server can make finding information on the Internet much easier.
Various client versions of Gopher software are available via anonymous FTP:

URL:

Using a local client is faster, but there are also a number of public Telnet login sites available:

URL: telnet://gopher@consultant.micro.umn.edu (North America)
URL: telnet://gopher@ux1.cso.uiuc.edu (North America)
URL: telnet://panda@panda.uiowa.edu (North America)
URL: telnet://gopher@gopher.msu.edu (North America)
URL: telnet://gopher@gopher.sunet.se (Europe)
URL: telnet://info@info.anu.edu.au (Australia)
URL: telnet://gopher@gopher.sunet.se (Sweden)
URL: telnet://gopher@tolten.puc.cl (South America)
URL: telnet://gopher@ecnet.ec (Ecuador)
URL: telnet://gopher@gan.ncc.go.jp (Japan)

For more information, contact the Gopher software developers:

Internet Gopher Developers
100 Union St. SE #190
Minneapolis, MN 55455

URL: mailto:gopher@boombox.micro.umn.edu

0.2.4.3.1. GopherMail (Gopher via E-mail).

Gopher is accessible via e-mail with GopherMail. To use GopherMail, send a message with help as the subject of the message to one of the following GopherMail servers (try to use a site near you).

URL: mailto:gophermail@forestry.umn.edu (USA)
URL: mailto:gophermail@calvin.edu (USA)
URL: mailto:gopher@earn.net (France)
URL: mailto:gophermail@ncc.go.jp (Japan)
URL: mailto:gopher@dsv.su.se (Sweden)
URL: mailto:gopher@earn.net (Europe)

0.2.4.3.2. VERONICA.

VERONICA (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) is to GopherSpace what Archie, a program developed by the McGill School of Computer Science, is to the Internet's anonymous FTP archives. (For more information on Archie, see The Internet Resource Guide/Directory of Directories, Section 0.1.3.) VERONICA offers a keyword search of most of the Gopher-server menu titles in the world. To try VERONICA, select it from the Other Gophers menu on the University of Minnesota's Gopher server.

0.2.4.4. World-Wide Web (WWW) Overview.

WWW is a distributed hypertext tool.
If you have ever used HyperCard on the Macintosh or the help feature on Microsoft Windows, then you have used a hypertext system. More accurately, WWW (which was developed by CERN, the European Laboratory for Particle Physics) is a hyperMEDIA program because graphics and sound--in addition to text--can be displayed. A WWW server (a computer set up to run the WWW program) is like a deck of cards--you can skip from one location to another via links. Unlike Gopher, which presents you with a series of menu items, WWW presents the user with documents. Each document, like the menus in Gopher, can contain links, which often appear as bold or italicized text. When you select a particular link, you can view documents, run other Internet programs, or connect to another WWW server. The home page for a WWW server is analogous to the main menu for a Gopher server.

To access the Web, you run a browser program that can read and retrieve documents. Mosaic is the most popular WWW browser program. The browsers can access information via/from FTP, Telnet, Usenet, Gopher, WAIS, and others. The following are some of the browsers accessible by Telnet (try to use sites near you):

URL: telnet://www@ukanaix.cc.ukans.edu (US)
URL: telnet://www@ (US)
URL: telnet://info.cern.ch (Switzerland)
URL: telnet://www@vms.huji.ac.il (Israel)
URL: telnet://sun.uakom.cs (Slovakia)
URL: telnet://info.funet.fi (Finland)

0.2.4.5. Wide Area Information Servers (WAIS) Overview.

WAIS, the Wide Area Information Servers, is a networked full-text information retrieval system developed by Thinking Machines, Apple Computer, and Dow Jones. The WAIS software distribution is available via anonymous FTP:

URL:

If you are in Europe, try the following first:

URL:

The easiest way to get started (if you do not have access to a WAIS client) is to try the WAIS at Thinking Machines:

URL: telnet://wais@quake.think.com

0.2.4.5.1. WAISmail (WAIS via E-mail).
If you do not have access to WAIS but you do have access to e-mail, you might want to try WAISmail, a WAIS via e-mail program. For more information on WAISmail, send a message with help as the subject of the message to the following address.

URL: mailto:WAISmail@Think.COM

With WAISmail, you can search WAIS sources and retrieve documents identified by your searches. Here is how the search and retrieve commands work:

search [<source-name>|"<source-name> <source-name> ..."] {keywords...}

Where <source-name> is a source name as found in the directory of servers (with or without the .src ending). If you use more than one source name and enclose them in quotes (as above), WAISmail will search both of the sources. If you try to search a nonexistent source, WAISmail will e-mail a list of sources to you.

The following are some law-related WAIS sources that you may want to try:

bit.listserv.pacs-l.src
bush-speeches.src
clinton-speeches.src
computers-freedom-and-privacy.src
cpsr.src
directory-of-servers.src
eff-talk.src
ERIC-archive.src
Eric-Digests.src
eric-digests.src
Health-Security-Act.src
INFO.src
Internet-user-glossary.src
nafta.src
NASA-directory-of-servers.src
National-Performance-Review.src
news.answers-faqs.src
npr-library.src
OSHA-Act.src
OSHA-Field-Manual.src
OSHA-Preamble.src
OSHA-Standards.src
OSHA-Tech-Manual.src
patent.src
rfcs.src
SGML.src
UNESCO-DARE-Social-Science-Institutes.src
US-Budget-1993.src
US-Congress-Phone-Fax.src
US-State-Department-Travel-Advisories.src
USHOUSE_congress_info.src
Wests-Legal-Directory.src
White-House-Papers.src
world-factbook.src
world-factbook93.src
zipcodes.src

retrieve <DOCID>

Where <DOCID> is as returned by your search.

0.2.4.6. Bulletin Board System (BBS) Overview.

There are approximately 50,000 BBSs nationwide, many of which are law-related. I have included only the essential information about these BBSs in Chapter 1, namely the phone number to call and a contact for more information.
Most of the BBSs run 24 hours per day, many charge a fee, and many are accessible at various baud rates. Your best bet is to read the introductory information carefully for each BBS.

0.2.5. Practical Uses of the Internet.

The Internet offers a unique duality for the legal professional: communication and publication.

0.2.5.1. Communication via E-mail.

Internet e-mail is nearly instantaneous, never (well, almost never) busy, and as easy as writing a letter. The recipient of an e-mail message can return (by cutting and pasting) portions of the sender's original e-mail message with his/her response to provide the necessary context that is often lost in US mail or in phone messages. The power of the Internet as a means of communication cannot be overstated. Last year, I sent about 10,000 e-mail messages, and I received about the same amount. This book was submitted via Internet e-mail. My clients, friends, and family are all on the Internet, and e-mail makes it easier for me to keep in touch with all of them.

0.2.5.2. Publication/Research via Internet Servers.

As a means of publication, the Internet can be used for advertising, research, etc. Unlike Internet e-mail, which is primarily two-way communication, Internet publication (via FTP, Gopher, and WWW servers) is primarily one-way communication--from the publisher to the Internet community. The Internet publisher (which includes anybody who chooses to make information available on the Internet) can establish an FTP server, a Gopher server, and/or a World-Wide Web server. Organizations that are not yet prepared to respond to information requests via e-mail can still maintain a significant Internet presence by establishing such servers.

On the Internet one can find primary law (cases, statutes, and treaties), secondary law (law review articles and the like), and tertiary law (discussion groups, unpublished manuscripts, and the like).
The key players in publishing law-related information on the Internet are law schools and government institutions. Since the Internet is a network of networks, with each network independently owned and operated, some of the information is easier to get than others. Ultimately, if the case, the statute, or the law review article that the Internet user seeks exists on the Internet, it exists as a file on a hard disk (or other storage medium) on a computer on a network somewhere on the Internet. It may exist in more than one location, and one location's version may be more up-to-date than another's.

0.2.6. Who Else is on the Internet?

Despite the growing popularity of the Internet as a means for communication, it has not yet achieved the same level of acceptance as the post office, the telephone, or the fax machine. While law firms regularly include postal addresses, phone numbers, and fax numbers on their business letters and business cards, few include Internet addresses. Even in the academic community, where Internet access has been more common, the Internet hasn't risen to the level of the fax machine. Of the top 40 US law schools, Case Western Reserve University is the only school whose brochure specifically lists e-mail and WWW server addresses.

0.2.7. The Future of the Internet - Not Just for Scientists Anymore.

Formerly used exclusively by government, military, and research users, the Internet is now being used by people in all lines of work. As more people get on the Internet, fewer people will be able to ignore the Internet. And as the Internet expands, there will be more legal issues (intellectual property, privacy, and First Amendment issues, to name a few) to tackle.

The Internet's ability to convey key information about a law firm, law school, or any organization is unique. As a means of communication, the Internet can supplement the phone, fax, and paper mail.
As a means of publication, the Internet provides ways to research and advertise--as well as to shop and have fun. In my opinion, letterhead, fax leaders, business cards, and e-mail signatures--at least those for organizations- -should all contain US Postal Service addresses, phone numbers, fax numbers, and Internet addresses. Internet addresses can be either e-mail addresses (for two-way communication) or Gopher and WWW server addresses (for one-way publication). Law firms should be prepared to use all of the generally accepted means of communication. Your clients may want to have options. Like the fax machine, the Internet is here to stay. -- |||| Erik J. Heels, Lawyers Cooperative Publishing heels@lcp.com |||| c/o Counterpoint Publishing |||| 84 Sherman St. Fax: (617) 547-9064 |||| Cambridge, MA 02140 Phone: 1-800-998-4515 x3112 | http://www.faqs.org/faqs/law/net-resources/part02/ | crawl-002 | refinedweb | 6,277 | 55.24 |
With some slight modifications we can breeze through exercise 8. The instructions require us to use the natural <string> header for C++ string class objects. The key point here is that using string is simply just easier. Why you ask? Because we can now use ‘!=’ and other comparative operators to make our comparisons. See my solution below:
8. Write a program that matches the description of the program in Programming Exercise 7, but use a string class object instead of an array. Include the string header file and use a relational operator to make the comparison test.
#include <iostream> #include <string> using namespace std; int main() { string input; int words = 0; string compare= "done"; cout << "Enter words (to stop, type the word done):" << endl; cin >> input; while(input != compare) { cin >> input; words++; }; cout << "You entered a total of " << words << " words " << endl; cin.get(); return 0; }
Advertisements
This seems to meet the criteria for exercise 9, chapter 5. Exercise 8 requires the use of the cstring header file and strcmp() function.
“//exercise8.cpp — chapter 5.
//Michael R.
#include
#include
int main()
{
using namespace std;
char words[200];
int wcount = 0;
cout << "Enter words, separated by spaces. Use done to stop counting words. Press enter to complete counting procedure." <> words;
}
cout << "You entered a total of " << (wcount -1) << " words.";
return 0;
}
"
The previous was my solution. If anyone knows how I can get rid of the second to last line code "(wcount -1)", please inform me.
I think you meant ch5e7 and ch5e8. Exercise 9 is totally different. I can’t tell what was left out, but we do need to be using C++ string class this time and the word done to stop reading. What I was getting at with the solution for exercise 8 is that we can now just use comparative operators on the strings.
Ahh, I think the difference is the edition of the book.
Whoah, the vital parts of my code didn’t paste well into my last reply. Sorry about that. | https://rundata.wordpress.com/2012/11/21/c-primer-chapter-5-exercise-8/ | CC-MAIN-2017-26 | refinedweb | 335 | 76.01 |
Opened 13 years ago
Closed 12 years ago
#2280 closed defect (worksforme)
Models, that have not obligatory many-to-one relationships and have no related model set, are not listed.
Description
This error occurs on Django 0.95 ('post-magic-removal').
When model A relates with model B by many-to-one relationship and that relationship is not obligatory (blank=True is set), only these A-classed objects that have related B-classed objects being set, are displayed in the Django contributed administration on the list page.
A-classed objects, that have no not obligatory related B-classed objects set, are not listed in the Django contributed administration on the list page.
Change History (4)
comment:1 Changed 13 years ago by
comment:2 Changed 13 years ago by
OK. Here is a simple example. Let's say we have models Occupation and Person, where one Person can have one Occupation, and one Occupation can be taken by many People (Persons). Our models would look like this in the code:
from django.db import models class Occupation(models.Model): title = models.CharField(maxlength=200) class Admin: pass class Person(models.Model): occupation = ForeignKey(Occupation, blank=True) name = models.CharField(maxlength=200) class Admin: pass
Then we create Occupations
- "Programmer"
- "Manager"
- "Designer"
and Persons
- "Mr. Smith" who is a "Manager"
- "Tom Anderson" who is a "Programmer"
- "Trinity", who has no Occupation.
Now if we go to, we will get only "Mr. Smith" and "Tom Anderson" listed, because they have their Occupations set. And that is an error I am reporting.
comment:3 Changed 12 years ago by
I think you need to set
null=True as well in your ForeignKey
comment:4 Changed 12 years ago by
I have confirmed, without null=True leaving occupation blank raises a database error as it should, and with null=True the Person gets added just fine and shows on the changelist page.
Please clarify | https://code.djangoproject.com/ticket/2280 | CC-MAIN-2019-13 | refinedweb | 320 | 55.34 |
QGraphicsView::NoViewportUpdate doesn't work
I have a very simple window with a
QGraphicsView, a
QGraphicsSceneinside, and a simple
QPushButton. When user clicks button, a line should be added to the scene. However, since I set
QGraphicsView::NoViewportUpdate, the line shouldn't be displayed. On the opposite, the line gets displayed.
According to the documentation, QGraphicsView will never update its viewport when the scene changes; the user is expected to control all updates. This mode disables all (potentially slow) item visibility testing in QGraphicsView, and is suitable for scenes that either require a fixed frame rate, or where the viewport is otherwise updated externally.
How do I solve this problem?
Here is the code:
mainwindow.h
#ifndef MAINWINDOW_H #define MAINWINDOW_H #include <QGraphicsScene> #include <QGraphicsView> #include <QWidget> #include <QPushButton> class MainWindow : public QWidget { Q_OBJECT public: MainWindow(QWidget *parent = 0); ~MainWindow(); private: QGraphicsView* view; QGraphicsScene* scene; QPushButton* b; public slots: void start(); }; #endif // MAINWINDOW_H
mainwindow.cpp
#include "mainwindow.h" #include <QVBoxLayout> MainWindow::MainWindow(QWidget *parent) : QWidget(parent) { scene = new QGraphicsScene(0, 0, 400, 400); view = new QGraphicsView(scene); view->setViewportUpdateMode(QGraphicsView::NoViewportUpdate); b = new QPushButton("Start"); connect (b, &QPushButton::clicked, this, &MainWindow::start); QVBoxLayout* layout = new QVBoxLayout; layout->addWidget(view); layout->addWidget(b); setLayout(layout); } MainWindow::~MainWindow() { } void MainWindow::start() { scene->addLine(0, 0, 200, 200); }
Hello,
I've tested your code (from the repository you provided in the other thread) and it the flag works as expected. You only get the view updated when a paint event is triggered from the window system (i.e. when resizing the widget). If you wish to stop the updates altogether you should filter out the events yourself (by installing an event filter for example) and issue the painting manually.
Kind regards.
@kshegunov Ok, thank you, now it's clear, I was confused since the documentation states that updates do not occur at all.
@alogim
Hello,
Actually it states that the viewport will not be updated when the scene changes, not that it will not repaint itself when it's resized. :)
QGraphicsView will never update its viewport when the scene changes;
Kind regards.
@kshegunov Actually it states that the viewport will not be updated when the scene changes, not that it will not repaint itself when it's resized. :)
Exactly: so if I change the scene by adding an element, that element shouldn't be displayed.
Instead, the element is displayed plus, if I resize the view or move the view's scrollbar, after a while it gets updated, and that's strange. It seems pretty useless...
@alogim
I've actually tested this (Debian 4.3 kernel, Qt 5.5.1) and I don't get the scene viewport updated. What I did is to comment out your call to
update()(in your source). When I start the application I see nothing. The view is updated with the current state of the scene when my mouse enters/leaves the
QGraphicsViewor when I resize/move the top level widget. If I do nothing I see nothing to change. It's possible this might be a bug (if you're using another version and it was subsequently fixed), but it works fine for me.
@kshegunov Right, right, you're right. Until you hover the view with the mouse, it doesn't get updated.
I'm on Arch, kernel 4.3.3, Qt 5.5.1.
So, if I install an eventFilter for the viewport perhaps I can compeltely prevent updates.
@alogim
Yes, you can, but then you'll need at some point to update the scene's viewport (the
QSceneView) manually and this might have some undesirable effects when you move/resize your widget(s). | https://forum.qt.io/topic/62360/qgraphicsview-noviewportupdate-doesn-t-work/5 | CC-MAIN-2018-13 | refinedweb | 608 | 55.54 |
[
]
Michael McCandless commented on LUCENE-831:
-------------------------------------------
{quote}
If we are going to allow random access, I like the idea of sticking
with the arrays. They are faster than hiding behind a method, and it
allows easier movement from the old API.
{quote}
I agree.
{quote}
It would be nice if we can
still deprecate all of that by backing it with the new impl (as done
with the old patch).
{quote}
That seems fine?
bq. The current API (from this patch) still looks fairly good to me - a given cachekey gets
your data, and knows how to construct it. You get data something like: return (byte[]) reader.getCachedData(new
ByteCacheKey(field, parser)). It could be improved, but it seems a good start to me.
Agreed.
bq. The immediate problem I see is how to handle multireader vs reader. Not being able to
treat them the same is a real pain. In the segment case, you just want an array back, in the
multi-segment perhaps an array of arrays? Or unsupported? I havn't thought of anything nice.
I would lean towards throwing UOE, and suggesting that you call
getSequentialReaders instead.
Eg with the new getUniqueTermCount() we do that.
bq. We have always been able to customize a lot of behavior with our custom sort types - I
guess the real issue is making the built in sort types customizable. So I guess we need someway
to say, use this "cachekey" for this built in type?
I don't quite follow that last sentence.
We'll have alot of customizability here, ie, if you want to change how
String is parsed to int, if you want to fully override how uninversion
works, etc. At first the core will only support uninversion as a
source of values, but once CSF is online that should be an alternate
pluggable source, presumably plugging in the same way that
customization would allow you to override uninversion.
bq. When we load the new caches in FieldComparator, can we count on those being segmentreaders?
We can Lucene wise, but not API wise right? Does that matter? I suppose its really tied in
with the multireader vs reader API.
Once getSequentialSubReaders() is called (and, recursively if needed),
then those "atomic" readers should be able to provide values. I guess
that's the contract we require of a given IndexReader impl?
> Complete overhaul of FieldCache API/Implementation
> --------------------------------------------------
>
> Key: LUCENE-831
> URL:
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Search
> Reporter: Hoss Man
> | http://mail-archives.apache.org/mod_mbox/lucene-dev/200904.mbox/%3C671034817.1239374475279.JavaMail.jira@brutus%3E | CC-MAIN-2013-20 | refinedweb | 413 | 73.27 |
Refactoring Replace Parameter with Explicit Methods
ProblemA method is split into parts, each of which is run depending on the value of a parameter.
SolutionExtract the individual parts of the method into their own methods and call them instead of the original method.
void setValue(String name, int value) { if (name.equals("height")) { height = value; return; } if (name.equals("width")) { width = value; return; } Assert.shouldNeverReachHere(); }
void setHeight(int arg) { height = arg; } void setWidth(int arg) { width = arg; }
void SetValue(string name, int value) { if (name.Equals("height")) { height = value; return; } if (name.Equals("width")) { width = value; return; } Assert.Fail(); }
void SetHeight(int arg) { height = arg; } void SetWidth(int arg) { width = arg; }
function setValue($name, $value) { if ($name == "height") { $this->height = $value; return; } if ($name == "width")) { $this->width = $value; return; } assert("Should never reach here"); }
function setHeight($arg) { $this->height = $arg; } function setWidth($arg) { $this->width = $arg; }
def output(self, type): if name == "banner" # Print the banner. # ... if name == "info" # Print the info. # ...
def outputBanner(self): # Print the banner. # ... def outputInfo(self): # Print the info. # ...
Why Refactor
A method containing parameter-dependent variants has grown massive. Non-trivial code is run in each branch and new variants are added very rarely.
Benefits
- Improves code readability. It is much easier to understand the purpose of
startEngine()than
setValue("engineEnabled", true).
When Not to Use
- Do not replace a parameter with explicit methods if a method is rarely changed and new variants are not added inside it.
How to Refactor
For each variant of the method, create a separate method. Run these methods based on the value of a parameter in the main method.
Find all places where the original method is called. In these places, place a call for one of the new parameter-dependent variants.
When no calls to the original method remain, delete it. () | https://refactoring.guru/replace-parameter-with-explicit-methods | CC-MAIN-2017-17 | refinedweb | 304 | 58.89 |
In this tutorial, you will learn about c programming while and do while loop and how they are used in programs along with examples.
While and do while loop in c programming
Sometimes while writing programs we might need to repeat same code or task again and again.
For this C provides feature of looping which allows the certain block of code to be executed repeatedly unless or until some sort of condition is satisfied even though the code appears once in the program.
C programming supports 3 types of looping:
while loop in C
The while loop repeats the block of code until some sort of condition is satisfied.
For example:
while I have money in my account
keep shopping.
In this statement condition is:
" I have money in my account " and the task is
" keep shopping ". So until the condition is true shopping will be done repeatedly and when the condition becomes false task will be stopped.
Structure of while statement
while (condition) { //block of code to be executed }
How while loops work in C then?
As shown in the above structure a condition is placed which determines how many times the code is to be repeated.
Before entering inside the
while loop the condition is checked, and if it is true the code block inside
while loop is executed and again after the operation condition is checked and the repetition of code is continued until the condition becomes false.
Following flowchart explains more accurately the concept of while loop in C programming.
Example to highlight the concept of while loop in C.
Sample Program: C program to print the sum of first 5 natural numbers
#include <stdio.h> int main () { int sum = 0, i = 1; //initialization of counter variable i while(i <= 5) //loop to be repeated 5 times { sum = sum+i; i++; //increment of countervariable } printf("sum of first 5 natural numbers = %d",sum); return 0; } //end of program
Output
sum of first 5 natural number = 15
ishould be initialized before while loop otherwise compiler will report an error and if you forget to increase/decrease the counter variable used in condition, the loop will repeat forever and there will not be any output.
do while loop in C
do..while is a variant of while loop but it is exit controlled, whereas,
while loop was entry controlled.
Exit controlled means unlike
while loop in
do..while first the code inside the loop will be executed and then the condition is checked.
In this way even if the condition is false the code inside the loop will be executed once which doesn’t happen in while.
Syntax of do while loop
do { //block of code to be executed } while (condition);
Flowchart of do while loop
Example: Do while loop
C program to print sum of first 5 natural numbers using do..while loop
#include <stdio.h> int main () { int sum = 0, i = 1; //initialization of counter variable i do { sum = sum+i; i++; //increment of counter variable }while(i <= 5); //coondition of do while printf("sum of first 5 natural numbers = %d",sum); return 0; } //end of program
Output
sum of first 5 natural numbers = 15 | http://www.trytoprogram.com/c-programming/c-programming-while-and-do-while-loop/ | CC-MAIN-2020-16 | refinedweb | 530 | 64.64 |
Django Forms are a way to accept user input as text, images, or files from the web frontend.
The straightforward example of forms that we came across was the Django admin site’s login page. Admin site took input text “username” and an input text “password” from us.
There are two ways to use forms on our website;
- One using the <form> attribute in HTML Template files
- Using the Django Forms Model class.
We will learn just the basics of HTML forms to know what they are. Our primary focus will be Django forms itself.
Creating HTML forms for Django
We can create forms in HTML itself using the <form> attribute and get information from client using <input> attribute. The syntax for a typical HTML form is given below:
<form action="</action/>" method="post"> <label for="element_id">Your name: </label> <input id="element_id" type="text" name="<name>" value="<pre-set_value>"> <input type="submit" value="OK"> </form>
Let’s understand what the above code means:
- Action: This tells HTML where to send the submitted form data. It usually contains the URL we want to send the data to
- Method=”POST” This is a particular Method that we use while sending information to the server. There is a GET method as well, which we will need in this article.
- label for: This label gives a name to identify that particular label. For eg: <label for =’fname’>First Name:</label> Here we are giving a short name fname to identify the label “First Name.”
- <input id=, type=, name=, value= >: This input attribute is the most important in the HTML form, Input specifies the form field that we will take from the client—for example, the type, name, pre-set value assigned to it, etc.
- <input type=” submit”: This input submits the Form client entered.
Instead of using the <input> attribute to create form fields, we can use Django forms, which is a much efficient way. But before that, we need to learn more about GET and POST methods.
When to use GET and when to use POST
By default, the browser uses the GET method to request resources from the server.
For example, continuing from our books model example, we automatically use GET requests in the backend to pull the books’ data. Since we aren’t modifying the list on the front-end, this method works perfectly fine.
But let’s say if we want to add a book into the model DB. Then we are basically changing the DB elements, and hence then we require the POST method. Therefore, the POST method sends some information to the server.
When we changed the information regarding a Book or were adding a Book in the Django admin site, we used the POST method.
And when we were just looking at the list of books under BookModel in the admin site, we used the GET method.
There are other HTTP methods apart from this as well, which will learn in the REST API framework article.
Leveraging Django Forms
The working of Django forms is similar to that of Django Models. We create a form class and save them in a separate forms.py file.
The only difference between models and forms is that in models, we map model fields to the database fields while in forms, we map the form fields to the HTML form <input> elements.
Another interesting fact about forms is that we display blank forms on the website and take information from the client to store it, while with models, we show the stored data from the database to the client.
Creating forms.py python file inside the Django app.
Inside the Django app, create a new python file and name it forms.py
Creating a SearchForm to search for Book from the Book_website
We will now create a simple form that will take the name of the book as input and then redirect us to that book’s website. Hence let’s get started.
1. Creating a SearchForm in forms.py
Inside forms.py first we need to import forms library.
from django.forms import forms
After that, include the following code to create a SearchForm with a book_title field.
class SearchForm(forms.Form): book_title = forms.CharField(label = "book_title",max_length =80)
The syntax is similar to that of a model including max_length.
The label here has the same function as that of the label we learned in HTML forms.
2. Creating a SearchBookView in views.py
In views.py, create a function View with the name SearchBookView.
Now there can be two ways possible:
- The client reaches the webpage using a GET method.
- This will happen when the client opens the webpage for the first time, or else wants to search for another book.
- The client reaches the webpage using a POST method.
- This will happen when the client enters the book name and then presses the submit/search button.
Therefore the View will have to tackle both these situations.
1. Code for the GET method
When the client uses the GET method, he must get a blank form to enter the book name.
Thus, in this case, Our code will simply have the code.
form = SearchForm()
Just like models, we create a new form object and will pass it on to the HTML file.
2. Code for the POST method
When the client uses the POST method, he will be redirected to the Book webpage that we created in our previous articles(books/<book_name>)
Therefore, the code to perform this task will})
Here
- form = SearchForm(request.POST) saves the information that the client entered into the created form object “form.“
- form.is_valid() checks if the information entered in the field is valid or not. i.e., e.g., whether we have entered email only in the EmailField or not.
- form.cleaned_data[‘book_title’]: This attribute of the form library automatically converts the information entered by the client into the correct python accepted format, and thus the name cleaned_data
- try and except block: This is called exceptional handling in python which you might have learned in Python Exceptional Handling
- If the Book Title that the client entered is present in the DB, then we will get the information about the book using
book = BookModel.objects.get(title = book_title)
- Otherwise, if the book does not exist, then we raise a Http404 error, which is present in the Django.shortcuts library
- And once we save the information about the book from the DB, we use
HttpResponseRedirect("<url>",<context>)
This attribute redirects the client to the URL mentioned, along with the context dictionary.
Now that we have looked into the two parts of the SearchBookView, let’s combine them to get the complete final SearchBookview
from django.shortcuts import render,HttpResponse,HttpResponseRedirect,Http404 from .models import BookModel from .forms import SearchForm def SearchBookView(request): ={ 'form':form, } return render(request, 'books_website/SearchBook.html', context)
Therefore, if the request is POST, we are redirecting the user to /books/<book_title> URL or else if the client is using GET, we are simply showing him a blank form.
Don’t forget to import HttpResponseRedirect, Http404 from django.shortcuts and searchForm from forms.py
3. Creating the SearchBook.html template file in the templates folder
Since we have created a Django form, we do not have to create Input fields again for the book_title.
We just have to add the submit button in the form, and that’s it.
So let’s just create the HTML file.
<form method ='post'> {% csrf_token %} {{form}} <input type="submit" value = "Submit"> </form>
{% csrf_token %} that is the Cross-Site Request Forgery tokens protects against csrf attacks and hence used for security purposes for the forms.
4. Creating a URL endpoint for the SearchBookView in urls.py
Now, we will create a new URL path (book/search) for the SearchBookView we created.
We have learned in Django URL mapping, how to map a View to the URL, so let us do that here again.
path('book/search', SearchBookView, name='SearchBookView'),
That’s it, Now lets run the server
python manage.py runserver
Now, if you see, most of the webpages have their search buttons on the books web page(books/) itself. To do that, we need to combine the SearchBookView and BookView.
So just cut the code from SearchBookView and paste it in BookView. Then the BookView will look like this:
def BookView(request): books = BookModel.objects ={ 'books':books, 'form':form, } return render(request,'books_website/BookView.html', context)
Try to understand the code above and see how I have modified the searchBookView to include it in here.
Now here, since we have the search form in the web page below itself, we will include the SearchBook.html inside our BookView.html.
Now as SearchBook.html is a part of BookView.html, we can just render the BookView.html template itself (at the bottom) and remove the line
render(request, 'books_website/SearchBook.html',context)
That’s it; now we don’t even require the endpoint we just created. So delete the URL path (book/search).
Load up server and open browser
Hit submit and check
Creating Forms using ModelForm
If we want to save the form data into a DB table, then we need to create a Django model for that.
Django provides a way to link the information entered by the client through the form to the Model created to save the data.
Using ModelForm, we can efficiently perform the above task without writing much code. So let’s begin
Creating a Book-review form
We will create a review form on the book (books/<book_name>) webpage so that viewers can comment about the book.
1. Creating BookReviewModel in models.py
In models.py, create a new model BookReviewModel and write the required model fields as shown in the code below.
class BookReviewModel(models.Model): name = models.CharField(max_length = 80) review = models.TextField() class Meta: ordering = ['name'] def __str__(self): return f"comment by {self.name}"
Here, we are using Textfield, since the reviews can be long enough. This model is easy to understand since we learned this in Django Models article
2. Creating a Model form in forms.py
Now in the forms.py, create a form as shown.
class ReviewForm(forms.ModelForm): class Meta: model = BookReviewModel fields =('name','review',)
Here:
- Import BookReviewModel from .models
from .models import BookReviewModel
- Then we use Meta Class (which we learned about in Django Models) to include our Django Model and also to mention the fields that we want in the form
3. Creating BookReviewView in views.py
We will write a function view similar to the one we wrote while making a Search Form.
In Views.py, create a new function view BookReviewView and add the following code.
def BookReviewView(request): if request.method == 'POST': form = ReviewForm(request.POST) if form.is_valid(): form.save() return HttpResponse('Your review has been taken') else: form = ReviewForm() context = { 'form':form, } return render(request, 'books_website/ReviewBook.html', context)
Here:
- If form is valid, then we are simply using the save attribute to store the information entered by client into the DB.
See how simple it is to save a Form entry into the DB. we will now create the ReviewBook.html template file.
4. Creating ReviewBook.html template file.
In the templates/books_website, create a new file with the name ReviewBook.html
Just as we did above,we will create a form attribute.
<form method='post'> {% csrf_token %} {{form}} <input type="submit" value = "submit"> </form>
That’s it, our HTML file is ready
5. Creating the URL path to the BookReviewView
Now we just have to create a new path to the BookReviewView.
Go to urls.py and just add
path('book/review', BookReviewView, name='BookReviewView'),
Also don’t forget to register the BookReview Model in the admins.py
admin.site.register(BookReviewModel)
That’s it guys!! Lets run the server and go to (book/review) webpage.
And then press the submit button, you will see the Thank you for your response webpage.
Now if you go to the admin site, and check inside the BookReviewModel, you will see that the form entry is saved.
Conclusion
That’s all for the Django forms tutorial! We hope you have gained all the basics of Django forms and how they are linked with HTML forms. Also, you can learn more about the Django forms from the official documentation.
Stay tuned for more advanced tutorials on Django topics! | https://www.askpython.com/django/django-forms | CC-MAIN-2021-31 | refinedweb | 2,072 | 73.47 |
Log In UI - Part 4
Illustrates how to use the timeline and states to animate UI components.
Log In UI - Part 4 is the fourth in a series of tutorials that build on each other to illustrate how to use Qt Design Studio to create a simple UI with some basic UI components, such as pages, buttons, and entry fields. Part 4 describes how to use the timeline and states to animate UI components.
In Part 3, you learned how to use states to simulate page changes in a UI and connections to provide user interaction with it. In Part 4, you will now learn another way of animating the UI by using timeline animations that you bind to states.
These instructions build on:
The Learn Qt Quick sections provide additional information about the features of QML and Qt Quick that are relevant to the task at hand.
Animating UI Components
In Part 3, you changed the visibility property in different states to simulate changing pages. To make sure that those changes won't interfere with the changes to the opacity property you will make in Part 4, you will first remove the states.
Then, you will add a timeline and insert keyframes for the opacity property to hide the password verification field and back button on the login page and the login button on the registration page. Because we want the password verification field to appear to slide down from the password field, you will also need to insert a keyframe for its anchor margin property. To be able to animate the anchor, you also need to pull out the fields from the fields column and anchor them to the page and to each other instead.
To preview the changes that you make to the UI while you make them, select the Show Live Preview button on the Form Editor toolbar or press Alt+P.
Replacing Columns with Anchors
First, you will prepare the page for adding animation:
- Open Screen01.ui.qml in Form Editor for editing.
- In the States view, select the Close button in loginState and registerState to remove the states.
- Select the fields in fieldColumn in Navigator and drag and drop them to their parent rectangle to prepare for deleting the column component.
- Select fieldColumn in Navigator and press Delete to delete it.
- Select usernameField in Navigator.
- In Properties > Layout, select the
(Top) button to anchor the top of the field to the top of its parent. Qt Design Studio will suggest an appropriate margin based on the current position of the field on the y axis, 200 pixels.
- Select the Horizontal Center button to anchor the horizontal center of the field to that of its parent.
- Select passwordField, and then select the Top button in Properties > Layout.
- In the Target field, select usernameField to anchor the top of passwordField to the bottom of usernameField with a 5-pixel margin.
- Select the Horizontal Center button to anchor the horizontal center of passwordField to that of usernameField.
- Repeat the above steps to anchor the top of verifyPasswordField to the bottom of passwordField with a 5-pixel margin and to anchor its horizontal center to that of passwordField.
- Select File > Save or press Ctrl+S to save your changes.
You could also animate the y-position property of the verify password field for a similar effect. In that case, you would need to use absolute positioning for the field. This is less flexible if you export your design from a design tool, such as Adobe Photoshop, and decide to change your design and export it again at some point. In that case, the margins would probably stay the same, even if the positions of the fields on the page would change.
Your page now should look something like this in the Design mode and live preview:
Adding a Timeline and Animation Settings
You are now ready to add the timeline. You will need two animations, one for moving into the registration page and another for returning to the login page. You can use the same animation for both cases, by running it either from the beginning to the end or from the end to the beginning.
To add a timeline with settings for running the animation:
- Select View > Views > Timeline to open the Timeline view.
- In Timeline, select the + (plus) button to add a 1000-frame timeline and settings for running the animation.
- In the Animation ID field, enter toLoginState.
- Deselect the Running in base state check box, because you want the animation to run only after the user clicks the Create Account button. You can use the default settings for the other fields.
- Select the + (plus) button next to the Animation Settings group to add settings for running the animation when the user clicks the back button.
- In the Animation ID field, enter toRegisterState.
- To run the animation backwards when the user clicks the back button, enter 1000 in the Start frame field and 0 in the End frame field.
- Select Close in the Timeline Settings view to save the timeline and the animation settings.
Next, you will record the animation in Timeline.
Inserting Keyframes
You will now insert keyframes and record property changes in Timeline:
- Select backButton in Navigator.
- In Properties > Opacity > Settings, select Insert Keyframe to insert a keyframe for the opacity property of the button.
- In Timeline, check that the playhead is in frame 0, and select the
(Per Property Recording) button for the opacity property of backButton to start recording property changes.
- In the field next to the opacity property name on that same line, type 0 to hide the button, and press Enter to save the value.
- Move the playhead to frame 1000 and change the opacity value to 1 to show the button.
To fine-tune the value of a keyframe, you can also right-click the keyframe marker and select Edit Keyframe.
- Select the record button again to stop recording property changes. If you forget this, all the following changes will be recorded, and the results will be unpredictable.
- Select verifyPasswordField in Navigator, and repeat the above steps to insert a keyframe for the opacity property of the field and to record changes for it.
- Select loginButton in Navigator, and repeat the above steps to insert a keyframe for the opacity property of the button and to record changes for it. However, this time the opacity value needs to be 1 in frame 0 and 0 in frame 1000.
- Select File > Save or press Ctrl+S to save your changes.
When you move the playhead along the timeline, you can see how the login button fades out while the verify password field and back button fade in.
You will now animate the top anchor margin of the verify password field to make it appear to slide down from the password field.
Animating Anchors
To animate the top anchor margin of the verify password field:
- Select verifyPasswordField in Navigator.
- Select Properties > Layout > Margin > Insert Keyframe to insert a keyframe for the top anchor margin of verifyPasswordField.
- In Timeline, check that the playhead is in frame 0, and select the record button for the anchors.topMargin property of verifyPasswordField.
- In the field next to the property, set a negative value for the top anchor margin, -40, to place verifyPasswordField on top of passwordField.
- Move the playhead to frame 1000 and change the top anchor margin to 5, so that, combined with the change in the opacity value, verifyPasswordField appears to slide down and settle below passwordField.
- Select the record button again to stop recording property changes.
- Select File > Save or press Ctrl+S to save your changes.
Adding Easing Curves
You will now add an easing curve to the anchor margin animation that will make the transition seem smoother:
- Click the keyframe marker for the anchors.topMargin property at frame 1000 on the timeline to select it.
- Right-click the keyframe marker to open a context menu, and select Edit Easing Curve to add an easing curve to the animation.
- In Easing Curve Editor, select easeOutSine.
- Select OK to close the editor.
When you attach easing curves to keyframes, the shape of the keyframe marker changes.
Your timeline should now look something like this:
Next, you'll create states for the login and registration pages and bind them to the animation settings.
Binding Animation to States
You will now bring back the states in the States view and bind them to the animation settings in Timeline:
- In States, select Create New State twice to add two states called loginState and registerState. You don't need to make any property changes this time, because you'll bind the states to property animations.
- In Timeline, select the
(Timeline Settings (S)) button on the toolbar (or press S) to open the Timeline Settings dialog.
- Double-click the cell in the Timeline column on the loginState row, and select timeline in the list.
- Double-click the cell in the Animation column on the loginState row, and select toRegisterState.
- Repeat these steps for the registerState row, but select toLoginState in the Animation column.
- Click Close to save the timeline settings.
In the live preview, you can now click the Create Account button to go to the registration page and the back button to return to the login page.
Learn Qt Quick - Timeline
The Qt Quick Timeline module provides QML types to use timelines and keyframes to animate component properties in UIs. Animating properties enables their values to move through intermediate values instead of immediately changing to the target value.
The Keyframe type specifies the value of a keyframe on a timeline. Qt Design Studio automatically adds keyframes between two keyframes, and sets their values evenly to create an appearance of movement or transformation.
An easing curve can be attached to the keyframe to change the appearance of the animation. For more information about easing curve types, see the documentation for easing curves.
To be able to use the functionality of Qt Quick Timeline types, Qt Design Studio adds the following import statement to the QML files where it uses the types:
import QtQuick.Timeline 1.0
All the properties and functions of the QML types from this module are available in the Design mode, and therefore it is enough to learn how to use Timeline, as described in Creating Animations.
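As a rough illustration of how these types fit together, a hand-written timeline that fades a button in might look like the following. This is a sketch, not code generated by Qt Design Studio; the ids and geometry are made up:

```qml
import QtQuick 2.12
import QtQuick.Timeline 1.0

Rectangle {
    width: 200; height: 100

    Rectangle {
        id: backButton            // stands in for the tutorial's back button
        width: 80; height: 30
        color: "gray"
    }

    Timeline {
        id: timeline
        startFrame: 0
        endFrame: 1000
        enabled: true
        animations: [
            TimelineAnimation {
                id: toRegisterState
                running: false    // matches "Running in base state" deselected
                loops: 1
                duration: 1000
                from: 0
                to: 1000
            }
        ]

        KeyframeGroup {
            target: backButton
            property: "opacity"
            Keyframe { frame: 0; value: 0 }     // hidden on the login page
            Keyframe { frame: 1000; value: 1 }  // visible on the registration page
        }
    }
}
```

Running toRegisterState moves the timeline's current frame from 0 to 1000, which interpolates the opacity between the two keyframes.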
Next Steps
For more examples about using timelines, see Examples and Tutorials.
To watch a video tutorial about creating timelines and adding keyframes, select Learn to use Qt Design Studio Part 2 in the Tutorials tab in the Welcome mode.
Files:
- loginui4/PushButton.ui.qml
- loginui4/Screen01.ui.qml
- loginui4/imports/loginui4/Constants.qml
- loginui4/imports/loginui4/qmldir
- loginui4/loginui4.qml
- loginui4/loginui4.qmlproject
Images:
Available under certain Qt licenses.
Find out more. | https://doc.qt.io/qtdesignstudio/qt-design-studio-loginui4-example.html | CC-MAIN-2021-04 | refinedweb | 1,790 | 60.85 |
MQOpenQueue
Updated: July 19, 2016
Applies To: Windows 10, Windows 7, Windows 8, Windows 8.1, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server Technical Preview, Windows Vista
The MQOpenQueue function opens a queue for sending, peeking at, retrieving, or purging messages. The function can also be used to open a sub queue. For information about sub queues see Subqueues.
lpwcsFormatName
[in] Pointer to the format name string of the queue you want to open. The string can contain a single-element format name or, in MSMQ 3.0, a multiple-element format name. Single-element format names include public, private, direct, distribution list, machine, or connector format names. Multiple-element format names can contain one or more single-element format names.
dwAccess
[in] Specifies how the application accesses the queue (peek, send, or receive). This setting cannot be changed while the queue is open.
Specify one of the following access modes:
MQ_PEEK_ACCESS
Messages can only be looked at. They cannot be removed from the queue.
MQ_SEND_ACCESS
Messages can only be sent to the queue. A subqueue cannot be opened using this access mode.
MQ_MOVE_ACCESS
Can be used only with a subqueue. Requires the user to have peek permission for the queue.
MQ_RECEIVE_ACCESS
Messages can be retrieved (read and removed) from the queue, peeked at, or purged. Whether a message is removed from the queue in a call to MQReceiveMessage depends on the dwAction parameter of this function.
See the description of the dwShareMode parameter for information on limiting who can receive messages.
Note MQ_ADMIN_ACCESS is used to access messages in a local outgoing queue, rather than the corresponding remote destination queue.
dwShareMode
[in] How the queue will be shared. Specify one of the following:
MQ_DENY_NONE
Default. The queue is available to everyone. This setting must be used if dwAccess is set to MQ_SEND_ACCESS.
MQ_DENY_RECEIVE_SHARE
Limits those who can receive messages from the queue to this process. Once a process opens a queue with this share mode and with dwAccess set to MQ_RECEIVE_ACCESS, no one else, including the process that opened the queue, can open it again to peek or receive messages (this includes attempting to open the queue with multiple threads within the same process) until the original caller closes the queue. However, inside the process, the returned queue handle can be used by several threads.
Once the queue is opened with this share mode and with dwAccess set to MQ_RECEIVE_ACCESS, the MQ_ERROR_SHARING_VIOLATION error is returned when a second attempt is made to open the queue to peek or receive messages.
phQueue
[out] Pointer to a handle to the opened queue. If MQOpenQueue fails, a NULL pointer is returned.
Note The Access modes MQ_PEEK_ACCESS, MQ_RECEIVE_ACCESS and MQ_MOVE_ACCESS are the only access modes that can be used while opening a subqueue.
MQ_OK
Indicates success.
MQ_ERROR_ACCESS_DENIED (0xC00E0025)
The access rights for opening the queue with the access mode specified by dwAccess are not allowed for the calling process.
Note A user cannot open a subqueue for MQ_MOVE_ACCESS unless the user has MQSEC_PEEK_MESSAGE permission on the queue. If the user does not have this permission, the call to MQOpenQueue fails with MQ_ERROR_ACCESS_DENIED if the queue is local. If the queue is remote, the call fails with 0x80070005 (access denied).
MQ_ERROR_ILLEGAL_FORMATNAME (0xC00E001E)
The lpwcsFormatName parameter specified an illegal format name.
MQ_ERROR_INVALID_PARAMETER (0xC00E0006)
One of the IN parameters is not valid.
MQ_ERROR_NO_DS (0xC00E0013)
A connection with the directory service cannot be established. Verify permissions for accessing the directory service.
MQ_ERROR_QUEUE_NOT_FOUND (0xC00E0003)
Message Queuing cannot find the queue. The queue may be a public queue not registered in the directory service or an Internet queue that does not exist in the MSMQ namespace.
MQ_ERROR_REMOTE_MACHINE_NOT_AVAILABLE (0xC00E0069)
The remote computer that hosts the queue being opened for reading messages is not available.
MQ_ERROR_SERVICE_NOT_AVAILABLE (0xC00E000B)
The Message Queuing service is not available.
MQ_ERROR_SHARING_VIOLATION (0xC00E0009)
Another process already opened this queue with dwShareMode set to MQ_DENY_RECEIVE_SHARE, or another process has already opened the queue for receive so you can't specify MQ_DENY_RECEIVE_SHARE.
MQ_ERROR_UNSUPPORTED_ACCESS_MODE (0xC00E0045)
The access mode parameter (dwAccess) is set to an invalid value, or dwAccess is set to MQ_SEND_ACCESS and the share mode parameter (dwShareMode) is set to MQ_DENY_RECEIVE_SHARE.
MQ_ERROR_UNSUPPORTED_FORMATNAME_OPERATION (0xC00E0020)
The format name specified in the lpwcsFormatName parameter cannot be used.
Direct format names cannot be used if dwAccess is set to MQ_PEEK_ACCESS or MQ_RECEIVE_ACCESS. See the following Remarks section for details.
Format names that reference journal, dead-letter, or connector queues cannot be used if dwAccess is set to MQ_SEND_ACCESS.
The MQOpenQueue function can be used to open queues for sending or reading messages. When opening queues to send messages, the application can specify a single queue or several queues. When opening a queue to read messages, the application can specify only one queue.
To open multiple queues, the application can specify a distribution list format name or multiple-element format name in the lpwcsFormatName parameter.
The main difference between distribution lists and multiple element format names is that distribution lists are public lists that are published in Active Directory Domain Services (AD DS) and multiple-element format names are private lists that are created and maintained at the application level.
For information on distribution lists and multiple-element format names, see Multiple-Destination Messaging. To read messages from a queue on a remote computer, there must be a direct connection between the two computers.
If the format name of the queue is unknown, see Obtaining Format Names.
For MSMQ 1.0 prior to Windows NT 4.0 SP6, you cannot use a direct format name to open a queue to read messages, and direct format names can only be used if dwAccess is set to MQ_SEND_ACCESS.
In all later versions of MSMQ, direct format names can be used for any queue when sending or receiving messages.
A direct format name prevents Message Queuing from using the directory service (for remote public queues) or the local computer (for private queues) to obtain routing information. When a direct format name is used to send a message, all routing information is derived from the format name, and Message Queuing sends the message to the queue in a single hop. Nevertheless, in any call to open a local public queue, Message Queuing always attempts to contact the directory service.
When an application opens a local outgoing queue to retrieve, peek at, or purge messages, the format name set in the lpwcsFormatName parameter must be exactly the same as the format name used to send the messages. When the access rights for the access mode requested are not allowed for the calling application, the following two things can happen:
If dwAccess is set to MQ_SEND_ACCESS, MQOpenQueue will succeed, but errors will be returned when the application tries to send a message.
If dwAccess is set to MQ_PEEK_ACCESS or MQ_RECEIVE_ACCESS, MQOpenQueue will fail and return MQ_ERROR_ACCESS_DENIED (0xC00E0025). In this case a queue handle is not returned to phQueue.
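As a minimal sketch, a C caller might open a queue for sending with a direct format name like this. This is Windows-only code that requires Mq.h and Mqrt.lib, and the machine and queue names are hypothetical:

```c
#include <windows.h>
#include <mq.h>

/* Open a private queue for sending via a direct format name, then close it.
   "MyMachine\private$\MyQueue" is a made-up queue path. */
int OpenForSend(void)
{
    QUEUEHANDLE hQueue = NULL;
    HRESULT hr = MQOpenQueue(
        L"DIRECT=OS:MyMachine\\private$\\MyQueue",  /* lpwcsFormatName */
        MQ_SEND_ACCESS,                             /* dwAccess */
        MQ_DENY_NONE,                               /* dwShareMode: required with send access */
        &hQueue);                                   /* phQueue */
    if (FAILED(hr))
        return -1;  /* e.g. MQ_ERROR_QUEUE_NOT_FOUND or MQ_ERROR_ACCESS_DENIED */

    /* ... send messages with MQSendMessage(hQueue, ...) here ... */

    MQCloseQueue(hQueue);
    return 0;
}
```

Note that errors such as insufficient send permissions surface later, when the application tries to send, as described above.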
To change the access rights of the queue, call MQSetQueueSecurity. The following table lists the access rights needed to open the queue in peek, send, or receive access mode.
There is no provision to change the access mode of the queue when it is open. Either close and open the queue with the desired access mode, or open a second instance of the queue.
You cannot open a queue journal, computer journal, dead-letter queue or connector queue with dwAccess set to MQ_SEND_ACCESS. These queues are system queues and can be opened only with dwAccess set to MQ_PEEK_ACCESS or MQ_RECEIVE_ACCESS.
Foreign queues cannot be opened using a direct format name. Message Queuing needs the routing information stored in the directory service to find the appropriate connector server for the foreign queue.
Setting dwShareMode to MQ_DENY_RECEIVE_SHARE indicates that until the calling application calls MQCloseQueue, no other Message Queuing applications can open a queue with receive access. This includes applications that may be allowed the correct access rights to read messages from the queue.
For Windows NT, Windows® 2000, and newer versions of Windows, a queue handle is always inherited by a child process. If a child process is created by the process that opened the queue, the queue handle is inherited by the child process.
For Windows 95 (with IE302 or later installed), a queue handle is not inherited by a child process.
The following internal private queues are used by Message Queuing and cannot be opened by applications:
admin_queue$
order_queue$
notify_queue$
Equivalent COM Method
When using COM components, you can open a queue (create an open instance of the queue) for sending, peeking at, retrieving, or purging the messages in it by calling the MSMQQueueInfo.Open method.
The following code examples are included in Using Message Queuing.
Windows NT/2000/XP: Included in Windows NT 4.0 SP3 and later.
Windows 95/98/Me: Included in Windows 95 and later.
Header: Declared in Mq.h.
Library: Use Mqrt.lib.
Message Queuing Functions
MQCloseQueue
MQReceiveMessage
MQSetQueueSecurity | https://msdn.microsoft.com/en-us/library/ms699817(v=vs.85).aspx | CC-MAIN-2018-34 | refinedweb | 1,476 | 55.13 |
#include "async_fetch_with_lock.h"
AsyncFetch object which tries to acquire a lock before fetching content. Start() returns false if it fails to acquire the lock. Note that acquiring a lock fails if the same resource is being fetched somewhere else. The caller calls Start(), which tries to acquire a lock and internally calls StartFetch(), which actually triggers the fetch.

Sequence of events:
1) The caller calls AsyncFetchWithLock::Start().
2) Start() tries to acquire the lock. If the lock is acquired successfully, AsyncFetchWithLock::StartFetch() is called; otherwise AsyncFetchWithLock::Finalize() is called with lock_failure as true and success as false, Start() returns false, and the async_fetch_with_lock object is deleted. Note: on lock failure, StartFetch() is called only if ShouldYieldToRedundantFetchInProgress() returns false.
3) The subclass defines the StartFetch() function, which actually triggers UrlAsyncFetcher::Fetch().
4) The subclass can override HandleHeadersComplete(), HandleWrite(), HandleFlush() and HandleDone() for special handling during the fetch. HandleDone() also releases the lock. Note: if any of these functions is overridden, the corresponding AsyncFetchWithLock::HandleXXX should also be called.
5) Lastly, AsyncFetchWithLock::Finalize() is called just before async_fetch deletes itself.
Finalize is called either when we fail to acquire a lock or at the end of the request after releasing the lock.
Releases the lock. If a subclass overrides this function, it should also call AsyncFetchWithLock::HandleDone().
Implements net_instaweb::AsyncFetch.
HandleHeadersComplete(), HandleWrite() and HandleFlush() are no-op functions; any special handling can be done in a subclass, which must call the superclass function before returning.
Implements net_instaweb::AsyncFetch.
If someone is already fetching this resource, should we yield to them and try again later? If so, return true. Otherwise, if we must fetch the resource regardless, return false.
This will first try to acquire lock and triggers fetch by calling StartFetch() if successful. calls Finalize(true, false), if it fails to acquire lock, and deletes this.
StartFetch() will be called after the lock is acquired. The subclass implements this function and is responsible for UrlAsyncFetcher::Fetch(). | http://modpagespeed.com/psol/classnet__instaweb_1_1AsyncFetchWithLock.html | CC-MAIN-2017-09 | refinedweb | 323 | 58.38 |
19 December 2011 05:27 [Source: ICIS news]
SINGAPORE (ICIS)--
The companies started building the first phase of the project on 28 November, according to the source, who added that the total investment for the project is yuan (CNY) 2bn ($315m).
The project has three phases of construction. The first phase comprises an 80,000 tonne/year fatty alcohol plant and a 120,000 tonne/year fatty alcohol-polyoxyethylene ether plant, to be brought on stream at the end of 2013, according to the source.
The second phase comprises a 120,000 tonne/year fatty alcohol polyoxyethylene ether sodium sulfate plant, which will be brought on stream at the end of 2014, the source added.
The third phase comprises an 80,000 tonne/year fatty alcohol plant, a 120,000 tonne/year fatty alcohol-polyoxyethylene ether plant and a 120,000 tonne/year fatty alcohol polyoxyethylene ether sodium sulfate plant, which will be started up at the end of 2016, the source said.
Upon completion of the project, Taixing city will be
Non-ionic surfactants, which China imports heavily, are widely used in the manufacture of textiles, paper, food and medicine.
shimona
- 99% Jobs Completed
- 100% Within Budget
- 100% On Time
- 25% Rehire Rate
Portfolio
Recent Reviews
PHP: online CSV to DB upload + simple search form $90.00 USD
“She has been excellent in understanding our needs and situation. She knows how to make it smooth, she is friendly and intelligent. Don't hesitate to hire her, whether it's a programming job or a writing job. Hope we'll work again soon Shimona.”ruiLouis
7 years ago
import script php MySql $45.00 USD
“It`s very easy to work with shimona. Good communication. Did what I expected on time and on budget. Shimona did extra work to help me solve problems, even after the job was done. Even this was a small job, it shows very good knowledge and I can highly recommend shimona. We will use shimona again for sure.”finng
7 years ago
Shimona-16-Dec-2009 $150.00 USD
“Another excellently completed project! I am thrilled to have worked on so many projects together until now. No disappointments and no flaws. Perfectly done! Thank you!”Ion Gabriel T.
Dec 22, 2009
Shimona - 09-dec-2009 $75.00 USD
“Wonderful collaboration! I appreciate this writer very much and recommend her writing skills to everyone! I am confident her work will be very appreciated by anyone who hires her to write good, quality articles.”Ion Gabriel T.
Dec 15, 2009
Shimona -3-Dec-2009 $95.00 USD
“I love to work with Shimona! She has great writting skills and she is very understanding. The communication has always been very good and the articles have always impressed me. I am really pleased with her!”Ion Gabriel T.
Dec 10, 2009
Shimona-14-Nov-2009 $80.00 USD
“Super! Thrilled again with what she wrote and very happy with her punctuality! She can be sure I will come back for more collaborations!”Ion Gabriel T.
Dec 9, 2009
Verifications
- Facebook Connected—
- Preferred Freelancer—
- Payment Verified—
- Phone Verified
- Identity Verified—
My Top Skills
- Copywriting 59
- SEO 4
- PHP 3
- MySQL 3
- Article Writing 1 | https://www.dk.freelancer.com/u/shimona | CC-MAIN-2018-51 | refinedweb | 342 | 70.29 |
Automating the world one-liner at a time.
In my previous post, I showed you how to create “Hello World” scripts using Windows Presentation Foundation (WPF) and Windows PowerShell.
While “Hello World” is relatively easy to write with WPF, it is only the tip of the iceberg of the types of quick user interfaces you can write.
Windows Presentation Foundation provides an amazing array of controls for an incredible array of purposes. Out of the box, with just one control, you can:
· Show a listbox
· Play a video or audio file
· Capture user drawings
· Display an image
· Draw a complex polygon
· Display a slider
All controls can use video, images, gradients, and rich colors as their background or foreground. All controls can interact with Tablet PC input, Keyboard, and Mouse. All windows can be transparent.
In the vastness of WPF, there is no Get-Command or Get-Help to help you discover what you can do with WPF. The existence of Get-Command and Get-Help is one of my favorite things about PowerShell, because it helps close what I call the Discoverability Gap. The Discoverability Gap is the difficulty a scripter or developer has in determining what solutions exist for a problem.
While PowerShell has an elegant solution to the Discoverability Gap, there have been many good attempts in the past. .NET's is reflection. In this post I'll give you a couple of functions that help close the Discoverability Gap for .NET, and then show you how to find examples on MSDN.
You’ll need three functions for the fun. They’re all one liners.
# Returns all of the .NET types currently loaded by PowerShell
function Get-Type() { [AppDomain]::CurrentDomain.GetAssemblies() | % { $_.GetTypes() }}
# Opens a webpage to connect to look up information about the Type on MSDN (e.g )
function Get-MSDNInfo([Type]$t) { (New-Object –com Shell.Application).Open(“” ) }
# Create a new instance of an object and displays member info with Out-GridView, so you can search the information to find a property that might do what you want
function Show-ClassInfo([Type]$t) { Get-Member –input (New-Object $t.FullName $args) | Out-Gridview}
With these commands, I can close a of the discoverability gap for WFP & all of .NET, much more quickly.
Now let's walk through how we use these commands to find out what else is in WPF, and show some more quick WPF & PowerShell samples.
WPF is in the System.Windows namespace and subnamespaces, and all controls are inherited from [Windows.Controls.Control], so you can quickly find all of the loaded controls with this one liner:
Get-Type | Where-Object { $_.IsSubclassOf([Windows.Controls.Control])}
First, let’s find a label, so we can change the font size of the Hello World.
Get-Type | Where-Object { $_.IsSubclassOf([Windows.Controls.Control])} | Where-Object {$_.Name –eq “Label”} | Select FullName
FullName
--------
System.Windows.Controls.Label
Now, let’s go open it in MSDN and create a gridview containing a label:
Get-MSDNInfo System.Windows.Controls.Label
Show-ClassInfo System.Windows.Controls.Label
The MSDN page gives you the details on everything that is just applicable to the label, but Get-Member gives you every property, method, and event the control has.
A quick scrolling down this list will give you an idea of just how big the iceberg is. The label alone has 246 methods, properties, and events.
Luckily for us, Out-Gridview has a search window. Let’s use it to find the properties named Size.
There’s
An Event, SizeChanged
A method, Measure, which takes a Size type
A DesiredSize Property
A FontSize Property
A RenderSize Property
Obviously, FontSize is the one we want to use to make our Hello World a little larger and easier to read.
Now our HelloWorld is:
$window = New-Object Windows.Window
$window.Title = “Hello World”
$label = New-Object Windows.Controls.Label
$label.Content, $label.FontSize = “Hello World”, 24
$window.Content = $label
$window.SizeToContent = “WidthAndHeight”
$null = $window.ShowDialog()
Let’s take a quick tour of some of the other really simple things we can do with WPF:
Create a Circle of a Random Size:
$window = New-Object Windows.Window
$color = ("Red", "Green", "Blue", "Yellow" | Get-Random)
$window.Title = “See The Big $color Ball”
$circle = New-Object Windows.Shapes.Ellipse
$circle.Width = $circle.Height = Get-Random –min 200 –max 450
$circle.Fill = $color
$window.Content = $circle
$window.SizeToContent = “WidthAndHeight”
$null = $window.ShowDialog()
Create an Ink Canvas the user can scribble on with the mouse or stylus
$window = New-Object Windows.Window
$window.Title = “Scribble on Me”
$inkCanvas = New-Object Windows.Controls.InkCanvas
$inkCanvas.MinWidth = $inkCanvas.MinHeight = 100
$window.Content = $inkCanvas
$window.SizeToContent = “WidthAndHeight”
$null = $window.ShowDialog()
Show a slider, and get the value the slider was at after running:
$window = new-object Windows.Window
$slider = New-Object Windows.Controls.Slider
$slider.Maximum = 10
$slider.Minimum = 0
$window.Content = $slider
$window.SizeToContent = "WidthAndHeight"
$null = $window.ShowDialog()
$slider.Value
Show a label and textbox, and emit the value the textbox contained:
$window = New-Object Windows.Window
$stackPanel = New-Object Windows.Controls.StackPanel
$label = New-Object Windows.Controls.Label
$label.Content = "Type Something"
$text = New-Object Windows.Controls.TextBox
$stackPanel.Children.Add($label)
$stackPanel.Children.Add($text)
$window.Content = $stackPanel
$window.SizeToContent = "WidthAndHeight"
$null = $window.ShowDialog()
$text.Text
This post should give you a better sample of what WPF Contains, and how to close the Discoverability Gap and learn how to script more. Stay tuned to see more interactive WPF.
Hope this Helps,
James Brundage [MSFT]
As an alternative to creating a COM object to open the URL, I use this function to create a .url file and then pass that to Invoke-Item cmdlet.
New-UrlFile code listing:
I love your series so far! In fact, I loved it so much I added a -STA switch to PowerShell Plus. So now you can use PS+ with full intellisense and debugging for WPF scripts as well. -STA support was introduced in version 1.0.4.5 which you can get here if you like:.
This version is not yet made public except for the link above. Enjoy!
There's a simple yet powerful function that nearly everyone on the PowerShell team has written a version | http://blogs.msdn.com/powershell/archive/2008/05/23/wpf-powershell-part-2-exploring-wpf-and-the-rest-of-net-with-scripts.aspx | crawl-002 | refinedweb | 1,032 | 59.4 |
RAII wrapper on stdio.h FILE pointers (use a derived class though). More...
#include <l_stdio_wrap.h>
RAII wrapper on stdio.h FILE pointers (use a derived class though).
If you have the misfortune of needing to use stdio.h routines for file IO, consider using one of the derived classes of this class as an RAII wrapper on the pointer object. When done correctly, it means never having to call the close function yourself, because the dtor will do it for you.
This is a base class not meant to be instantiated directly. Instead, use one of the following derived classes (list not necessarily exhaustive):
ctor needs the pathname and a mode; mode is set by derived class
default ctor wraps a null pointer (to set up a later swap maybe)
close file before destruction; rarely needed; safe to do twice.
Reimplemented in kjb::Temporary_File.
transparently use this object wherever a FILE* is used!
swap the contents of two wrappers
this is what we do when things go terribly wrong | http://kobus.ca/research/resources/doc/doxygen/classkjb_1_1File__Ptr.html | CC-MAIN-2022-21 | refinedweb | 170 | 74.39 |
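The RAII idea this class hierarchy implements can be sketched with a small stand-alone class. This is an illustrative sketch, not the kjb API: the destructor closes the file so callers never call fclose() themselves, close() is safe to call twice, and the conversion operator lets the wrapper be used wherever a FILE* is expected:

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>
#include <utility>

// Minimal RAII wrapper over std::FILE* (illustrative, not kjb::File_Ptr).
class FilePtr {
public:
    FilePtr() : fp_(nullptr) {}                      // wraps a null pointer
    FilePtr(const std::string& path, const char* mode)
        : fp_(std::fopen(path.c_str(), mode)) {
        if (!fp_) throw std::runtime_error("cannot open " + path);
    }
    ~FilePtr() { close(); }

    FilePtr(const FilePtr&) = delete;                // forbid copies: no double-close
    FilePtr& operator=(const FilePtr&) = delete;

    void close() {                                   // rarely needed; safe to do twice
        if (fp_) { std::fclose(fp_); fp_ = nullptr; }
    }
    void swap(FilePtr& other) { std::swap(fp_, other.fp_); }

    operator std::FILE*() const { return fp_; }      // use wherever a FILE* is used
private:
    std::FILE* fp_;
};
```

A derived class per mode (read, write, append) would simply forward a fixed mode string to this constructor.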
The video, notebooks, spreadsheets, and links are available here.
Lesson 4 discussion
Embedding - In lesson4, under the Dot Product section, Embedding is called to try to create an embedding layer.
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-5))(user_in)
It gives me a NameError and I am not sure how to fix it. I don’t know if it is a setup problem on my side (I set up my own server) or where I should be looking to fix this problem. I did a %whos and Embedding is not in my environment. Any suggestions on where I should go from here …
-=-=-
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-5))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-5))(movie_in)
NameError Traceback (most recent call last)
in ()
1 user_in = Input(shape=(1,), dtype=‘int64’, name=‘user_in’)
----> 2 u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-5))(user_in)
3 movie_in = Input(shape=(1,), dtype=‘int64’, name=‘movie_in’)
4 m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-5))(movie_in)
NameError: name ‘Embedding’ is not defined
-=-=-
%whos
Variable Type Data/Info
Adam type <class 'keras.optimizers.Adam'>
BatchNormalization type <class 'keras.layers.norm<…>tion.BatchNormalization'>
Convolution2D type <class 'keras.layers.conv<…>olutional.Convolution2D'>
Dense type <class 'keras.layers.core.Dense'>
Dropout type <class 'keras.layers.core.Dropout'>
Flatten type <class 'keras.layers.core.Flatten'>
GlobalAveragePooling2D type <class 'keras.layers.pool<…>.GlobalAveragePooling2D'>
Image module <module 'PIL.Image' from <…>-packages/PIL/Image.pyc'>
Input function <function Input at 0x7f7aba5652a8>
K module <module 'keras.backend' f<…>as/backend/__init__.pyc'>
.
.
.
Thanks for the question. According to the docs, the full name is “keras.layers.embeddings.Embedding”. Therefore you can either refer to it by its full name everywhere you use it (which you probably don’t want to do!), or you can add to the top of your notebook:
from keras.layers.embeddings import Embedding
Or you can add that line to utils.py and reload it.
Here’s some more information about how python handles this.
Thanks, Jeremy. That worked. Glad it's not a sign of something inherently wrong with my setup.
Lesson 4 reminded me of a “Factor analysis” I saw in a recent paper:
Source:
Is this not very similar to the Koren et al chart @rachel posted in Slack?
Somewhat similar, yes. Factor analysis / principal components analysis are linear methods that create a lower dimensional matrix that attempts to capture the variance in the original matrix, just like our model did. If you want to read more, search for ‘PCA’ and ‘factor analysis’ for the classic methods like you show, and ‘probabilistic matrix factorization’ for the approach we used in class.
I tried the lesson 4 notebook using my own data:
30,746 total features taken from my database of blood tests, urinary organic acids and hormone metabolites. 447 unique users and 208 unique markers. I normalised everything using
sklearn.preprocessing.scale.
I reduced the number of latent factors to 5 and the rest of the notebook is as you presented it.
val_loss: 0.9314
I’m very excited about the idea of being able to describe my users with just five latent factors. I’m even more excited about scatter plotting the markers using their top latent factors because I think the plot will make physiological sense in the same way as the scatter plot for movies makes sense.
My question is, where did the latent factors in the Keras model go?
model.fit([trn.userId, trn.marker], trn.result, batch_size=64, nb_epoch=10,
          validation_data=([val.userId, val.marker], val.result))
trn.result.shape
(24466,)
The users and markers were concatenated to form the input, do I need to split them back up to get my latent factors?
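To the question above: the latent factors never end up in the concatenated input, so there is nothing to split back out. In Keras they are the learned weights of the two Embedding layers, retrievable with each layer's get_weights(). A from-scratch sketch of the same dot-product idea (pure Python, not the course's Keras code; sizes and ratings are made up):

```python
import random

random.seed(0)

# Toy sizes and ratings, invented for the sketch.
n_users, n_items, n_factors = 4, 3, 2
ratings = {(0, 0): 1.0, (0, 1): 0.2, (1, 0): 0.9, (2, 2): 0.4, (3, 1): 0.8}

# Two embedding tables: one row of latent factors per user / per item.
U = [[random.uniform(-0.1, 0.1) for _ in range(n_factors)] for _ in range(n_users)]
V = [[random.uniform(-0.1, 0.1) for _ in range(n_factors)] for _ in range(n_items)]

def predict(user, item):
    # The model's output is just the dot product of the two factor rows.
    return sum(U[user][f] * V[item][f] for f in range(n_factors))

lr = 0.1
for epoch in range(3000):
    for (user, item), y in ratings.items():
        err = predict(user, item) - y          # gradient of squared error, up to 2x
        for f in range(n_factors):
            u_f, v_f = U[user][f], V[item][f]  # use stale values for both updates
            U[user][f] -= lr * err * v_f       # SGD step on the user factors
            V[item][f] -= lr * err * u_f       # SGD step on the item factors

# The latent factors are simply the rows of U and V; in the Keras version
# they are what each Embedding layer's get_weights() returns.
print(round(predict(0, 0), 2))  # close to the true rating 1.0
```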
Did you notice who he is? “Chief Architect at Elsevier, the world’s leading scientific publisher.”
@jeremy By the method of collaborative filtering, we need to predict the missing ratings, right?
In the spreadsheet case: MovieId: 49, UserId: 212
result is 0.0 in the spreadsheet. Are we not supposed to predict this unknown rating? Am I missing anything here?
At training time (which is shown in the spreadsheets) we predict the rating for those which are labeled, so that we can compare to the true labels and calculate the value of the loss function. So we don’t predict the rating you mention, since we don’t have a label for it in the training set.
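That comparison can be written out in miniature; mean squared error is assumed here, since that is the loss the lesson's model optimises:

```python
def mse(y_true, y_pred):
    # Average squared difference between true labels and predictions.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Two labeled ratings and their predictions:
print(mse([5.0, 3.0], [4.5, 3.5]))  # 0.25
```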
Just curious, how do we predict the MovieId: 49, UserId: 212 value at test time?
At this point: did @jeremy by any chance mean sigmoid is the activation function as opposed to loss function?
@jeremy I was able to solve the book-crossing dataset recommendation using a similar architecture to the one taught in class.
Below are results on a sample of 60k data points. It still needs tuning, since the validation loss is increasing…
Train on 48036 samples, validate on 11962 samples
Epoch 1/5
48036/48036 [==============================] - 34s - loss: 14.5920 - val_loss: 12.4964
Epoch 2/5
48036/48036 [==============================] - 46s - loss: 8.5997 - val_loss: 13.8392
Epoch 3/5
48036/48036 [==============================] - 78s - loss: 5.2262 - val_loss: 14.6633
Epoch 4/5
48036/48036 [==============================] - 78s - loss: 4.3011 - val_loss: 14.7733
Epoch 5/5
48036/48036 [==============================] - 79s - loss: 3.8658 - val_loss: 14.7772
Hi All,
My buddy pointed me to this course a couple weeks ago, and I’m now totally obsessed with deep learning & neural networks. Thanks to everybody working on this course!
Anyway, I’m working through the homework, and I have a practical question about the embeddings: once a model is trained and set up, how do you add new users and new movies to the database?
Since the embeddings take the number of users and number of movies as inputs (and create parameters accordingly), it seems the model is locked down to those movies and users.
Are there clever ways to modify the layers to include extra movies and users, or do you need to retrain from scratch every time you add new data?
Thanks,
-Caleb | http://forums.fast.ai/t/lesson-4-discussion/210 | CC-MAIN-2018-13 | refinedweb | 1,029 | 65.52 |
The Lodge is members-only design/dev videos and Office Hours.
innerHTML is a native property that returns the contents of a DOM node (e.g.
<span>I live inside a div.</span>). outerHTML, which would include the current DOM node itself (e.g.
<div><span>I live inside a div.</span></div>), is not supported natively everywhere. This is a chainable jQuery version of doing that.
$.fn.outerHTML = function(){
  // IE, Chrome & Safari will comply with the non-standard outerHTML,
  // all others (FF) will have a fall-back for cloning
  return (!this.length) ? this : (this[0].outerHTML || (
    function(el){
      var div = document.createElement('div');
      div.appendChild(el.cloneNode(true));
      var contents = div.innerHTML;
      div = null;
      return contents;
    })(this[0]));
}
Am I missing something? It seems like this is unnecessarily overcomplicating things; why couldn't it be written like: ?
The snippet on this page returns a chain-able jQuery object.
That’s not true, the only time this would return a chainable jquery object is if there are no elements in the set:
return (!this.length) ? this : (this[0].outerHTML || (function(){ ... })());
if the jQuery set is empty (length 0) then it returns "this", which is the jQuery object that is chainable. Otherwise it tries to return the native "outerHTML" property. If the native "outerHTML" property is null, then it calls the function, which also returns a string, not a jQuery object.
This snippet also doesn't perform this on each element of the set; it only performs it on the first element ("this[0]"). However, that makes sense: it would be really weird to return an array of strings of HTML, which would not be jQuery objects.
what is the el used here? lost…
It’s “this[0]”. More about this | https://css-tricks.com/snippets/jquery/outerhtml-jquery-plugin/ | CC-MAIN-2015-48 | refinedweb | 309 | 66.84 |
Finding Lyrics. Here’s the code:
from System import Console
import urllib
from optparse import OptionParser
print "Starting"
parser = OptionParser()
parser.add_option("-i", "--user_id",
action="store", type="string", dest="user_id",
help="The user id for the Lyrics Fly service")
parser.add_option("-a", "--artist",
action="store", type="string", dest="artist",
help="Artist name")
parser.add_option("-t", "--title",
action="store", type="string", dest="title",
help="Song title")
(options, args) = parser.parse_args()
print "Parsed options"
if (options.user_id):
user_id = options.user_id
if (options.artist):
artist = options.artist
if (options.title):
title = options.title
print "Getting Lyrics for " + artist + " - " + title
query = urllib.urlencode([("i", user_id), ("a", artist), ("t", title)])
url = "?" + query
print url
data = urllib.urlopen(url)
print data.read()
print "Press any key to exit.."
Console.ReadKey()
It looks like Console.ReadKey() is the single line which wouldn’t work in CPython.
This would be:
import msvcrt
msvcrt.getch()
for windows. It’s a bit more tricky to have it on other platforms (tty functions needed), but doable. Alternatively:
print "Press Enter to continue..."
raw_input() # :D
Konrad
June 9, 2010 at 2:40 pm
[Solved] QJsonDocument parsing issue Qt 5.3 vs. Qt 5.4 on OSX using C++11
Hi there,
I have problems parsing json since I upgraded to Qt 5.4.
Here is an example:
@#include <QCoreApplication>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonArray>
#include <QDebug>
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
    char jsString[] { "{\"results\":[{\"id\":1,\"title\":\"Test1\"},{\"id\":2,\"title\":\""
                      "Test2\"},{\"id\":3,\"title\":\"Test3\"},{\"id\":4,\"title\":\"Test4\"}]}" };
    QJsonParseError *error { nullptr };

    // parse bytes to json
    QJsonDocument doc { QJsonDocument::fromJson(jsString, error) };

    if (error) {
        qDebug() << "error parsing json:" << error->errorString();
    } else {
        QJsonObject rootObj { doc.object() };
        QJsonArray results { rootObj.value("results").toArray() };
        qDebug() << "results.count:" << results.count();
        for (QJsonValue v : results) {
            qDebug() << "v:" << v.toObject().value("title").toString();
        }
    }

    return a.exec();
}@
If I run this using Qt 5.3 all is fine. The output is:
@results.count: 4
v: "Test1"
v: "Test2"
v: "Test3"
v: "Test4"@
If I run this using Qt 5.4 I get this:
@results.count: 1
v: ""@
I run this on Mac OS X Yosemite 64-Bit with the clang compiler.
Has anyone an idea whats wrong?
Cheers,
Manromen
Hi and welcome to devnet,
Can't really comment on this, the only thing I can say is that it works fine when not using C++11
Hi,
thank you for your Response.
I replaced
@QJsonArray results { rootObj.value("results").toArray() };@
with:
@QJsonArray results = rootObj.value("results").toArray();@
So now it works ....
Sill, it's a bit strange that it worked with Qt 5.3 and not 5.4
Did you change anything else in between ? (e.g. compiler)
I just switched between the Kits:
Desktop Qt 5.3 clang 64bit
and
Desktop Qt 5.4.0 clang 64bit
Both Kits use the Clang (x86 64bit in /usr/bin).
Only difference I can see is that the Qt 5.3 Kit has no Debugger.
Then I'd recommend checking the "bug report system": to see if it's something known. If not please consider opening a new report providing a minimal compilable example to reproduce the behavior
- JKSH Moderators
The Qt 5.4 behaviour is the correct, documented one.
rootObj.value("results").toArray() returns a QJsonArray containing 4 elements. However, by using the initializer-list, this array is first converted into a single QJsonValue. Then, this single QJsonValue is stored in the outer array. That's why the outer array reports having 1 element.
Do this and you will see that you have a 2D array (more precisely, it's an array-within-an-array):
@
QJsonArray results{ rootObj.value("results").toArray() };
qDebug() << results;
@
Do this and you will see that you have a 1D array:
@
QJsonArray results = rootObj.value("results").toArray();
qDebug() << results;
@
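JKSH's wrap-vs-copy distinction has a rough analogue in Python list terms (illustrative only; the real mechanics here are C++ list-initialization, not Python):

```python
inner = [1, 2, 3]

wrapped = [inner]     # like QJsonArray results{ ... }: one element, itself an array
copied = list(inner)  # like QJsonArray results = ...: the same three elements

print(len(wrapped), len(copied))  # 1 3
```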
Hi JKSH,
makes absolutely sense. Thank you very much for this great explanation!
Cheers,
Manromen
- JKSH Moderators
You're welcome :)
Okay… Just saw that my offline doc was from the wrong Qt version :D
Thanks for the explanation JKSH | https://forum.qt.io/topic/50282/solved-qjsondocument-parsing-issue-qt-5-3-vs-qt-5-4-on-osx-using-c-11/5 | CC-MAIN-2018-22 | refinedweb | 488 | 52.05 |
Ok, this warrants some explanation....
I use Windows to program, and up until a few months ago, I always used the Microsoft Visual C++ IDE (2010, mainly).
But recently I wanted to start developing projects with multiple platforms in mind (desktops only, no mobile), and also to experiment with using multiple compilers on the same project, to start writing portable code (both platform-wise and compiler-wise).
So, in order to do this, I started using Code::Blocks with both GCC and VC10 as the compilers.
My current project is fairly large, but it hadn't been a problem until recently, and then only when compiling with GCC.
I'll explain further.
When I compile my code using GCC, I get the error "something.h: No such file or directory".
This would be trivial if the file didn't actually exist, but I noticed that the problem is with the way GCC handles the relative paths to the included file.
Here's a concrete example:
In the "MaterialManager.h" file, i include:
#include "..\..\Shader Manager\_Manager\Program Shader Manager\ProgShaderManager.h"
Now, say that Renderer includes "MaterialManager.h" (which as above, in turn, includes "ProgShaderManager.h").
The problem is that after a few nested includes, GCC expands this to something like:
D:\ZTUFF\Projects\EDGE\Source\Engine\Systems\Renderer\Render Engine\_Implementations\Render_GL_MultiPass\..\..\..\..\..\Gameplay\Core Objects\Light Object\..\..\..\Game\State Manager\..\..\Resource Managers\Material Manager\_Manager\MaterialManager.h
And this is what is printed in the build log, right before the "No such file or directory".
In my opinion, the reason it fails is that it exceeds the maximum path size for relative paths in Windows (the large string above has 250 characters, and I think that when GCC tries to append yet another file name, it exceeds the 260-character limit).
I've confirmed this: if I replace any #include path that gives an error with its absolute path, it works.
For example:
"D:\ZTUFF\Projects\EDGE\Source\Engine\Resources\Material Resource\MaterialResource.h"
I thought about prepending a macro of the project source code's absolute path to each #include path, but I would like to avoid that if possible.
I should mention again that VC10 never gave me this sort of problem.
Again, this may end up being a simple thing that I am simply unaware of, since I'm not that experienced with GCC. If someone could enlighten me on how to avoid this problem, I'd be quite thankful.
Thanks in advance, and if there's something that is not clear, I'll work to explain it better. | https://www.gamedev.net/topic/660367-include-file-not-found-in-gcc-due-to-path-size/ | CC-MAIN-2017-22 | refinedweb | 436 | 55.74 |
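Not part of the thread, but the 260-character theory is easy to sanity-check with Python's ntpath module (its Windows path rules work on any platform). Normalising the expanded include path from the build log collapses the "..\" hops to well under MAX_PATH, which supports the idea that GCC is testing the un-normalised string:

```python
import ntpath

# The expanded include path from the build log above, reassembled verbatim.
expanded = (r"D:\ZTUFF\Projects\EDGE\Source\Engine\Systems\Renderer"
            r"\Render Engine\_Implementations\Render_GL_MultiPass"
            r"\..\..\..\..\..\Gameplay\Core Objects\Light Object"
            r"\..\..\..\Game\State Manager\..\..\Resource Managers"
            r"\Material Manager\_Manager\MaterialManager.h")

normalised = ntpath.normpath(expanded)
print(len(expanded))    # about 250 characters, as the post estimates
print(len(normalised))  # far below the 260-character MAX_PATH limit
print(normalised)
```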
mbrlen
Determine the number of bytes that are required to complete a multibyte character in the current locale, with the capability of restarting in the middle of a multibyte character.
The mbrlen function inspects at most count bytes starting with the byte pointed to by str to determine the number of bytes that are required to complete the next multibyte character, including any shift sequences. It is equivalent to the call mbrtowc(NULL, str, count, &mbstate) where mbstate is either a user-provided mbstate_t object, or a static internal object provided by the library.
The mbrlen function saves and uses the shift state of an incomplete multibyte character in the mbstate parameter. This gives mbrlen the capability of restarting in the middle of a multibyte character if need be, examining at most count bytes. If mbstate is a null pointer, mbrlen uses an internal, static mbstate_t object to store the shift state. Because the internal mbstate_t object is not thread-safe, we recommend that you always allocate and pass your own mbstate parameter.
The mbrlen function differs from _mbclen, mblen, _mblen_l by its restartability. The shift state is stored in mbstate for subsequent calls to the same or other restartable functions. Results are undefined when mixing the use of restartable and nonrestartable functions. For example, an application should use wcsrlen instead of wcslen if a subsequent call to wcsrtombs is used instead of wcstombs.
For additional compatibility information, see Compatibility in the Introduction.
This example shows how the interpretation of multibyte characters depends on the current code page, and demonstrates the resuming capability of mbrlen.
// crt_mbrlen.c
// Compile by using: cl crt_mbrlen.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <locale.h>
#include <wchar.h>

size_t Example(const char * pStr)
{
    size_t charLen = 0;
    size_t charCount = 0;
    mbstate_t mbState = {0};

    while ((charLen = mbrlen(pStr++, 1, &mbState)) != 0
           && charLen != (size_t)-1)
    {
        if (charLen != (size_t)-2) // if complete mbcs char,
        {
            charCount++;
        }
    }
    return (charCount);
}

int main( void )
{
    int cp;
    size_t charCount = 0;
    const char *pSample =
        "\x82\xD0\x82\xE7\x82\xAA\x82\xC8: Shift-jis hiragana.";

    cp = _getmbcp();
    charCount = Example(pSample);
    printf("\nCode page: %d\n%s\nCharacter count: %d\n",
           cp, pSample, charCount);

    setlocale(LC_ALL, "ja-JP"); // Set Japanese locale
    _setmbcp(932);              // and Japanese multibyte code page
    cp = _getmbcp();
    charCount = Example(pSample);
    printf("\nCode page: %d\n%s\nCharacter count: %d\n",
           cp, pSample, charCount);
}
Code page: 0
é╨éτé¬é╚: Shift-jis hiragana.
Character count: 29

Code page: 932
????: Shift-jis hiragana.
Character count: 25
See also DublinCoreEntryStrawman.
Echo Example Using the Dublin Core Metadata Element Set
The page contains some examples and discussion to how Echo could leverage DublinCore in its syntax.
NOTE: This proposal would fold the DublinCore semantics/labels into the core Echo namespace, NOT create a separate modularized namespace. The use of dc: in the text below is meant to clarify what is coming from DublinCore and what is not.
DublinCore elements are already in common use within RSS feeds to supplement the core item elements of title, description and link. The DublinCore has corresponding elements for rss:title and rss:description, but a specific tag for (perma)link is not explicitly defined. The dc:source tag can reasonably be considered the link's counterpart in the DublinCore. So with the DublinCore you can assemble what almost looks like an RSS feed item.
Looking at the Echo ConceptualModel, all of the required elements and many of the highly-recommended optional elements have corresponding elements in the DublinCore. Further clarifications and constraints are likely needed in the context of Echo's use. For example, according to the DublinCore documentation, dc:source is said to […]. In Echo the formal identification system would be a URL/permalink.
Another example is the dc:creator and dc:publisher elements. Currently these elements are just strings. Echo may further define their format and content, optionally allowing additional meta-rich extensions such as FOAF to be substituted.
Echo would also benefit from defining maximum lengths and element optionality.
PROS
Leverages prior art
Leverages an international standard
Is not a radical departure from RSS today
CONS
Tag naming may not always be ideal
Additional clarification and constraints are needed
Elements may not be as meta-rich as preferred
These examples illustrate what such an approach for Echo may look like. They assume that the DublinCore namespace () is part of the default namespace. The elements are wrapped in a container tag of entry. Content is embedded using the root or container tag of the native format (assuming it can be expressed in well-formed XML). Alternatively, a content:encoded element with CDATA encoding could be used to embed non-well-formed textual content. Binary sources should not be embedded, but referenced via a dc:related link.
Core ConceptualModel Entry
<entry xmlns="uri/of/echo/namespace/"> <source></source> <creator>Paul Harrison ()</creator> <date>2003-06-25T10:42:00-04:00</date> <body xmlns=""> >
Extended Entry
<entry xmlns="uri/of/echo/namespace/"> <title>With a Little Help From My Friends</title> <description>You a, you're get. Do my with. What with how think, sad on would how you try own a if by help and a i sang.</description> <source></source> <creator>Paul Harrison ()</creator> <date>2003-06-25T10:42:00-04:00</date> <related></related> <related></related> <body xmlns=""> <subject>hello world</subject> <identifier>1056595208</identifier> <rights>Copyright 2003 Paul Harrison</rights> >
[KenMacLeod] In this example, it looks like <source> is being used where <identifier> should be used (see EntryIdentifier). '' appears to be the URI of the weblog Entry resource. I would expect <source> to be used to "reference to a resource from which the present resource is derived", where "present resource" is the weblog Entry. In other words, if the weblog entry was derived from some other resource, that URI would go into a <source> element.
I think this can be visualized more easily if you think of a weblog entry as '.../000000.html' and its metadata as '.../000000.echo'. A feed is a union of all the .echo files, and an entry is just the one. (Of course, in HTML one would put the metadata in <meta> elements, but I digress.)
Hmm, after a deeper look I'm more confused, more so than I'd consider just rewriting it. It appears that this example is effectively two resources. One resource can be seen in the <xhtml:body> element, and is the lyrics to a song, with rights, an identifier, and a subject (which should be a 'title'?) of "hello world". The second resource is the entry, everything outside the <xhtml:body>, with related entries, a creator, and a title of the entry. I can't tell if this confusion is from the example-nature of the data or something real. Again, maybe the visualization of .html/.echo (or HTML <meta> elements) will help see where I'm coming from.
[JamesSnell] I think this works just as well as anything else. I would like to see the above examples with namespaces included so I can get a better feel for how this comes together.
[KenMacLeod] Unless I'm mistaken, from the context of the outer element being <entry> and no qualifier on the inner elements, this is an example of deriving from DublinCore (derived as in "echo:date" IS-A "dc:date"), rather than using DublinCore terms directly. On that understanding, I've added just a default namespace to the <entry> element to clarify. See also DublinCoreEntryStrawman, EntryIdentifier, and TimestampVsCreationDateTime.
[DeveloperDude] +1 re-use of DublinCore.
The usage of DublinCore aggregated into the primary namespace is confusing and not helpful.
[TimothyAppnel] Ken is correct. What I propose is to simply fold the Dublin Core semantics and naming into echo.
[JamesSnell] Works for me
[RalphBrandi] If we're folding in Dublin Core, does that mean that <date.created> and <date.modified> would be valid? I believe just using <date> as in the example here is ambiguous, and Dublin Core allows for refinements to top-level elements.
[TimothyAppnel] Dublin Core does allow for more refinement; see the Qualified Dublin Core (dcterms) module that was created for RSS 1.0.
[KenMacLeod] I'm not sure of the extent we're talking about folding in or deriving from DublinCore. On the one hand we could be taking all the DublinCore terms and using them in the WellFormedEntry namespace, in which case the answer would be "yes". On the other hand, we could be creating specific terms in our namespace, and then using DublinCore definitions for them, in which case the answer would be "no or not necessarily, it would be a matter of choosing which terms we'll support". I'm comfortable with a "core" the size of DublinCore, but it appears that is an option that a lot of people are not comfortable with.
[TimothyAppnel] I was generally thinking of taking all the DublinCore terms and using them in the WellFormedEntry namespace. While there seems to be a consensus towards a small simple core that minimizes the need to use namespaces in the basic usage patterns, it doesn't seem that there is consensus or agreement on what is or is not out of bounds. Hence my proposal to grab all the terms, which I think covers all of the items being debated for use in the core. I'm not wedded to any specific terms. I am wedded to basing this work on already existing prior art that is in use and has served us well.
[KevinBurton] Please do not fold the dublin core namespace. Use dc: and dcterms:, please. Using a new namespace will destroy all the semantics that dublin core has achieved.
[TimothyAppnel] How? Namespaces don't bother me personally, but I think those who have asked to minimize the use of namespaces in common usage patterns have a point that needs to be explored. You need to elaborate your point because I don't see what you are talking about. (PS: Word of advice. I suggest you don't use the term "semantic web" or ontology or anything of the sort if you want me to take you seriously.)
[KenMacLeod] Yes, minimization of or restricting to one core namespace has a strong consensus. Using DublinCore semantics to define WellFormedEntry elements also seems to have very strong support, if not consensus. DublinCore has many precedents for using it as a reference in defining ones own terms, even if they are simply "is the same as". Those who need to equate the semantics at the exchange format level generally also have the tools to do so. See also RdfAndEcho.
[AsbjornUlsberg, RefactorOk] I just want to say it here as well, so it comes through: Dublin Core's use of the word "element" does not correspond to an XML element. A DC "element" is an entity, or a concept. The point is that DC standardizes entities: what they should be called, and what the meaning of them is. How these entities are implemented is up to us. We can use them as XML elements, as attributes, as attribute values; whatever. The main point is to use the same naming convention and put the same meaning into the entity as DC does.
CategoryArchitecture, CategoryMetadata, CategoryModel, CategorySyntax | http://www.intertwingly.net/wiki/pie/EchoInDublinCore?action=print | CC-MAIN-2017-47 | refinedweb | 1,454 | 54.73 |
Description
The Neighborhood Time Step value provided is not numeric, is less than 1, or is larger than the total number of time steps in the Input Space Time Cube parameter.
Solution
Determine the number of time steps associated with the Input Space Time Cube value and provide a value smaller than that number (not more than 75 percent of the total number of time steps) but no smaller than 1. Details, including the total number of time steps, are written as messages when the cube is created (using Create Space Time Cube). You can also access cube dimensions by entering the following code in the Python window:
import arcpy
in_netcdf = "c:/working/fire.nc"
nc_fp = arcpy.NetCDFFileProperties(in_netcdf)
print("\tSize of the time dimension: {0}".format(nc_fp.getDimensionSize('time')))
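The "no smaller than 1, at most 75 percent of the total" rule can also be expressed as a small helper. This is illustrative only; valid_neighborhood_time_steps is not an arcpy function, and total_steps would come from the cube's messages or the NetCDF properties shown above:

```python
def valid_neighborhood_time_steps(total_steps):
    # Upper bound: 75 percent of the cube's time steps, floored;
    # lower bound: 1, per the tool's requirement.
    upper = int(total_steps * 0.75)
    return 1, max(1, upper)

print(valid_neighborhood_time_steps(40))  # (1, 30)
```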