Pipes.Tutorial
Description
Conventional Haskell stream programming forces you to choose only two of the following three features:
- Effects
- Streaming
- Composability
If you sacrifice Effects you get Haskell's pure and lazy lists, which you can transform using composable functions in constant space, but without interleaving effects.
If you sacrifice Streaming you get mapM, forM and "ListT done wrong", which are composable and effectful, but do not return a single result until the whole list has first been processed and loaded into memory.
If you sacrifice Composability you write a tightly coupled read, transform, and write loop in IO, which is streaming and effectful, but is not modular or separable.
pipes gives you all three features: effectful, streaming, and composable
programming.
pipes also provides a wide variety of stream programming abstractions which are all subsets of a single unified machinery:

- effectful Producers (like generators),
- effectful Consumers (like iteratees),
- effectful Pipes (like Unix pipes), and:
- ListT done right.
All of these are connectable and you can combine them together in clever and unexpected ways because they all share the same underlying type.
pipes requires a basic understanding of monad transformers, which you can learn about by reading either:

- the paper "Monad Transformers - Step by Step",
- chapter 18 of "Real World Haskell" on monad transformers, or:
- the documentation of the transformers library.
If you want a Quick Start guide to pipes, read the documentation in Pipes.Prelude from top to bottom. This tutorial is more extensive and explains the pipes API in greater detail and illustrates several idioms.
Introduction
The
pipes library decouples stream processing stages from each other so
that you can mix and match diverse stages to produce useful streaming
programs. If you are a library writer,
pipes lets you package up
streaming components into a reusable interface. If you are an application
writer,
pipes lets you connect pre-made streaming components with minimal
effort to produce a highly-efficient program that streams data in constant
memory.
To enforce loose coupling, components can only communicate using two commands:

- yield: Send output data
- await: Receive input data
pipes has four types of components built around these two commands:

- Producers can only yield values and they model streaming sources
- Consumers can only await values and they model streaming sinks
- Pipes can both yield and await values and they model stream transformations
- Effects can neither yield nor await and they model non-streaming components
You can connect these components together in four separate ways which parallel the four above types:

- for handles yields
- (>~) handles awaits
- (>->) handles both yields and awaits
- (>>=) handles return values
As you connect components their types will change to reflect inputs and
outputs that you've fused away. You know that you're done connecting things
when you get an
Effect, meaning that you have handled all inputs and
outputs. You run this final
Effect to begin streaming.
Producers
Producers are effectful streams of input. Specifically, a
Producer is a
monad transformer that extends any base monad with a new
yield command.
This
yield command lets you send output downstream to an anonymous
handler, decoupling how you generate values from how you consume them.
The following
stdinLn
Producer shows how to incrementally read in
Strings from standard input and
yield them downstream, terminating
gracefully when reaching the end of the input:
-- echo.hs

import Control.Monad (unless)
import Pipes
import System.IO (isEOF)

--         +--------+-- A 'Producer' that yields 'String's
--         |        |
--         |        |      +-- Every monad transformer has a base monad.
--         |        |      |   This time the base monad is 'IO'.
--         |        |      |
--         |        |      |  +-- Every monadic action has a return value.
--         |        |      |  |   This action returns '()' when finished
--         v        v      v  v
stdinLn :: Producer String IO ()
stdinLn = do
    eof <- lift isEOF        -- 'lift' an 'IO' action from the base monad
    unless eof $ do
        str <- lift getLine
        yield str            -- 'yield' the 'String'
        stdinLn              -- Loop
yield emits a value, suspending the current
Producer until the value is
consumed. If nobody consumes the value (which is possible) then
yield
never returns. You can think of
yield as having the following type:
yield :: Monad m => a -> Producer a m ()
The true type of
yield is actually more general and powerful. Throughout
the tutorial I will present type signatures like this that are simplified at
first and then later reveal more general versions. So read the above type
signature as simply saying: "You can use
yield within a
Producer, but
you may be able to use
yield in other contexts, too."
Click the link to
yield to navigate to its documentation. There you will
see that
yield actually uses the
Producer' (with an apostrophe) type
synonym which hides a lot of polymorphism behind a simple veneer. The
documentation for
yield says that you can also use
yield within a
Pipe, too, because of this polymorphism:
yield :: Monad m => a -> Pipe x a m ()
Use simpler types like these to guide you until you understand the fully general type.
for loops are the simplest way to consume a
Producer like
stdinLn.
for has the following type:
--                +-- Producer      +-- The body of       +-- Result
--                |   to loop       |   the loop          |
--                v   over          v                     v
--                --------------    ------------------    ----------
for :: Monad m => Producer a m r -> (a -> Effect m ()) -> Effect m r
(for producer body) loops over
(producer), substituting each
yield in
(producer) with
(body).
You can also deduce that behavior purely from the type signature:

- The body of the loop takes exactly one argument of type (a), which is the same as the output type of the Producer. Therefore, the body of the loop must get its input from that Producer and nowhere else.
- The return value of the input Producer matches the return value of the result, therefore for must loop over the entire Producer and not skip anything.
The above type signature is not the true type of
for, which is actually
more general. Think of the above type signature as saying: "If the first
argument of
for is a
Producer and the second argument returns an
Effect, then the final result must be an
Effect."
Click the link to
for to navigate to its documentation. There you will
see the fully general type and underneath you will see equivalent simpler
types. One of these says that if the body of the loop is a
Producer, then
the result is a
Producer, too:
for :: Monad m => Producer a m r -> (a -> Producer b m ()) -> Producer b m r
The first type signature I showed for
for was a special case of this
slightly more general signature because a
Producer that never
yields is
also an
Effect:
data Void  -- The uninhabited type

type Effect m r = Producer Void m r
This is why
for permits two different type signatures. The first type
signature is just a special case of the second one:
for :: Monad m => Producer a m r -> (a -> Producer b m ())    -> Producer b m r

-- Specialize 'b' to 'Void'
for :: Monad m => Producer a m r -> (a -> Producer Void m ()) -> Producer Void m r

-- Producer Void = Effect
for :: Monad m => Producer a m r -> (a -> Effect m ())        -> Effect m r
This is the same trick that all
pipes functions use to work with various
combinations of
Producers,
Consumers,
Pipes, and
Effects. Each
function really has just one general type, which you can then simplify down
to multiple useful alternative types.
Here's an example use of a
for
loop, where the second argument (the
loop body) is an
Effect:
-- echo.hs

loop :: Effect IO ()
loop = for stdinLn $ \str -> do  -- Read this like: "for str in stdinLn"
    lift $ putStrLn str          -- The body of the 'for' loop

-- more concise:
-- loop = for stdinLn (lift . putStrLn)
In this example,
for loops over
stdinLn and replaces every
yield in
stdinLn with the body of the loop, printing each line. This is exactly
equivalent to the following code, which I've placed side-by-side with the
original definition of
stdinLn for comparison:
loop = do                      |  stdinLn = do
    eof <- lift isEOF          |      eof <- lift isEOF
    unless eof $ do            |      unless eof $ do
        str <- lift getLine    |          str <- lift getLine
        (lift . putStrLn) str  |          yield str
        loop                   |          stdinLn
You can think of
yield as creating a hole and a
for loop is one way to
fill that hole.
Notice how the final
loop only
lifts actions from the base monad and
does nothing else. This property is true for all
Effects, which are just
glorified wrappers around actions in the base monad. This means we can run
these
Effects to remove their
lifts and lower them back to the
equivalent computation in the base monad:
runEffect :: Monad m => Effect m r -> m r
This is the real type signature of
runEffect, which refuses to accept
anything other than an
Effect. This ensures that we handle all inputs and
outputs before streaming data:
-- echo.hs

main :: IO ()
main = runEffect loop
... or you could inline the entire
loop into the following one-liner:
main = runEffect $ for stdinLn (lift . putStrLn)
Our final program loops over standard input and echoes every line to
standard output until we hit
Ctrl-D to end the input stream:
$ ghc -O2 echo.hs
$ ./echo
Test<Enter>
Test
ABC<Enter>
ABC
<Ctrl-D>
$
The final behavior is indistinguishable from just removing all the
lifts
from
loop:
main = do                  |  loop = do
    eof <- isEOF           |      eof <- lift isEOF
    unless eof $ do        |      unless eof $ do
        str <- getLine     |          str <- lift getLine
        putStrLn str       |          (lift . putStrLn) str
        main               |          loop
This
main is what we might have written by hand if we were not using
pipes, but with
pipes we can decouple the input and output logic from
each other. When we connect them back together, we still produce streaming
code equivalent to what a sufficiently careful Haskell programmer would
have written.
You can also use
for to loop over lists, too. To do so, convert the list
to a
Producer using
each, which is exported by default from Pipes:
each :: Monad m => [a] -> Producer a m ()
each as = mapM_ yield as
Combine
for and
each to iterate over lists using a "foreach" loop:
>>> runEffect $ for (each [1..4]) (lift . print)
1
2
3
4
each is actually more general and works for any
Foldable:
each :: (Monad m, Foldable f) => f a -> Producer a m ()
So you can loop over any
Foldable container or even a
Maybe:
>>> runEffect $ for (each (Just 1)) (lift . print)
1
Composability
You might wonder why the body of a
for loop can be a
Producer. Let's
test out this feature by defining a new loop body that
duplicates every
value:
-- nested.hs

import Pipes
import qualified Pipes.Prelude as P  -- Pipes.Prelude already has 'stdinLn'

duplicate :: Monad m => a -> Producer a m ()
duplicate x = do
    yield x
    yield x

loop :: Producer String IO ()
loop = for P.stdinLn duplicate

-- This is the exact same as:
--
-- loop = for P.stdinLn $ \x -> do
--     yield x
--     yield x
This time our
loop is a
Producer that outputs
Strings, specifically
two copies of each line that we read from standard input. Since
loop is a
Producer we cannot run it because there is still unhandled output.
However, we can use yet another
for to handle this new duplicated stream:
-- nested.hs

main = runEffect $ for loop (lift . putStrLn)
This creates a program which echoes every line from standard input to standard output twice:
$ ./nested
Test<Enter>
Test
Test
ABC<Enter>
ABC
ABC
<Ctrl-D>
$
But is this really necessary? Couldn't we have instead written this using a nested for loop?
main = runEffect $
    for P.stdinLn $ \str1 ->
        for (duplicate str1) $ \str2 ->
            lift $ putStrLn str2
Yes, we could have! In fact, this is a special case of the following equality, which always holds no matter what:
-- s :: Monad m =>      Producer a m ()  -- i.e. 'P.stdinLn'
-- f :: Monad m => a -> Producer b m ()  -- i.e. 'duplicate'
-- g :: Monad m => b -> Producer c m ()  -- i.e. '(lift . putStrLn)'

for (for s f) g = for s (\x -> for (f x) g)
We can understand the rationale behind this equality if we first define the
following operator that is the point-free counterpart to
for:
(~>) :: Monad m
     => (a -> Producer b m r)
     -> (b -> Producer c m r)
     -> (a -> Producer c m r)
(f ~> g) x = for (f x) g
Using (
~>) (pronounced "into"), we can transform our original equality
into the following more symmetric equation:
f :: Monad m => a -> Producer b m r
g :: Monad m => b -> Producer c m r
h :: Monad m => c -> Producer d m r

-- Associativity
(f ~> g) ~> h = f ~> (g ~> h)
This looks just like an associativity law. In fact, (
~>) has another nice
property, which is that
yield is its left and right identity:
-- Left Identity
yield ~> f = f

-- Right Identity
f ~> yield = f
In other words,
yield and (
~>) form a
Category, specifically the
generator category, where (
~>) plays the role of the composition operator
and
yield is the identity. If you don't know what a
Category is, that's
okay, and category theory is not a prerequisite for using
pipes. All you
really need to know is that
pipes uses some simple category theory to keep
the API intuitive and easy to use.
Notice that if we translate the left identity law to use
for instead of
(
~>) we get:
for (yield x) f = f x
This just says that if you iterate over a pure single-element
Producer,
then you could instead cut out the middle man and directly apply the body of
the loop to that single element.
If we translate the right identity law to use
for instead of (
~>) we
get:
for s yield = s
This just says that if the only thing you do is re-
yield every element of
a stream, you get back your original stream.
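The right identity law is easy to see concretely. Here is a small sketch (not from the tutorial) that checks it on a pure Producer using Pipes.Prelude's toList, which runs a pure Producer to a list:

```haskell
import Pipes
import qualified Pipes.Prelude as P

-- A sketch (not from the tutorial): 'for s yield' streams exactly the same
-- elements as 's' itself, illustrating the right identity law.
main :: IO ()
main = do
    print (P.toList (for (each [1 :: Int, 2, 3]) yield))  -- [1,2,3]
    print (P.toList (each [1 :: Int, 2, 3]))              -- [1,2,3]
```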
These three "for loop" laws summarize our intuition for how
for loops
should behave and because these are
Category laws in disguise that means
that
Producers are composable in a rigorous sense of the word.
In fact, we get more out of this than just a bunch of equations. We also
get a useful operator: (
~>). We can use this operator to condense
our original code into the following more succinct form that composes two
transformations:
main = runEffect $ for P.stdinLn (duplicate ~> lift . putStrLn)
This means that we can also choose to program in a more functional style and
think of stream processing in terms of composing transformations using
(
~>) instead of nesting a bunch of
for loops.
The above example is a microcosm of the design philosophy behind the
pipes
library:
- Define the API in terms of categories
- Specify expected behavior in terms of category laws
- Think compositionally instead of sequentially
Consumers
Sometimes you don't want to use a
for loop because you don't want to consume
every element of a
Producer or because you don't want to process every
value of a
Producer the exact same way.
The most general solution is to externally iterate over the
Producer using
the
next command:
next :: Monad m => Producer a m r -> m (Either r (a, Producer a m r))
Think of
next as pattern matching on the head of the
Producer. This
Either returns a
Left if the
Producer is done or it returns a
Right
containing the next value,
a, along with the remainder of the
Producer.
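As an illustration of external iteration, the following sketch (this helper is hypothetical, not part of the tutorial or of Pipes.Prelude) drains a Producer into a list by repeatedly pattern matching with next:

```haskell
import Pipes

-- A hypothetical helper (not from the tutorial): repeatedly pattern match on
-- the head of the 'Producer' with 'next' until it returns 'Left'.
drainToList :: Monad m => Producer a m r -> m [a]
drainToList producer = do
    step <- next producer
    case step of
        Left _r              -> return []  -- The 'Producer' is done
        Right (a, producer') -> fmap (a :) (drainToList producer')

main :: IO ()
main = drainToList (each [1 :: Int, 2, 3]) >>= print  -- prints [1,2,3]
```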
However, sometimes we can get away with something a little more simple and
elegant, like a
Consumer, which represents an effectful sink of values. A
Consumer is a monad transformer that extends the base monad with a new
await command. This
await command lets you receive input from an
anonymous upstream source.
The following
stdoutLn
Consumer shows how to incrementally
await
Strings and print them to standard output, terminating gracefully when
receiving a broken pipe error:
import Control.Monad (unless)
import Control.Exception (try, throwIO)
import qualified GHC.IO.Exception as G
import Pipes

--          +--------+-- A 'Consumer' that awaits 'String's
--          |        |
--          v        v
stdoutLn :: Consumer String IO ()
stdoutLn = do
    str <- await  -- 'await' a 'String'
    x   <- lift $ try $ putStrLn str
    case x of
        -- Gracefully terminate if we got a broken pipe error
        Left e@(G.IOError { G.ioe_type = t }) ->
            lift $ unless (t == G.ResourceVanished) $ throwIO e
        -- Otherwise loop
        Right () -> stdoutLn
await is the dual of
yield: we suspend our
Consumer until we receive a
new value. If nobody provides a value (which is possible) then
await
never returns. You can think of
await as having the following type:
await :: Monad m => Consumer a m a
One way to feed a Consumer is to repeatedly feed the same input using (>~) (pronounced "feed"):
--                 +- Feed        +- Consumer to     +- Returns new
--                 |  action      |  feed            |  Effect
--                 v              v                  v
--                 ----------     --------------     ----------
(>~) :: Monad m => Effect m b  -> Consumer b m c  -> Effect m c
(draw >~ consumer) loops over
(consumer), substituting each
await in
(consumer) with
(draw).
So the following code replaces every
await in
stdoutLn with
(lift getLine) and then removes all the
lifts:
>>> runEffect $ lift getLine >~ stdoutLn
Test<Enter>
Test
ABC<Enter>
ABC
42<Enter>
42
...
You might wonder why (
>~) uses an
Effect instead of a raw action in the
base monad. The reason why is that (
>~) actually permits the following
more general type:
(>~) :: Monad m => Consumer a m b -> Consumer b m c -> Consumer a m c
(
>~) is the dual of (
~>), composing
Consumers instead of
Producers.
This means that you can feed a
Consumer with yet another
Consumer so
that you can
await while you
await. For example, we could define the
following intermediate
Consumer that requests two
Strings and returns
them concatenated:
doubleUp :: Monad m => Consumer String m String
doubleUp = do
    str1 <- await
    str2 <- await
    return (str1 ++ str2)

-- more concise:
-- doubleUp = (++) <$> await <*> await
We can now insert this in between
(lift getLine) and
stdoutLn and see
what happens:
>>> runEffect $ lift getLine >~ doubleUp >~ stdoutLn
Test<Enter>
ing<Enter>
Testing
ABC<Enter>
DEF<Enter>
ABCDEF
42<Enter>
000<Enter>
42000
...
doubleUp splits every request from
stdoutLn into two separate requests
and
returns back the concatenated result.
We didn't need to parenthesize the above chain of (
>~) operators, because
(
>~) is associative:
-- Associativity (f >~ g) >~ h = f >~ (g >~ h)
... so we can always omit the parentheses since the meaning is unambiguous:
f >~ g >~ h
Also, (
>~) has an identity, which is
await!
-- Left Identity
await >~ f = f

-- Right Identity
f >~ await = f
In other words, (
>~) and
await form a
Category, too, specifically the
iteratee category, and
Consumers are also composable.
Pipes
Our previous programs were unsatisfactory because they were biased either
towards the
Producer end or the
Consumer end. As a result, we had to
choose between gracefully handling end of input (using
stdinLn) or
gracefully handling end of output (using
stdoutLn), but not both at the
same time.
However, we don't need to restrict ourselves to using
Producers
exclusively or
Consumers exclusively. We can connect
Producers and
Consumers directly together using (
>->) (pronounced "pipe"):
(>->) :: Monad m => Producer a m r -> Consumer a m r -> Effect m r
This returns an
Effect which we can run:
-- echo2.hs

import Pipes
import qualified Pipes.Prelude as P  -- Pipes.Prelude also provides 'stdoutLn'

main = runEffect $ P.stdinLn >-> P.stdoutLn
This program is more declarative of our intent: we want to stream values
from
stdinLn to
stdoutLn. The above "pipeline" not only echoes
standard input to standard output, but also handles both end of input and
broken pipe errors:
$ ./echo2
Test<Enter>
Test
ABC<Enter>
ABC
42<Enter>
42
<Ctrl-D>
$
(
>->) is "pull-based" meaning that control flow begins at the most
downstream component (i.e.
stdoutLn in the above example). Any time a
component
awaits a value it blocks and transfers control upstream and
every time a component
yields a value it blocks and restores control back
downstream, satisfying the
await. So in the above example, (
>->)
matches every
await from
stdoutLn with a
yield from
stdinLn.
Streaming stops when either
stdinLn terminates (i.e. end of input) or
stdoutLn terminates (i.e. broken pipe). This is why (
>->) requires
that both the
Producer and
Consumer share the same type of return value:
whichever one terminates first provides the return value for the entire
Effect.
Let's test this by modifying our
Producer and
Consumer to each return a
diagnostic
String:
-- echo3.hs

import Control.Applicative ((<$))  -- (<$) modifies return values
import Pipes
import qualified Pipes.Prelude as P
import System.IO

main = do
    hSetBuffering stdout NoBuffering
    str <- runEffect $
        ("End of input!" <$ P.stdinLn) >-> ("Broken pipe!" <$ P.stdoutLn)
    hPutStrLn stderr str
This lets us diagnose whether the
Producer or
Consumer terminated first:
$ ./echo3
Test<Enter>
Test
<Ctrl-D>
End of input!
$ ./echo3 | perl -e 'close STDIN'
Test<Enter>
Broken pipe!
$
You might wonder why (
>->) returns an
Effect that we have to run instead
of directly returning an action in the base monad. This is because you can
connect things other than
Producers and
Consumers, like
Pipes, which
are effectful stream transformations.
A
Pipe is a monad transformer that is a mix between a
Producer and
Consumer, because a
Pipe can both
await and
yield. The following
example Pipe is analogous to the Prelude's take, only allowing a fixed number of values to flow through:
-- take.hs

import Control.Monad (replicateM_)
import Pipes
import Prelude hiding (take)

--              +--------- A 'Pipe' that
--              |    +---- 'await's 'a's and
--              |    | +-- 'yield's 'a's
--              |    | |
--              v    v v
take :: Int -> Pipe a a IO ()
take n = do
    replicateM_ n $ do                     -- Repeat this block 'n' times
        x <- await                         -- 'await' a value of type 'a'
        yield x                            -- 'yield' a value of type 'a'
    lift $ putStrLn "You shall not pass!"  -- Fly, you fools!
You can use
Pipes to transform
Producers,
Consumers, or even other
Pipes using the same (
>->) operator:
(>->) :: Monad m => Producer a m r -> Pipe   a b m r -> Producer b m r
(>->) :: Monad m => Pipe   a b m r -> Consumer b m r -> Consumer a m r
(>->) :: Monad m => Pipe   a b m r -> Pipe   b c m r -> Pipe   a c m r
For example, you can compose
take after
stdinLn to limit the number
of lines drawn from standard input:
maxInput :: Int -> Producer String IO ()
maxInput n = P.stdinLn >-> take n
>>> runEffect $ maxInput 3 >-> P.stdoutLn
Test<Enter>
Test
ABC<Enter>
ABC
42<Enter>
42
You shall not pass!
... or you can pre-compose
take before
stdoutLn to limit the number
of lines written to standard output:
maxOutput :: Int -> Consumer String IO ()
maxOutput n = take n >-> P.stdoutLn
>>> runEffect $ P.stdinLn >-> maxOutput 3
<Exact same behavior>
Those both gave the same behavior because (
>->) is associative:
(p1 >-> p2) >-> p3 = p1 >-> (p2 >-> p3)
Therefore we can just leave out the parentheses:
>>> runEffect $ P.stdinLn >-> take 3 >-> P.stdoutLn
<Exact same behavior>
(>->) is designed to behave like the Unix pipe operator, except with fewer quirks. In fact, we can continue the analogy to Unix by defining cat
cat
(named after the Unix
cat utility), which reforwards elements endlessly:
cat :: Monad m => Pipe a a m r
cat = forever $ do
    x <- await
    yield x
cat is the identity of (
>->), meaning that
cat satisfies the
following two laws:
-- Useless use of 'cat'
cat >-> p = p

-- Forwarding output to 'cat' does nothing
p >-> cat = p
Therefore, (
>->) and
cat form a
Category, specifically the category of
Unix pipes, and
Pipes are also composable.
A lot of Unix tools have very simple definitions when written using
pipes:
-- unix.hs

import Control.Monad (forever)
import Pipes
import qualified Pipes.Prelude as P  -- Pipes.Prelude provides 'take', too
import Prelude hiding (head)

head :: Monad m => Int -> Pipe a a m ()
head = P.take

yes :: Monad m => Producer String m r
yes = forever $ yield "y"

main = runEffect $ yes >-> head 3 >-> P.stdoutLn
This prints out 3 '
y's, just like the equivalent Unix pipeline:
$ ./unix
y
y
y
$ yes | head -3
y
y
y
$
This lets us write "Haskell pipes" instead of Unix pipes. These are much easier to build than Unix pipes and we can connect them directly within Haskell for interoperability with the Haskell language and ecosystem.
ListT
pipes also provides a "ListT done right" implementation. This differs
from the implementation in
transformers because this
ListT:
- obeys the monad laws, and
- streams data immediately instead of collecting all results into memory.
The latter property is actually an elegant consequence of obeying the monad laws.
To bind a list within a
ListT computation, combine
Select and
each:
import Pipes

pair :: ListT IO (Int, Int)
pair = do
    x <- Select $ each [1, 2]
    lift $ putStrLn $ "x = " ++ show x
    y <- Select $ each [3, 4]
    lift $ putStrLn $ "y = " ++ show y
    return (x, y)
You can then loop over a
ListT by using
every:
every :: Monad m => ListT m a -> Producer a m ()
So you can use your
ListT within a
for loop:
>>> runEffect $ for (every pair) (lift . print)
x = 1
y = 3
(1,3)
y = 4
(1,4)
x = 2
y = 3
(2,3)
y = 4
(2,4)
... or a pipeline:
>>> import qualified Pipes.Prelude as P
>>> runEffect $ every pair >-> P.print
<Exact same behavior>
Note that
ListT is lazy and only produces as many elements as we request:
>>> runEffect $ for (every pair >-> P.take 2) (lift . print)
x = 1
y = 3
(1,3)
y = 4
(1,4)
You can also go the other way, binding
Producers directly within a
ListT. In fact, this is actually what Select was already doing:

Select :: Producer a m () -> ListT m a
This lets you write crazy code like:
import Pipes
import qualified Pipes.Prelude as P

input :: Producer String IO ()
input = P.stdinLn >-> P.takeWhile (/= "quit")

name :: ListT IO String
name = do
    firstName <- Select input
    lastName  <- Select input
    return (firstName ++ " " ++ lastName)
Here we're binding standard input non-deterministically (twice) as if it were an effectful list:
>>> runEffect $ every name >-> P.stdoutLn
Daniel<Enter>
Fischer<Enter>
Daniel Fischer
Wagner<Enter>
Daniel Wagner
quit<Enter>
Donald<Enter>
Stewart<Enter>
Donald Stewart
Duck<Enter>
Donald Duck
quit<Enter>
quit<Enter>
Notice how this streams out values immediately as they are generated, rather than building up a large intermediate result and then printing all the values in one batch at the end.
Tricks
pipes is more powerful than meets the eye so this section presents some
non-obvious tricks you may find useful.
Many pipe combinators will work on unusual pipe types and the next few
examples will use the
cat pipe to demonstrate this.
For example, you can loop over the output of a
Pipe using
for, which is
how
map is defined:
map :: Monad m => (a -> b) -> Pipe a b m r
map f = for cat $ \x -> yield (f x)

-- Read this as: For all values flowing downstream, apply 'f'
This is equivalent to:
map f = forever $ do
    x <- await
    yield (f x)
You can also feed a
Pipe input using (
>~). This means we could have
instead defined the
yes pipe like this:
yes :: Monad m => Producer String m r
yes = return "y" >~ cat

-- Read this as: Keep feeding "y" downstream
This is equivalent to:
yes = forever $ yield "y"
You can also sequence two
Pipes together. This is how
drop is
defined:
drop :: Monad m => Int -> Pipe a a m r
drop n = do
    replicateM_ n await
    cat
This is equivalent to:
drop n = do
    replicateM_ n await
    forever $ do
        x <- await
        yield x
You can even compose pipes inside of another pipe:
customerService :: Producer String IO ()
customerService = do
    each [ "Hello, how can I help you?"        -- Begin with a script
         , "Hold for one second."
         ]
    P.stdinLn >-> P.takeWhile (/= "Goodbye!")  -- Now continue with a human
Also, you can often use
each in conjunction with (
~>) to traverse nested
data structures. For example, you can print all non-
Nothing elements
from a doubly-nested list:
>>> runEffect $ (each ~> each ~> each ~> lift . print) [[Just 1, Nothing], [Just 2, Just 3]]
1
2
3
Another neat thing to know is that
every has a more general type:
every :: (Monad m, Enumerable t) => t m a -> Producer a m ()
Enumerable generalizes
Foldable and if you have an effectful container
of your own that you want others to traverse using
pipes, just have your
container implement the
toListT method of the
Enumerable class:
class Enumerable t where
    toListT :: Monad m => t m a -> ListT m a
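As a sketch of what this looks like in practice (the Stream newtype below is hypothetical, not part of pipes or the tutorial), a container that wraps a Producer can implement toListT by simply delegating to Select:

```haskell
import Pipes

-- A hypothetical effectful container (not part of 'pipes'): a newtype
-- around a 'Producer' whose 'Enumerable' instance unwraps it with 'Select'.
newtype Stream m a = Stream { runStream :: Producer a m () }

instance Enumerable Stream where
    toListT = Select . runStream
```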
You can even use
Enumerable to traverse effectful types that are not even
proper containers, like
MaybeT:
input :: MaybeT IO String
input = do
    str <- lift getLine
    guard (str /= "Fail")
    return str
>>> runEffect $ every input >-> P.stdoutLn
Test<Enter>
Test

>>> runEffect $ every input >-> P.stdoutLn
Fail<Enter>
Conclusion
This tutorial covers the concepts of connecting, building, and reading
pipes code. However, this library is only the core component in an
ecosystem of streaming components. Derived libraries that build immediately
upon
pipes include:
pipes-concurrency: Concurrent reactive programming and message passing
pipes-parse: Minimal utilities for stream parsing
pipes-safe: Resource management and exception safety for
pipes
These libraries provide functionality specialized to common streaming
domains. Additionally, there are several libraries on Hackage that provide
even higher-level functionality, which you can find by searching under the
"Pipes" category or by looking for packages with a
pipes- prefix in
their name. Current examples include:
pipes-network/
pipes-network-tls: Networking
pipes-zlib: Compression and decompression
pipes-binary: Binary serialization
pipes-attoparsec: High-performance parsing
pipes-aeson: JSON serialization and deserialization
Even these derived packages still do not explore the full potential of
pipes functionality, which actually permits bidirectional communication.
Advanced
pipes users can explore this library in greater detail by
studying the documentation in the Pipes.Core module to learn about the
symmetry of the underlying
Proxy type and operators.
To learn more about
pipes, ask questions, or follow
pipes development,
you can subscribe to the
haskell-pipes mailing list at:
... or you can mail the list directly at:
mailto:haskell-pipes@googlegroups.com
Additionally, for questions regarding types or type errors, you might find the following appendix on types very useful.
Appendix: Types
pipes uses parametric polymorphism (i.e. generics) to overload all operations. You've probably noticed this overloading already:

- yield works within both Producers and Pipes
- await works within both Consumers and Pipes
- (>->) connects Producers, Consumers, and Pipes in varying ways
This overloading is great when it works, but when connections fail they produce type errors that appear intimidating at first. This section explains the underlying types so that you can work through type errors intelligently.
Producers,
Consumers,
Pipes, and
Effects are all special cases of a
single underlying type: a
Proxy. This overarching type permits fully
bidirectional communication on both an upstream and downstream interface.
You can think of it as having the following shape:
Proxy a' a b' b m r

Upstream | Downstream

     +---------+
     |         |
a' <==       <== b'  -- Information flowing upstream
     |         |
a  ==>       ==> b   -- Information flowing downstream
     |    |    |
     +----|----+
          v
          r
The four core types do not use the upstream flow of information. This means
that the
a' and
b' in the above diagram go unused unless you use the
more advanced features provided in Pipes.Core.
pipes uses type synonyms to hide unused inputs or outputs and clean up
type signatures. These type synonyms come in two flavors:
- Concrete type synonyms that explicitly close unused inputs and outputs of the
Proxytype
- Polymorphic type synonyms that don't explicitly close unused inputs or outputs
The concrete type synonyms use
() to close unused inputs and
Void (the
uninhabited type) to close unused outputs:
type Effect = Proxy Void () () Void

   Upstream | Downstream

       +---------+
       |         |
Void <==       <== ()
       |         |
()   ==>       ==> Void
       |    |    |
       +----|----+
            v
            r

type Producer b = Proxy Void () () b

   Upstream | Downstream

       +---------+
       |         |
Void <==       <== ()
       |         |
()   ==>       ==> b
       |    |    |
       +----|----+
            v
            r

type Consumer a = Proxy () a () Void

   Upstream | Downstream

       +---------+
       |         |
 ()  <==       <== ()
       |         |
 a   ==>       ==> Void
       |    |    |
       +----|----+
            v
            r

type Pipe a b = Proxy () a () b

   Upstream | Downstream

       +---------+
       |         |
 ()  <==       <== ()
       |         |
 a   ==>       ==> b
       |    |    |
       +----|----+
            v
            r
The polymorphic type synonyms, by contrast, leave the unused inputs or outputs universally quantified, marked with * in the following diagrams:
type Producer' b m r = forall x' x . Proxy x' x () b m r Upstream | Downstream +---------+ | | * <== <== () | | * ==> ==> b | | | +----|----+ v r
type Consumer' a m r = forall y' y . Proxy () a y' y m r

Upstream | Downstream
    +---------+
    |         |
() <==       <== *
    |         |
a  ==>       ==> *
    |    |    |
    +----|----+
         v
         r
type Effect' m r = forall x' x y' y . Proxy x' x y' y m r

Upstream | Downstream
    +---------+
    |         |
 * <==       <== *
    |         |
 * ==>       ==> *
    |    |    |
    +----|----+
         v
         r
Note that there is no polymorphic generalization of a Pipe. Like before, if you compose a Producer', a Pipe, and a Consumer':
     Producer'             Pipe               Consumer'
   +-----------+        +----------+        +------------+
   |           |        |          |        |            |
* <==       <== ()  <==        <== ()  <==          <== *
   |  stdinLn  |        |  take 3  |        |  stdoutLn  |
* ==>       ==> String ==>      ==> String ==>        ==> *
   |           |        |          |        |            |
   +-----|-----+        +-----|----+        +------|-----+
         v                    v                    v
         ()                   ()                   ()
... they fuse into an Effect':
                Effect'
+-----------------------------------+
|                                   |
* <==                           <== *
|  stdinLn >-> take 3 >-> stdoutLn  |
* ==>                           ==> *
|                                   |
+----------------|------------------+
                 v
                 ()
Polymorphic type synonyms come in handy when you want to keep the type as general as possible. For example, the type signature for yield uses Producer' to keep the type signature simple while still leaving the upstream input end open:

yield :: Monad m => a -> Producer' a m ()
This type signature lets us use yield within a Pipe, too, because the Pipe type synonym is a special case of the polymorphic Producer' type synonym:

type Producer' b m r = forall x' x . Proxy x' x () b m r
type Pipe      a b m r =             Proxy () a () b m r
The same is true for await, which uses the polymorphic Consumer' type synonym:

await :: Monad m => Consumer' a m a

We can use await within a Pipe because a Pipe is a special case of the polymorphic Consumer' type synonym:

type Consumer' a m r = forall y' y . Proxy () a y' y m r
type Pipe    a b m r =               Proxy () a () b m r
However, polymorphic type synonyms cause problems in many other cases:

- They induce higher-rank types and require you to enable the RankNTypes extension.
For example, if you try to run stdinLn directly as an Effect (e.g. runEffect stdinLn), you get a type error: stdinLn type-checks as a Producer, which has the following type:

Producer String IO () = Proxy Void () () String IO ()

The fourth type variable (the output) does not match. For an Effect this type variable should be closed (i.e. Void), but stdinLn has a String output, thus the type error:
Couldn't match expected type `Void' with actual type `String'
Any time you get type errors like these you can work through them by expanding out the type synonyms and seeing which type variables do not match.
You may also consult this table of type synonyms to more easily compare them:
type Effect      = Proxy Void () () Void
type Producer b  = Proxy Void () () b
type Consumer a  = Proxy ()   a  () Void
type Pipe a b    = Proxy ()   a  () b
type Server b' b = Proxy Void () b' b
type Client a' a = Proxy a'   a  () Void

type Effect' m r      = forall x' x y' y . Proxy x' x y' y m r
type Producer' b m r  = forall x' x      . Proxy x' x () b m r
type Consumer' a m r  = forall y' y      . Proxy () a y' y m r
type Server' b' b m r = forall x' x      . Proxy x' x b' b m r
type Client' a' a m r = forall y' y      . Proxy a' a y' y m r
Manage Students is a powerful tool for maintaining student information
and grades in WebCT. The functions in Manage Students that have to do
with maintaining student grades are often referred to as the Gradebook.
Used together with the My Grades tool, the Gradebook is the most popular
tool for students. It can also save a lot of time for faculty by creating
a safe, secure method of allowing students to view their grades and receive
feedback. Below are some of the functions that can be performed in the
Gradebook.
I. Add Columns
II. Column Functions
III. Edit Contents
IV. Setup Column
V. Release Information
VI. Export - Import Data
I. Add Columns
When you first access Manage Students in your course shell, you will
find columns titled Last Name, First Name, User ID, Midterm Grade and
Final Grade. WebCT will automatically add a column in Manage Students
for each quiz and assignment that you add to your course. If you find
you have a need for another column you also have the ability to add columns.
For example, suppose a percentage of the students' grade is based on their participation in the online discussion. If you would like the students to be able to view their discussion score in WebCT, you could create a column that you release to the students for this score. (Note: See Release Information for additional information regarding students viewing grades in Manage Students.)
From the Manage Students page:
Select Manage columns from the Organize actions and click Go
button
Columns page displays
Select the Add column button from Organize actions
The Add Column screen displays
Type name of the column in the Label field. Select column type desired.
See Column Types for more information.
Click the Add button
New column displays in the Columns page
Different columns store different types of information. For example,
text is stored in an alphanumeric column and data that needs to be manipulated
from other columns is stored in a calculated column. You can select the
column type when you first create a column, or you can convert an existing
column to a different type.
The following column types are available:
On the Columns page, the type for each column is displayed by symbol
in the Type row. Column symbols are defined in the Legend of Column
Types below the table.
Note: You cannot create a Quiz or Assignment
column. These column types are generated automatically when Quizzes or
Assignments are created in the course.
1. Alphanumeric
Alphanumeric columns consist of letters and/or numbers (e.g., section
title, letter grades). Note: Information
must not exceed one line. To add information that occupies several lines
and contains hard returns, see Text
Columns.
2. Numeric
Numeric columns contain strictly numerical information (e.g., numeric
grade, phone number).
3. Letter Grade
A letter grade column generates letter grades that correspond to the numeric grades in a specific numeric, calculated, or quiz column in the Student table, based on the numeric range you select for each letter grade. See Setup Column for additional information about selecting the numeric ranges for the letter grades.
4. Selection Box
A selection box column provides a drop-down list of predefined options that you can choose for each student. See Setup Column for additional information about creating the selection box drop-down list.
5. Calculated
You can calculate grades by entering a mathematical formula to make
calculations based on the values in numeric columns. The calculated column
displays the results of the calculation. Note:
Unlike alphanumeric or numeric columns, you do not enter information directly
into the calculated column. See Setup
Column for additional information about entering mathematical formulas.
6. Text
Text columns can have letters and numbers and occupy several lines
(e.g., addresses or comments).
II. Column Functions
Once a column is created you may find that you need to change the position the column appears in the Student table, that the column is no longer needed, or that the column alignment needs to be changed. The following column functions will help you customize the appearance of the columns in the Student table.
If you find that a column's position in the Student table is not convenient
you can move it to another position. For example, your Discussion Grade
letter grade column that is based on the data from the Discussion Score
numeric column is separated by two other columns. Although the two columns
do not need to be next to each other it is more convenient for them to
be adjacent.
Select the checkbox of the column you would like to move in the Select
row.
Select the number of spaces you would like to move the column in the
Move item left (right) Organize action and click the Go
button
Column page displays with the column moved as requested
If you find that you have created a column that you no longer need, the
delete function will allow you to remove a column from the Student table.
Note: You will be unable to delete a Quiz
or Assignment column that WebCT automatically creates until the Quiz or
Assignment that the column is associated with is deleted.
Caution:
You will not be able to "undo" this process. The column and
all the data in it will be lost forever if the column is deleted. If you
are unsure, make a backup of the course before removing the column. See
Restoring and Resetting
a WebCT course into CE 4.1 for assistance with making a backup of
your course.
Select the checkbox of the column(s) you would like to delete in the
Select row.
Select Delete columns Organize action
Delete column confirmation displays
Click OK button
Columns page displays without column that was deleted.
The Student table can be sorted by a column or a combination of columns.
For example, you can sort student records so they appear in alphabetical
order according to surname.
From Manage Students page:
Click the column name that you want to use to sort the records
The screen refreshes and the column that you sorted appears with a small
upward pointing arrow beside the column name
Note: This column is now the primary sort
key. If a second column is clicked, that column now becomes the primary
key, and the other column (the one you sorted previously) becomes the
secondary sort key. For example, if you want a listing sorted by last
name, click the Last Name column. But if you wanted that listing to be
sorted by User ID as the secondary key you would first need to click the
User ID column, and then click the Last Name column.
After a column is created you may find that you need to change it. For
example, there is a better title for the column, you need to change the
column type or the alignment is incorrect. Each column has attributes
that can be modified.
The following column attributes are available:
Column title
After creating a column you may find that there is a typo in the column
title or that it just doesn't correctly describe the data in the column.
Use this function to change the column title. Note:
You cannot rename the Last Name, First Name, and User ID columns. Also,
renaming a quiz in the Student table does not change the quiz's name on the Quiz page.
Select the checkbox of the column you would like to rename
Type the new column name in the Change column label: field and
click the Go button
The screen refreshes and the Column page displays with the newly name
column
Column Type
You may find that you have created a column with the incorrect column
type. For example, you wanted WebCT to calculate a letter grade based
on the data in a numeric column but when you created the column you chose
an alphanumeric column instead of a letter grade column. You can convert
the following column types:
Note: See Column
Types for additional information regarding the types of columns available
and how each column type is used.
Select the checkbox of the column whose type you would like to convert
Select Convert column type in the Organize actions
Column conversion page displays
Select the new column type desired and click convert. See Column
types for information about the different column types available.
The Convert Confirmation page displays. Verify that the data conversion
looks accurate and click Convert.
Column page displays with new column type
Alignment
You have the choice to align data in a column to the left, right or
in the center. For example, it is often easier to read numeric data when
it is aligned to the right. Use this function to align data in columns
to your personal preference.
Select the checkbox of the column that needs the alignment changed
Select the new alignment option desired and click the Go button
The screen refreshes the new alignment is displayed in the Alignment
row
Invisible/Visible in the Student Table
To temporarily reduce the size of the Student table, you can make
a column invisible. Note: The column is
hidden in the Student table only. If you have released the column to the
students, the column is still displayed to students in the My Grades tool.
By default, all columns are visible in the Student table.
From the Manage Students page:
Select the checkbox of the column that you would like to hide in the
Student table
Select Yes to hide the column in the Student table or No to make the
column visible in the Student table. Click the Go button.
The screen refreshes with the new setting in the Hidden row.
Note: You will always be able to view
columns while on the Columns page. It is not until you return to the Manage
Students page that a hidden column is not visible in the Student table.
Decimal Place
You can set the number of decimal places to display in numeric and
calculated columns. You have the choice of displaying 0,1,2 or All decimal
places.
Select the checkbox of the column(s) that you would like to change the
decimal place
Select the decimal place you would like to show in the column(s) and
click the Go button
The screen refreshes with the new setting in the Decimal row
You may find that you are always changing the same column attributes.
For example, the WebCT default for numeric columns is right alignment
and display 2 decimal places. If your personal preference is to have all
numeric columns aligned left and display 1 decimal place, use this function
to change the attributes that the column receives when it is first created.
Select Set column defaults button
Change the desired column attributes for each column type and click the
Update button
Columns page displays. The next time a new column is added it will have
the new attributes selected
III. Edit Contents
This feature allows you to enter or edit the contents of an entire column
at once.
Click the Edit link in the column you would like to edit.
Edit Column Values page displays
Put in the column data or select the correct option from the drop-down
menu and click the Update button
Manage Students page displays with the new column data
IV. Setup Column
Letter Grade, Selection Box and Calculated column types
all require some setup after the columns are created in order for them
to work correctly.
In order for WebCT to assign the correct letter grade for each student's score, you must establish the grading scheme you would like it to follow.
From the Manage Students page:
Select the Letter Grade column that needs to be setup
Select the Setup column button from the Organize actions
Letter Grade Editor displays
Note: You can also access the Letter
Grade Editor by selecting the Grading Scheme link that appears
right below the column title on the Manage Students page
Select the column that you want to apply the grading scheme to
Enter the new Grading Scheme for the column you selected
Change the lower limit and/or letter grade of each Range %:
Under Lower Limit %, enter the minimum percentage that a student must
achieve to receive the corresponding letter grade. For example, if a student
must achieve a minimum of 60% to receive a letter grade of "C,"
you would enter 60 as the lower limit. That means, a student who achieves
59.9999% will receive the next letter below a "C."
Change the letter grade:
Under Letter Grade, enter the new letter grade.
Update the Range % field:
Click Refresh ranges.
Add a grading range:
To add a row below a particular range, select the range's check box, and
click Add Row. The new row is added below the existing range. Note:
To add more than one row at a time, select multiple check boxes.
Under Lower Limit %, enter the new lower limit.
Under Letter Grade, enter the new letter grade.
Click Refresh ranges.
Delete a range:
Select its check box, click Delete Row, and then click Refresh ranges.
Note: If you will be using this same
grading scheme throughout the course, you can save it as the default.
After clicking the Set as course default button, all Letter Grade
columns added to the course will have this grading scheme.
After making all desired changes, click the Update button
Manage Students screen displays with letter grades displaying based on
criteria just established.
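If you want to sanity-check a grading scheme before entering it, the lower-limit rule described above is easy to reproduce outside WebCT. The sketch below is plain Python with invented example ranges (the function name and the 85/70/60/50 cut-offs are ours, not WebCT's): a score receives the letter with the highest lower limit it still reaches.

```python
def letter_grade(score, ranges):
    # WebCT-style lower-limit ranges: the score gets the letter whose
    # lower limit is the highest one the score still reaches.
    best = None
    for lower, letter in sorted(ranges):  # ascending by lower limit
        if score >= lower:
            best = letter
    return best

# Example scheme: a "C" requires a minimum of 60%.
ranges = [(85, "A"), (70, "B"), (60, "C"), (50, "D"), (0, "F")]

print(letter_grade(60, ranges))       # C
print(letter_grade(59.9999, ranges))  # D  (the next letter below "C")
```

This mirrors the tutorial's example: a student who achieves 59.9999% receives the letter below a "C".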
You must create the selection box drop-down list options for a selection
box. For example, you are using a selection box column to identify presentation
group names for each student. You must add the group names to selection
box list so that you can select the correct group name for each student.
Selection Box Editor displays
Note: You can also access the Selection
Box Editor by selecting the Selection link that appears right below
the column title on the Manage Students page
Enter the options you would like to appear in the selection box. Click
the Update button
Manage Students screen displays with selection box options available
for selecting. Note: See Edit
Contents for assistance with entering the data for the selection box.
You must enter the mathematical formula that you would like WebCT to use to calculate a student's score. The results of the calculation are displayed in the calculated column. For example, suppose a student's final grade is a weighted combination of scores from several other columns. The mathematical formula used to calculate the student's final score must be set up in the calculated column so that WebCT can display the final score in the column.
Select the Calculated Grade column that needs to be setup
Calculation Editor displays
Note: You can also access the Calculation
Editor by selecting the Formula link that appears right below the
column title on the Manage Students page
Enter the desired formula
Guidelines on using the Calculation Editor:
Click the Update button when your formula is complete
Note: See Export - Import Data
for information on externally calculating complicated formulas and then
returning the data to WebCT.
The Manage Students page displays with the resulting calculation from
the Calculated column
Note: If there is an error in the formula
used in a Calculated column, all columns in the Student table can be affected.
Be sure to carefully review your formula as well as the resulting calculation.
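Because a formula error can ripple through the whole Student table, it can help to check the arithmetic outside WebCT first. Below is a minimal sketch of a weighted calculated column; the 40/60 weights, column names, and function name are illustrative assumptions, not from the tutorial.

```python
def final_score(midterm, final_exam, weights=(0.4, 0.6)):
    # Mirrors a calculated column of the form
    # {Midterm} * 0.4 + {Final Exam} * 0.6
    w_mid, w_fin = weights
    return midterm * w_mid + final_exam * w_fin

print(final_score(80, 90))  # 86.0
```

Running a few known inputs through the formula by hand like this makes it easy to spot a mistyped weight before it affects every student's row.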
V. Release Information
If you want students to see their own information, such as quiz results
and assignment grades, you must first add the My Grades tool to your course,
and then release the column containing the information. You can also release
statistics for these released columns. (Note: See Add Page or Tool for assistance adding the My Grades tool to your course.)
Select the column(s) that you would like the students to be able to view
in the My Grades tool
Select Yes in the Release columns option and click the Go
button
The screen refreshes with Yes displaying in the Released row
You can also specify how students view statistics for released columns
in My Grades. The statistics available are graded out of, number of records,
highest grade, lowest grade, mean grade, median grade and standard deviation.
You can specify how students view statistics for the following columns:
From the Columns page:
Select the column(s) for which you would like to specify how students view statistics in the My Grades tool
Select desired Show statistics option and click the Go
button
The screen refreshes with option selected displaying in the Statistics
row
VI. Export - Import Data
For complicated grade calculations, it is sometimes easier to use external
software. Data from the Student table can be exported from WebCT and imported
into software of choice where the complicated formulas can be calculated.
The data can then be imported back into WebCT to be released to students.
Select Download from Options: Records Actions and click the Go
button
Download Student Records page displays
Select how you would like the records delimited and click Download
button
You will receive a confirmation screen that you are downloading a file
to your local computer. A dialog box will appear requesting the location
you would like to save the file. (The look of this confirmation screen
and dialog box will vary depending on the operating system and browser
you are using.) Name the file, select the location and click the Save
button.
The Student table is saved to your local computer as a text file.
Open the file in your software of choice (e.g. Excel, Lotus 1-2-3, Quattro
Pro) and create the formulas needed to calculate your student grades.
First, prepare the text file. You can import student data from a text
(.txt or .csv) file. In the first line, enter the field names to be updated
or created. Note: Always include a field
for User ID, as it uniquely identifies each student record. It is recommended
that only User ID and new data to be imported be included in the
import file to reduce import errors.
Separate each field with a field separator consisting of a comma, a space,
or a tab. If you are importing data for existing student records:
Enter the data for each student on a separate line. The data must be entered
in the order specified in the first line and must be separated by the
same field separator.
Sample text file
User ID,Midterm grade
00000001,B+
00000002,A-
00000003,C
00000004,B+
Note: Saving the file as a .csv (comma
separated values) will take care of the above formatting for you.
Note:
Most student MSU WebCT IDs contain leading zeroes. Often, these leading zeroes are "dropped" when editing the file in external software (e.g. Excel, Lotus 1-2-3, Quattro Pro). In order to ensure that the data is imported correctly and to avoid error messages when uploading the data, you will need to take the necessary steps to keep the leading zeroes in the User ID column. Most of the time, this involves treating the column containing the User ID as a text column. You will want to verify how this is accomplished in your software of choice.
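If you prepare the import file programmatically instead of in a spreadsheet, the leading-zero problem disappears, because everything stays text. Here is a sketch using Python's standard csv module; the file name and grades are invented for illustration.

```python
import csv

# Write a WebCT-style import file: field names first, then one
# line per student, with User ID included to identify each record.
rows = [
    ("00000001", "B+"),
    ("00000002", "A-"),
    ("00000003", "C"),
]

with open("grades.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["User ID", "Midterm grade"])
    writer.writerows(rows)

# Reading it back: the csv module keeps every field a string, so
# "00000001" is never collapsed to 1 the way a spreadsheet might.
with open("grades.csv", newline="") as f:
    records = list(csv.reader(f))

print(records[1][0])  # 00000001
```

The same applies to any scripting tool: as long as the User ID is read and written as text, the leading zeroes survive the round trip.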
Select Add/Import students from the Options:Records Actions and
click the Go button
Add Students page displays
Click the Browse button to open the WebCT File Browser
The WebCT File Browser displays. Click the Browse button to locate
the file on your local computer to upload to WebCT
Choose file dialog box displays. Locate and select the local file you
would like to upload and click the Open button
The File Browser page displays with the selected file in the filename
field. Note: If you would like to verify
you have selected the correct file you can click within the Filename field
and use your arrow keys to scroll through the field.
Select the desired folder from the Upload to: drop-down and click
the Upload button
The uploaded file displays and is automatically selected. Click the Add
selected button
Add Students page displays with the selected filename in the Import From
File - Filename field. Select delimiter from Separator: drop-down
and click the Import button
Import Confirmation: New Column Resolution screen displays if
the file you are importing has a column that does not exist in the WebCT
Student table. Select how you would like WebCT to resolve the column and
click the Continue button. Note:
If you are importing more than one new column you will receive the Import
Confirmation: New Column Resolution screen for each new column you are
importing. Select how you would like WebCT to resolve the column and click
the Continue button each time you receive the Import Confirmation:
New Column Resolution screen.
Note: If the file you are importing contains
all the same columns as the WebCT Student table you will not see the Import
Confirmation: New Column Resolution screen.
Import Confirmation: Field Name Resolved screen displays. Verify WebCT
is correctly resolving the field names and click the Continue button.
Note: You can only import data into numeric
or alphanumeric column types. If you are trying to import data into a
column that exists in the WebCT student table and WebCT is not allowing
you to select that column, verify that the column in the Student table
is either numeric or alphanumeric.
Import Confirmation: Final Confirmation screen displays. Select the column
type desired for new fields and click the Continue button.
Manage Students page displays with the new column and/or data. | http://btc.montana.edu/distributed/webct41/gradebook.htm | crawl-001 | refinedweb | 3,577 | 59.33 |
Editing Unity script variables in the Inspector – The case for Encapsulation & [SerializeField]
If you’ve read many Unity tutorials, you may already know that it’s very easy to edit your script fields in the Unity inspector.
Most Unity tutorials (including on the official page) tell you that if you have a MonoBehaviour attached to a GameObject, any public field can be edited.
While that does technically work, I want to explain why it’s not the best way to setup your scripts, and offer an alternative that I think will help you in the future.
In this article, you’ll learn how to use proper Encapsulation while still taking full advantage of the Unity Inspector.
Take a look at this “Car.cs” script.
using UnityEngine;

public class Car : MonoBehaviour
{
    public Tire FrontTires;
    public Tire RearTires;

    public Tire FrontRightTire;
    public Tire FrontLeftTire;
    public Tire RearRightTire;
    public Tire RearLeftTire;

    private void Start()
    {
        // Instantiate Tires
        FrontRightTire = Instantiate(FrontTires);
        FrontLeftTire = Instantiate(FrontTires);
        RearRightTire = Instantiate(RearTires);
        RearLeftTire = Instantiate(RearTires);
    }
}
If you look at the Start method, you can tell that the fields “FrontTires” & “RearTires” are referring to prefabs that will be be used to instantiate the 4 tires of the car.
Once we’ve assigned some Tire prefabs, it looks like this in the Inspector.
In play mode, the Start method will instantiate the 4 actual tires on our car and it’ll look like this.
Problem #1 – Confusion
The first thing you might realize is that there could be some confusion about which fields to assign the prefab to.
You’ve just seen the code, or in your own projects, perhaps you’ve just written it, and it may seem like a non-issue.
But if your project ever grows, it’s likely others will need to figure out the difference, and to do so, they’ll need to look at the code too.
If your project lasts more than a few days/weeks, you also may forget and have to look back through the code.
Now you could solve this with special naming. I’ve seen plenty projects where the “Prefab” fields had a prefix or suffix like “Front Tires Prefab”.
That can also work, but then you still have 4 extra fields in there that you have to read every time. And remember, this is a simple example, your real classes could have dozens of these fields.
Fix #1 – Let’s Use Properties for anything public
To resolve this, let’s change the entries we don’t want to be editable into Properties.
Now let’s change the “Car.cs” script to match this.Now let’s change the “Car.cs” script to match this.
Microsoft recommends you make your fields all private and use properties for anything that is public. There are plenty of benefits not described in this article, so feel free to read in detail from Microsofts article Fields(C# Programming Guide)
using UnityEngine;

public class Car : MonoBehaviour
{
    public Tire FrontTires;
    public Tire RearTires;

    public Tire FrontRightTire { get; private set; }
    public Tire FrontLeftTire { get; private set; }
    public Tire RearRightTire { get; private set; }
    public Tire RearLeftTire { get; private set; }

    private void Start()
    {
        // Instantiate Tires
        FrontRightTire = Instantiate(FrontTires);
        FrontLeftTire = Instantiate(FrontTires);
        RearRightTire = Instantiate(RearTires);
        RearLeftTire = Instantiate(RearTires);
    }
}

Here's what it looks like in the Inspector
With that change, you may be thinking we’ve resolved the issue and everything is good now.
While it’s true that confusion in the editor is all cleared up, we still have one more problem to address.
That problem is lack of Encapsulation.
Problem #2 – No Encapsulation
“In general, encapsulation is one of the four fundamentals of OOP (object-oriented programming). Encapsulation refers to the bundling of data with the methods that operate on that data.”
There are countless articles and books available describing the benefits of encapsulation.
The key thing to know is that properly encapsulated classes only expose what’s needed to make them operate properly.
That means we don’t expose every property, field, or method as public.
Instead, we only expose the specific ones we want to be accessed by other classes, and we try to keep them to the bare minimum required.
Why?
We do this so that our classes/objects are easy to interact with. We want to minimize confusion and eliminate the ability to use the classes in an improper way.
You may be wondering why you should care if things are public. Afterall, public things are easy to get to, and you know what you want to get to and will ignore the rest.
But remember, current you will not be the only one working on your classes.
If your project lasts beyond a weekend, you need to think about:
- other people – make it hard for them to misuse your classes.
- and just as important, there’s future you.
Unless you have a perfect memory, good coding practices will help you in the future when you’re interacting with classes you wrote weeks or months ago.
Problem #2 – The Example
Let’s look at this “Wall” script now to get an idea of why proper encapsulation is so important.
using UnityEngine;

public class Wall : MonoBehaviour
{
    public void Update()
    {
        if (Input.GetButtonDown("Fire1"))
            DamageCar(FindObjectOfType<Car>());
    }

    public void DamageCar(Car car)
    {
        car.FrontTires.Tread -= 1;
        car.RearTires.Tread -= 1;
    }
}
The “DamageCar” method is supposed to damage all of the wheels on the car by reducing their Tread value by 1.
Do you see what’s wrong here?
If we look back to the “Car.cs” script, “FrontTires” & “RearTires” are actually the prefabs, not the instantiated tires the car should be using.
In this case, if we execute the method, we’re not only failing to properly damage our tires, we’re actually modifying the prefab values.
This is an easy mistake to make, because our prefab fields that we we shouldn’t be interacting with aren’t properly encapsulated.
Problem #2 – How do we fix it?
If we make the “FrontTires” & “RearTires” private, we won’t be able to edit them in the inspector… and we want to edit them in the inspector.
Luckily, Unity developers knew this would be a need and gave us the ability to flag our private fields as editable in the inspector.
[SerializeField]
Adding the [SerializeField] attribute before private fields makes them appear in the Inspector the same way a public field does, but allows us to keep the fields properly encapsulated.
Take a look at the updated car script
using UnityEngine;

public class Car : MonoBehaviour
{
    [SerializeField] private Tire _frontTires;
    [SerializeField] private Tire _rearTires;

    public Tire FrontRightTire { get; private set; }
    public Tire FrontLeftTire { get; private set; }
    public Tire RearRightTire { get; private set; }
    public Tire RearLeftTire { get; private set; }

    private void Start()
    {
        FrontRightTire = Instantiate(_frontTires);
        FrontLeftTire = Instantiate(_frontTires);
        RearRightTire = Instantiate(_rearTires);
        RearLeftTire = Instantiate(_rearTires);
    }
}

As you can see, we no longer expose the "FrontTires" and "RearTires" fields outside of our class (by marking them private).
In the inspector, we still see them available to be assigned to.
Now our problems are solved and our class is properly encapsulated!
You may also notice that the casing on them has been changed. While this is not required to properly encapsulate your objects, it is very common practice in the C# community to denote private fields with camel case prefixed by an underscore. If you don’t like the underscore, consider at least using camel casing for your private fields and reserve pascal casing for public properties.
I'm trying to sort a list of names in alphabetical order. I thought I would be able to do it in my LINQ statement, but that turned out not to be the case and it gave me an error. Does anybody know why this happens and how to fix it?
Here is my join:
public IQueryable<Supplier> GetAllSuppliersByClientWithClaims(int ClientID) { return (from s in db.Suppliers where s.ClientID == ClientID join h in db.Headers on new { a = s.ClientID, b = s.SupplierID } equals new { a = h.ClientID, b = h.SupplierID } orderby s ascending select s); }
Here is the dropdown for the view:
@Html.DropDownListFor(m => m.ReportTypeOptions.First().ReportID, new SelectList(Model.ReportTypeOptions, "ReportID", "ReportName"), "Select Report", new { @class = "GRDropDown", @id = "ReportDD", onchange="myFunction()"}) | http://www.howtobuildsoftware.com/index.php/how-do/dgl/c-linq-dbsortclause-expressions-must-have-a-type-that-is-order-comparable-parameter-name-key | CC-MAIN-2019-39 | refinedweb | 128 | 54.49 |
NLTK revisited: why
When you start working with some text-analysis project, sooner or later you will encounter the following problem: Where to find sample text, how to get resources, where should I start. When I first had a contact (Polish language post) with NLP I didn’t appreciate the power that lies behind the NLTK - the Python first-choice library for NLP. However, after several years, I see that I could use it earlier. Why? NLTK comes with an easy access to various sources of text. And I am going to show you what particularly I like and what caught my attention when studying the first 3 chapters of an official book.
And what is the end goal of all this? I would like to (finally) build a Suggestion tool — a kind of Virtual Assistant to help in the decision-making process.
Bundled resources available
brown and its categories
NLTK comes with various corpora, i.e., big packs of text. You can use them as shown in the following example: all you need to do is download the appropriate corpus and start exploring it.
import nltk

nltk.download('brown')
files = nltk.corpus.brown.fileids()
print(f"'Brown' corpus contain {len(files)} files")

[nltk_data] Downloading package brown to /root/nltk_data...
[nltk_data] Unzipping corpora/brown.zip.
'Brown' corpus contain 500 files
This corpus contains texts organized into categories, so it fits nicely into the classification area of machine learning. Below are the categories, together with some sample sentences.
print("'brown' contains following categories %s" % nltk.corpus.brown.categories())
brown_adventure = nltk.corpus.brown.sents(categories='adventure')[0:5]
brown_government = nltk.corpus.brown.sents(categories='government')[0:5]
print("Following we have some sentences from 'adventure' category:")
for sent in brown_adventure:
    print(" > " + " ".join(sent))
print("And here we have some sentences from 'government' category:")
for sent in brown_government:
    print(" > " + " ".join(sent))

'brown' contains following categories ['adventure', 'belles_lettres', 'editorial', 'fiction', 'government', 'hobbies', 'humor', 'learned', 'lore', 'mystery', 'news', 'religion', 'reviews', 'romance', 'science_fiction']
Following we have some sentences from 'adventure' category:
 > Dan Morgan told himself he would forget Ann Turner .
 > He was well rid of her .
 > He certainly didn't want a wife who was fickle as Ann .
 > If he had married her , he'd have been asking for trouble .
 > But all of this was rationalization .
And here we have some sentences from 'government' category:
 > The Office of Business Economics ( OBE ) of the U.S. Department of Commerce provides basic measures of the national economy and current analysis of short-run changes in the economic situation and business outlook .
 > It develops and analyzes the national income , balance of international payments , and many other business indicators .
 > Such measures are essential to its job of presenting business and Government with the facts required to meet the objective of expanding business and improving the operation of the economy .
 > Contact
 > For further information contact Director , Office of Business Economics , U.S. Department of Commerce , Washington 25 , D.C. .
The brown corpus also carries a very important component of NLP, namely Part-Of-Speech (POS) tagging. How is it organized? Let us look at one of the sentences printed a moment ago.
adv_words = nltk.corpus.brown.words(categories='adventure')
print(adv_words[:10])
adv_words = nltk.corpus.brown.tagged_words(categories='adventure')
print(adv_words[:10])

['Dan', 'Morgan', 'told', 'himself', 'he', 'would', 'forget', 'Ann', 'Turner', '.']
[('Dan', 'NP'), ('Morgan', 'NP'), ('told', 'VBD'), ('himself', 'PPL'), ('he', 'PPS'), ('would', 'MD'), ('forget', 'VB'), ('Ann', 'NP'), ('Turner', 'NP'), ('.', '.')]
Yes! It is tagged and ready to be analyzed. What do these symbols mean? They are symbols for parts of speech, nicely described here. E.g., NP stands for Proper Noun and VBD for Verb, past tense.
sentiments
One of the areas where NLP is used is Sentiment Analysis, which plays an important role in digital marketing. Imagine being able to process a vast number of opinions and instantly recognize whether a product is approved or rejected by the community; spotting trends also becomes possible. So which bundled NLTK tool deals with sentiments? One of them is opinion_lexicon:
from nltk.corpus import opinion_lexicon

nltk.download('opinion_lexicon')
negatives = opinion_lexicon.negative()[:10]
positives = opinion_lexicon.positive()[:10]
print("If you find some negative words, here you are: %s" % negatives)
print("But let us try to see the positive side of life! Described with these words: %s" % positives)

[nltk_data] Downloading package opinion_lexicon to /root/nltk_data...
[nltk_data] Unzipping corpora/opinion_lexicon.zip.
If you find some negative words, here you are: ['2-faced', '2-faces', 'abnormal', 'abolish', 'abominable', 'abominably', 'abominate', 'abomination', 'abort', 'aborted']
But let us try to see the positive side of life! Described with these words: ['a+', 'abound', 'abounds', 'abundance', 'abundant', 'accessable', 'accessible', 'acclaim', 'acclaimed', 'acclamation']
and much more…
There are lots of other corpora available. It is not the job of this post/notebook to repeat what you can read yourself in the official documentation. With NLTK, after downloading some material, you have access to resources such as:
- multilingual corpora (like the _Universal Declaration of Human Rights_ with 300+ languages)
- lexical resources (WordNet)
- pronouncing dictionaries (CMU Pronouncing Dictionary)
Lots of things to browse. It is worth taking a look at some of them, just to have that "I saw it somewhere…" feeling when facing an NLP task.
Fetch anything
If the bundled resources are not enough for you, just start using external ones.
requests
Python has libraries for everything, so it is possible to use any resource available on the Net in your app, given only its URL.
import requests

url = ''
resp = requests.get(url)
blog_text = resp.text
blog_text[:200]

'\r\n<!DOCTYPE html>\r\n<html lang="en-US" prefix="og: fb: video: ya:">\r\n<head>\r\n    <meta charset="UTF'
RSS
Ready to consume RSS? With Python, nothing is easier. You can create an instance of nltk.Text with an RSS feed as the input. Here is a snippet showing how it could be done:
!pip install feedparser

import feedparser

d = feedparser.parse('')  # the feed URL was removed in the original post
print("Look ma! I've just parsed RSS from a very nice Polish blogging platform.")
print("It has a title " + d.feed.title)
print("And there we go with 5 exemplary entries:")
for entry in d.entries[:5]:
    print(' > ' + entry.title)

Collecting feedparser
Successfully built feedparser
Installing collected packages: feedparser
Successfully installed feedparser-5.2.1
Look ma! I've just parsed RSS from a very nice Polish blogging platform.
It has a title JVMBloggers
And there we go with 5 exemplary entries:
 > Odpowiedź: 42
 > Thanks for explaining the behaviour of dynamic (partition overwrite) mode.
 > Non-blocking and async Micronaut - quick start (part 3)
 > Strefa VIP
 > Mikroserwisy – czy to dla mnie?
cleaning
When your HTML doc is fetched, you probably have a document full of HTML mess, and there is no added value in having <body> tags in your text. So some clean-up work has to be done, and there are tools that can make it happen, but they are not part of the nltk package. Let us take BeautifulSoup as an example.
from bs4 import BeautifulSoup

soup = BeautifulSoup(blog_text)
content = soup.find("div", {'class': "blog-content"})
text_without_markup = content.get_text()[:100]
text_without_markup

'\n\n\nWhat’s New for Apache Spark on Kubernetes in the Upcoming Apache Spark 2.4 Release\n\n\nSeptember 26'
Normalization - languages are not easy
Your language is not easy. If you are Polish like me, this is soooo true. But even English and other European languages add complexity to NLP. Why? Words take different forms, and grammar rules have to be respected when a machine analyzes text. Take the English word going as an example: its lemma is the verb go, and the -ing ending has to be recognized and stripped before analysis. What are the 3 processes with built-in support in NLTK? Read on.
Tokenization
Text consists of sentences, and these contain words. Oftentimes we would like to have words presented as vectors, since we will apply some algebra to them. The simplest approach to tokenization can be implemented as follows; however, it has limitations. Instead, you can use any of the variety of tokenizers available in the nltk.tokenize package.
# write tokenizer yourself?
import re

text = "Two smart coders are coding very quickly. Why? The end of the sprint is coming! The code has to be finished!"
tokens_manual = re.split(r"[\s+]", text)
print("Tokens taken manually %s " % tokens_manual)

# or maybe choose the one from the abundance in `nltk`
import nltk
from nltk.data import load
from nltk.tokenize.casual import (TweetTokenizer, casual_tokenize)
from nltk.tokenize.mwe import MWETokenizer
from nltk.tokenize.punkt import PunktSentenceTokenizer
from nltk.tokenize.regexp import (RegexpTokenizer, WhitespaceTokenizer,
                                  BlanklineTokenizer, WordPunctTokenizer,
                                  wordpunct_tokenize, regexp_tokenize,
                                  blankline_tokenize)
from nltk.tokenize.repp import ReppTokenizer
from nltk.tokenize.sexpr import SExprTokenizer, sexpr_tokenize
from nltk.tokenize.simple import (SpaceTokenizer, TabTokenizer, LineTokenizer,
                                  line_tokenize)
from nltk.tokenize.texttiling import TextTilingTokenizer
from nltk.tokenize.toktok import ToktokTokenizer
from nltk.tokenize.treebank import TreebankWordTokenizer
from nltk.tokenize.util import string_span_tokenize, regexp_span_tokenize
from nltk.tokenize.stanford_segmenter import StanfordSegmenter
from nltk.tokenize import word_tokenize

nltk.download('punkt')
tokens = word_tokenize(text)
print(tokens)

Tokens taken manually ['Two', 'smart', 'coders', 'are', 'coding', 'very', 'quickly.', 'Why?', 'The', 'end', 'of', 'the', 'sprint', 'is', 'coming!', 'The', 'code', 'has', 'to', 'be', 'finished!']
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
['Two', 'smart', 'coders', 'are', 'coding', 'very', 'quickly', '.', 'Why', '?', 'The', 'end', 'of', 'the', 'sprint', 'is', 'coming', '!', 'The', 'code', 'has', 'to', 'be', 'finished', '!']
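If you want something between the naive split and a full NLTK tokenizer, a single regular expression already separates punctuation from words (a small sketch, not part of the original post):

```python
import re

text = "Two smart coders are coding very quickly. Why?"
# \w+ matches runs of word characters; [^\w\s] matches each punctuation mark
tokens = re.findall(r"\w+|[^\w\s]", text)
print(tokens)
```

This avoids the `'quickly.'` and `'Why?'` artifacts of the manual split, at the cost of not handling contractions or abbreviations the way `word_tokenize` does.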
Stemming
So one of the tasks that can be done is stemming: getting rid of word endings. Let us see what the popular Porter stemmer does to the text we already tokenized.
porter = nltk.PorterStemmer()
tokens_stemmed = [porter.stem(token) for token in tokens]
print(tokens_stemmed)

['two', 'smart', 'coder', 'are', 'code', 'veri', 'quickli', '.', 'whi', '?', 'the', 'end', 'of', 'the', 'sprint', 'is', 'come', '!', 'the', 'code', 'ha', 'to', 'be', 'finish', '!']
Lemmatization
If stemming is not enough, lemmatization has to be done, so that your words can be checked against a real dictionary. Below is an example of running it on our text sample.
nltk.download('wordnet')
wnl = nltk.WordNetLemmatizer()
lemmas = [wnl.lemmatize(token) for token in tokens]
print(lemmas)

[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Unzipping corpora/wordnet.zip.
['Two', 'smart', 'coder', 'are', 'coding', 'very', 'quickly', '.', 'Why', '?', 'The', 'end', 'of', 'the', 'sprint', 'is', 'coming', '!', 'The', 'code', 'ha', 'to', 'be', 'finished', '!']
Summary
So where did we go? I have just worked through the NLTK book available online, chapters 2 and 3, and gave a few tools a try out of the large number available in this Natural Language Toolkit. Now it is time to explore the other chapters. Stay tuned — I have a Suggestion tool to build.
All of this was created as a Jupyter notebook.
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/lukaszkuczynski/nltk-revisited-13ea | CC-MAIN-2021-04 | refinedweb | 1,754 | 51.65 |
I started learning classes and I made this one program, and it had so many errors in it. I managed to get them down to 2. I can't seem to figure out what the problem is, though. Can anyone help me out?
Code:
#include <iostream>
using namespace std;

class Cat
{
public:
    int setAge(int age);
    int setWeight();
    int getAge();
    int getWeight();
    void Meow();
private:
    int itsAge;
    int itsWeight;
};

//=====Starting of Age=========//
int Cat::setAge(int age)
{
    itsAge = age;
}

int Cat::getAge()
{
    return itsAge;
}
//=========End of Age===========//

//=========Meow()=============//
void Cat::Meow()
{
    cout << "Meow" << endl;
}
//======End of Meow============//

//======Starting Weight==========//
int Cat::setWeight(int weight)
{
    itsWeight = weight;
}

int Cat::getWeight()
{
    return itsWeight;
}
//==========End of Weight===========//

//===-=-=-=-=-=-=-=-=-Starting Main=-=-=-=-=-=-=-=-=//
int main()
{
    int age, weight;

    cout << "How old is your cat?" << endl;
    cin >> age;
    cout << "How much does your cat weigh in pounds?" << endl;
    cin >> weight;

    Cat Frisky;
    Frisky.setAge(age);
    Frisky.Meow();
    cout << "The cat is " << Frisky.getAge() << " years old." << endl;
    Frisky.Meow();
    Frisky.setWeight(weight);
    cout << "The cat is " << Frisky.getWeight() << " pounds." << endl;
    Frisky.Meow();
    return 0;
}
How can you write well-structured, readable programs that you and others will be able to reuse easily?
>>> foo = 'Monty'
>>> bar = foo
>>> foo = 'Python'
>>> bar
'Monty'
This behaves exactly as expected. When we write bar = foo in the above code, the value of foo (the string 'Monty') is assigned to bar. That is, bar is a copy of foo, so when we overwrite foo with a new string 'Python', the value of bar is not affected. However, assignment statements do not always involve making copies in this way. Assignment always copies the value of an expression, but a value is not always what you might expect it to be. In particular, the "value" of a structured object such as a list is actually just a reference to the object:
>>> foo = ['Monty', 'Python']
>>> bar = foo
>>> foo[1] = 'Bodkin'
>>> bar
['Monty', 'Bodkin']
The line bar = foo does not copy the contents of the variable, only its "object reference." To understand what is going on here, we need to know how lists are stored in the computer's memory. After the assignment, foo and bar reference the same location in the computer's memory; updating foo will also modify bar, and vice versa.
Let’s experiment some more, by creating a variable
empty holding the empty list, then using it
three times on the next line.
>>> empty = []
>>> nested = [empty, empty, empty]
>>> nested
[[], [], []]
>>> nested[1].append('Python')
>>> nested
[['Python'], ['Python'], ['Python']]
Observe that changing one of the items inside our nested list of lists changed them all. This is because each of the three elements is actually just a reference to one and the same list in memory.
Now notice that when we assign a new value to one of the elements of the list, the assignment replaces the reference in that position only, so the change does not propagate to the others:

>>> nested = [[]] * 3
>>> nested[1].append('Python')
>>> nested[1] = ['Monty']
>>> nested
[['Python'], ['Monty'], ['Python']]
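If three independent inner lists are what you actually want, a list comprehension evaluates `[]` once per iteration, producing three distinct objects (a sketch complementing the examples above, not from the book):

```python
# Each iteration creates a fresh empty list, so no aliasing occurs.
nested = [[] for _ in range(3)]
nested[1].append('Python')
print(nested)  # only the middle list is modified
```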
Python provides two ways to check that a pair of items are the same: the is operator tests for object identity, while == tests for equality of values. Let's create a list of five references to a single list and verify both:

>>> size = 5
>>> python = ['Python']
>>> snake_nest = [python] * size
>>> snake_nest[0] == snake_nest[1] == snake_nest[2] == snake_nest[3] == snake_nest[4]
True
>>> snake_nest[0] is snake_nest[1] is snake_nest[2] is snake_nest[3] is snake_nest[4]
True
Now let’s put a new python in this nest. We can easily show that the objects are not all identical:
>>> import random
>>> position = random.choice(range(size))
>>> snake_nest[position] = ['Python']
>>> snake_nest
[['Python'], ['Python'], ['Python'], ['Python'], ['Python']]
>>> snake_nest[0] == snake_nest[1] == snake_nest[2] == snake_nest[3] == snake_nest[4]
True
>>> snake_nest[0] is snake_nest[1] is snake_nest[2] is snake_nest[3] is snake_nest[4]
False
You can do several pairwise tests to discover which position
contains the interloper, but the
id() function makes detection easier:
>>> [id(snake) for snake in snake_nest] [513528, 533168, 513528, 513528, 513528]
This reveals that the second item of the list has a distinct identifier. If you try running this code snippet yourself, expect to see different numbers in the resulting list, and don’t be surprised if the interloper is in a different position.
Having two kinds of equality might seem strange. However, it’s really just the type-token distinction, familiar from natural language, here showing up in a programming language.
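The distinction can be summarized in a few lines (a recap sketch, not from the book):

```python
a = ['Python']
b = list(a)   # a genuine copy: equal value, distinct identity
c = a         # another reference to the very same object

print(a == b, a is b)  # equal but not identical
print(a == c, a is c)  # equal and identical
```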
In the condition part of an
if statement, a non-empty string or list is
evaluated as true, while an empty string or list evaluates as
false.
>>> mixed = ['cat', '', ['dog'], []]
>>> for element in mixed:
...     if element:
...         print element
...
cat
['dog']
That is, we don’t need to say
if len(element) > 0: in the
condition.
What’s the difference between using
if...elif as opposed to using a couple of
if statements in a row? Well,
consider the following situation:
>>> animals = ['cat', 'dog']
>>> if 'cat' in animals:
...     print 1
... elif 'dog' in animals:
...     print 2
...
1

Since the if clause of the statement is satisfied, Python never tries to evaluate the elif clause, so we never get to print 2. By contrast, if we replaced the elif with a second if, we would print both 1 and 2. So an elif clause potentially gives us more information than a bare if clause: when it is satisfied, we know not only that its own condition holds, but also that the condition of the main if clause was not satisfied.
The functions all() and any() can be applied to a list (or other sequence) to check whether all or any items meet some condition:

>>> sent = ['No', 'good', 'fish', 'goes', 'anywhere', 'without', 'a', 'porpoise', '.']
>>> all(len(w) > 4 for w in sent)
False
>>> any(len(w) > 4 for w in sent)
True
So far, we have seen two kinds of sequence object: strings and lists. Another kind of sequence is called a tuple. Tuples are formed with the comma operator, and typically enclosed using parentheses:
>>> t = 'walk', 'fem', 3
>>> t
('walk', 'fem', 3)
>>> t[0]
'walk'
>>> t[1:]
('fem', 3)
>>> len(t)
3

Let's compare strings, lists, and tuples directly, and do the indexing, slicing, and length operation on each type:
>>> raw = 'I turned off the spectroroute'
>>> text = ['I', 'turned', 'off', 'the', 'spectroroute']
>>> pair = (6, 'turned')
>>> raw[2], text[3], pair[1]
('t', 'the', 'turned')
>>> raw[-3:], text[-3:], pair[-3:]
('ute', ['off', 'the', 'spectroroute'], (6, 'turned'))
>>> len(raw), len(text), len(pair)
(29, 5, 2)
Your Turn: Define a set, e.g., using set(text), and see what happens when you convert it to a list or iterate over its items.
We can iterate over the items in a sequence
s in a variety of useful ways, as shown in
Table 4-1.
The sequence functions illustrated in Table 4-1 can be combined in various ways; for
example, to get unique elements of
s sorted in reverse, use
reversed(sorted(set(s))). We can convert between these sequence types; for example, tuple(s) converts any sequence into a tuple, and list(s) converts it into a list. Other objects, such as a FreqDist, can be converted into a sequence (using list()) and support iteration:
>>> raw = 'Red lorry, yellow lorry, red lorry, yellow lorry.'
>>> text = nltk.word_tokenize(raw)
>>> fdist = nltk.FreqDist(text)
>>> list(fdist)
['lorry', ',', 'yellow', '.', 'Red', 'red']
>>> for key in fdist:
...     print fdist[key],
...
4 3 2 1 1 1
In the next example, we use tuples to re-arrange the contents of our list. (We can omit the parentheses because the comma has higher precedence than assignment.)
>>> words = ['I', 'turned', 'off', 'the', 'spectroroute']
>>> words[2], words[3], words[4] = words[3], words[4], words[2]
>>> words
['I', 'turned', 'the', 'spectroroute', 'off']
This is an idiomatic and readable way to move items inside a
list. It is equivalent to the following traditional way of doing such
tasks that does not use tuples (notice that this method needs a
temporary variable
tmp).
>>> tmp = words[2]
>>> words[2] = words[3]
>>> words[3] = words[4]
>>> words[4] = tmp
As we have seen, Python has sequence functions such as
sorted() and
reversed() that rearrange the items of a
sequence. There are also functions that modify the
structure of a sequence, which can be handy for
language processing. Thus,
zip()
takes the items of two or more sequences and “zips” them together into
a single list of pairs. Given a sequence
s,
enumerate(s) returns pairs consisting of an
index and the item at that index.
>>> words = ['I', 'turned', 'off', 'the', 'spectroroute']
>>> tags = ['noun', 'verb', 'prep', 'det', 'noun']
>>> zip(words, tags)
[('I', 'noun'), ('turned', 'verb'), ('off', 'prep'), ('the', 'det'),
 ('spectroroute', 'noun')]
>>> list(enumerate(words))
[(0, 'I'), (1, 'turned'), (2, 'off'), (3, 'the'), (4, 'spectroroute')]
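The two functions combine naturally; for instance, to number each word/tag pair (a small illustration, not from the book):

```python
words = ['I', 'turned', 'off', 'the', 'spectroroute']
tags = ['noun', 'verb', 'prep', 'det', 'noun']

# zip pairs corresponding items; enumerate attaches a running index
numbered = []
for i, (word, tag) in enumerate(zip(words, tags)):
    numbered.append('%d: %s/%s' % (i, word, tag))
print(numbered)
```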
For some NLP tasks it is necessary to cut up a sequence into two or more parts. For instance, we might want to "train" a system on 90% of the data and test it on the remaining 10%. To do this we decide the location where we want to cut the data, then cut the sequence at that location:
>>> text = nltk.corpus.nps_chat.words()
>>> cut = int(0.9 * len(text))
>>> training_data, test_data = text[:cut], text[cut:]
>>> text == training_data + test_data
True
>>> len(training_data) / len(test_data)
9

Here we verify that none of the original data is lost during this process, nor is it duplicated, and that the ratio of the sizes of the two pieces is what we intended.
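The cut-and-verify pattern is easy to package as a small helper (a sketch under the assumption that any sliceable sequence may be passed in):

```python
def split_data(seq, fraction=0.9):
    """Split seq into a training portion and a test portion."""
    cut = int(fraction * len(seq))
    return seq[:cut], seq[cut:]

text = list('abcdefghij')
train, test = split_data(text)
assert train + test == text   # nothing lost, nothing duplicated
print(len(train), len(test))
```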
>>> words = 'I turned off the spectroroute'.split()
>>> wordlens = [(len(word), word) for word in words]
>>> wordlens.sort()
>>> ' '.join(w for (_, w) in wordlens)
'I off the turned spectroroute'
Each of the preceding lines of code contains a significant feature: a string is split into a list of words; a list comprehension builds a list of tuples, where each tuple consists of a number (the word length) and the word; the sort() method sorts the list in place; and finally we discard the length information and join the words back into a single string. (The underscore is just a regular Python variable, but by convention its name indicates that we will not use its value.) The choice of lists versus tuples here reflects their typical roles, as in the following lexicon:
>>> lexicon = [
...     ('the', 'det', ['Di:', 'D@']),
...     ('off', 'prep', ['Qf', 'O:f'])
... ]

Here, a lexicon is represented as a list because it is a collection of objects of a single type — lexical entries — of no predetermined length. An individual entry is represented as a tuple because it is a collection of objects with different interpretations, such as the orthographic form, the part of speech, and the pronunciations. Note that these pronunciations are stored using a list. (Why?)
A good way to decide when to use tuples versus lists is to ask
whether the interpretation of an item depends on its position. For
example, a tagged token combines two strings having different
interpretations, and we choose to interpret the first item as the
token and the second item as the tag. Thus we use tuples like this:
('grail', 'noun'). A tuple of the
form
('noun', 'grail') would be
nonsensical. An important difference between lists and tuples is that lists are mutable, whereas tuples are immutable. In other words, lists can be modified, whereas tuples cannot. Here are some of the operations on lists that do in-place modification of the list:
>>> lexicon.sort()
>>> lexicon[1] = ('turned', 'VBD', ['t3:nd', 't3`nd'])
>>> del lexicon[0]
Your Turn: Convert lexicon to a tuple, using lexicon = tuple(lexicon), then try each of the above operations, to confirm that none of them is permitted on tuples.
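One way to carry out these checks is shown below (a minimal sketch of the Your Turn exercise, not its full solution):

```python
lexicon = (('the', 'det', ['Di:', 'D@']), ('off', 'prep', ['Qf', 'O:f']))

errors = []
try:
    lexicon.sort()                      # tuples have no sort() method
except AttributeError:
    errors.append('sort')
try:
    del lexicon[0]                      # item deletion is not supported
except TypeError:
    errors.append('del')
try:
    lexicon[0] = ('a', 'det', ['eI'])   # neither is item assignment
except TypeError:
    errors.append('assign')
print(errors)
```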
We’ve been making heavy use of list comprehensions, for compact and readable processing of texts. Here’s an example where we tokenize and normalize a text:
>>> text = '''"When I use a word," Humpty Dumpty said in rather a scornful
... tone, "it means just what I choose it to mean - neither more nor less."'''
>>> [w.lower() for w in nltk.word_tokenize(text)]
['"', 'when', 'i', 'use', 'a', 'word', ',', '"', 'humpty', 'dumpty', 'said', ...]
Suppose we now want to process these words further. We can do this by inserting the preceding expression inside a call to some other function, but Python allows us to omit the brackets:
>>> max([w.lower() for w in nltk.word_tokenize(text)])
'word'
>>> max(w.lower() for w in nltk.word_tokenize(text))
'word'

The second line uses a generator expression. This is more than a notational convenience: in many language processing situations, generator expressions will be more efficient. In the first example, storage for the list object must be allocated before the value of max() is computed; if the text is very large, this could be slow. In the second, the data is streamed to the calling function. Since the calling function simply has to find the maximum value — the word that comes latest in lexicographic sort order — it can process the stream of data without having to store anything more than the maximum value seen so far.
Programming is as much an art as a science. The undisputed "bible" of programming, a 2,500 page multivolume work by Donald Knuth, is called The Art of Computer Programming. One practical element of style concerns layout: try to keep lines to a reasonable length. If necessary, you can break a line inside parentheses, brackets, or braces, because Python is able to detect that the line continues over to the next line, as in the following examples:
>>> cv_word_pairs = [(cv, w) for w in rotokas_words
...                          for cv in re.findall('[ptksvr][aeiou]', w)]
>>> cfd = nltk.ConditionalFreqDist(
...           (genre, word)
...           for genre in brown.categories()
...           for word in brown.words(categories=genre))
>>> ha_words = ['aaahhhh', 'ah', 'ahah', 'ahahah', 'ahh', 'ahhahahaha',
...             'ahhh', 'ahhhh', 'ahhhhhh', 'ahhhhhhhhhhhhhh', 'ha',
...             'haaa', 'hah', 'haha', 'hahaaa', 'hahah', 'hahaha']
If you need to break a line outside parentheses, brackets, or braces, you can often add extra parentheses, and you can always add a backslash at the end of the line that is broken:
>>> if (len(syllables) > 4 and len(syllables[2]) == 3 and
...     syllables[2][2] in [aeiou] and syllables[2][3] == syllables[1][3]):
...     process(syllables)
>>> if len(syllables) > 4 and len(syllables[2]) == 3 and \
...    syllables[2][2] in [aeiou] and syllables[2][3] == syllables[1][3]:
...     process(syllables)

Another factor influencing program development is programming style. Consider the following program, which computes the average length of words in the Brown Corpus in a step-by-step, procedural style:
>>> tokens = nltk.corpus.brown.words(categories='news')
>>> count = 0
>>> total = 0
>>> for token in tokens:
...     count += 1
...     total += len(token)
>>> print total / count
4.2765382469

In this program we use the variable count to keep track of the number of tokens seen, and total to store the combined length of all words. This is a low-level style. A more transparent, declarative version uses familiar built-in functions and frees the programmer from tracking loop state:
>>> total = sum(len(t) for t in tokens)
>>> print total / len(tokens)
4.2765382469

To take another example, suppose we want a sorted list of the distinct words of a text. Here is a markedly procedural version, which maintains a loop counter and inserts each unseen token at the right position:
>>> word_list = []
>>> len_word_list = len(word_list)
>>> i = 0
>>> while i < len(tokens):
...     j = 0
...     while j < len_word_list and word_list[j] < tokens[i]:
...         j += 1
...     if j == 0 or tokens[i] != word_list[j]:
...         word_list.insert(j, tokens[i])
...         len_word_list += 1
...     i += 1
The equivalent declarative version uses familiar built-in functions, and its purpose is instantly recognizable:
>>> word_list = sorted(set(tokens))
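As a middle ground between the two styles, the standard library's bisect module keeps the insertion idea of the procedural version but delegates the position search to a tested binary-search routine (a sketch, not from the book):

```python
from bisect import bisect_left

tokens = ['the', 'dog', 'gave', 'the', 'dog', 'a', 'bone']
word_list = []
for token in tokens:
    i = bisect_left(word_list, token)        # binary search for the slot
    if i == len(word_list) or word_list[i] != token:
        word_list.insert(i, token)           # insert only unseen tokens
print(word_list)
```

The result is the same as `sorted(set(tokens))`, but the list stays sorted throughout, which matters if you need to query it while it is being built.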
Another case where a loop counter seems necessary is for printing a counter with each line of output. Instead, we can use enumerate(), which processes a sequence s and produces a tuple of the form (i, s[i]) for each item in s, starting with (0, s[0]). Here we enumerate the keys of the frequency distribution, and capture the integer-string pair in the variables rank and word. We print rank+1 so that the counting appears to start from 1, as required when producing a list of ranked items.
>>> fd = nltk.FreqDist(nltk.corpus.brown.words())
>>> cumulative = 0.0
>>> for rank, word in enumerate(fd):
...     cumulative += fd[word] * 100 / fd.N()
...     print "%3d %6.2f%% %s" % (rank+1, cumulative, word)
...     if cumulative > 25:
...         break
...
  1   5.40% the
  2  10.42% ,
  3  14.67% .
  4  17.78% of
  5  20.19% and
  6  22.40% to
  7  24.29% a
  8  25.97% in
It’s sometimes tempting to use loop variables to store a maximum or minimum value seen so far. Let’s use this method to find the longest word in a text.
>>> text = nltk.corpus.gutenberg.words('milton-paradise.txt')
>>> longest = ''
>>> for word in text:
...     if len(word) > len(longest):
...         longest = word
>>> longest
'unextinguishable'
However, a more transparent solution uses two list comprehensions, both having forms that should be familiar by now:
>>> maxlen = max(len(word) for word in text)
>>> [word for word in text if len(word) == maxlen]
['unextinguishable', 'transubstantiate', 'inextinguishable', 'incomprehensible']

Note that our first solution found the first word having the longest length, while the second solution found all of the longest words (which is usually what we want). There are still cases where we want to use loop variables in a list comprehension, for example to extract successive overlapping n-grams from a list:
>>> sent = ['The', 'dog', 'gave', 'John', 'the', 'newspaper']
>>> n = 3
>>> [sent[i:i+n] for i in range(len(sent)-n+1)]
[['The', 'dog', 'gave'], ['dog', 'gave', 'John'], ['gave', 'John', 'the'],
 ['John', 'the', 'newspaper']]
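An equivalent trick uses zip() over staggered copies of the sequence — zip stops at the shortest copy, so the bounds take care of themselves (a sketch, not from the book):

```python
def ngrams(seq, n):
    # seq[i:] shifts the sequence by i; zipping n shifted copies yields n-grams
    return [list(gram) for gram in zip(*[seq[i:] for i in range(n)])]

sent = ['The', 'dog', 'gave', 'John', 'the', 'newspaper']
print(ngrams(sent, 3))
```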
It is also sometimes necessary to use a loop variable to build a multidimensional structure. For example, to build an array with m rows and n columns, where each cell is a set:

>>> m, n = 3, 7
>>> array = [[set() for i in range(n)] for j in range(m)]
>>> array[2][5].add('Alice')
>>> pprint.pprint(array)
[[set([]), set([]), set([]), set([]), set([]), set([]), set([])],
 [set([]), set([]), set([]), set([]), set([]), set([]), set([])],
 [set([]), set([]), set([]), set([]), set([]), set(['Alice']), set([])]]
Observe that the loop variables i and j are not used anywhere in the resulting object; they are just needed for a syntactically correct for statement. It is tempting to build such a structure with the multiplication idiom we used earlier for one-dimensional lists, but this replicates references to a single set, so updating one cell appears to update them all:

>>> array = [[set()] * n] * m
>>> array[2][5].add(7)
>>> pprint.pprint(array)
[[set([7]), set([7]), set([7]), set([7]), set([7]), set([7]), set([7])],
 [set([7]), set([7]), set([7]), set([7]), set([7]), set([7]), set([7])],
 [set([7]), set([7]), set([7]), set([7]), set([7]), set([7]), set([7])]]
Iteration is an important programming device. It is tempting to adopt idioms from other languages. However, Python offers some elegant and highly readable alternatives, as we have seen.
Functions provide an effective way to package and reuse program code, as already explained in "More Python: Reusing Code". For example, suppose we often want to read text from an HTML file. This involves several steps — opening the file, reading it in, normalizing whitespace, and stripping HTML markup — which we can collect into a function named get_text(), as shown in Example 4-1.
import re

def get_text(file):
    """Read text from a file, normalizing whitespace and stripping HTML markup."""
    text = open(file).read()
    text = re.sub('\s+', ' ', text)
    text = re.sub(r'<.*?>', ' ', text)
    return text

Now, any time we want to get cleaned-up text from an HTML file, we can just call get_text() with the name of the file as its only argument. Inside the function definition, the string delimited by triple quotes is known as a docstring; Python makes it available to anyone who asks for help on the function:

>>> help(get_text)
Help on function get_text in module __main__:

get_text(file)
    Read text from a file, normalizing whitespace and stripping HTML markup.
We have seen that functions help to make our work reusable and readable. They also help make it reliable. When we reuse code that has already been developed and tested, we can be more confident that it handles a variety of cases correctly. We also remove the risk of forgetting some important step or introducing a bug. We pass information to functions using the function's parameters, the parenthesized list of variables and constants following the function's name in the function definition. Here's a complete example:
>>> def repeat(msg, num):
...     return ' '.join([msg] * num)
>>> monty = 'Monty Python'
>>> repeat(monty, 3)
'Monty Python Monty Python Monty Python'

It is not necessary to have any parameters, however:
>>> def monty():
...     return "Monty Python"
>>> monty()
'Monty Python'
A function usually communicates its results back to the calling
program via the
return statement,
as we have just seen. To the calling program, it looks as if the
function call had been replaced with the function’s result:
>>> repeat(monty(), 3)
'Monty Python Monty Python Monty Python'
>>> repeat('Monty Python', 3)
'Monty Python Monty Python Monty Python'

A Python function is not required to have a return statement. Consider the following three sort functions. The third one is dangerous because a programmer could use it without realizing that it had modified its input. In general, functions should modify the contents of a parameter (my_sort1()) or return a value (my_sort2()), but not both (my_sort3()).
>>> def my_sort1(mylist):      # good: modifies its argument, no return value
...     mylist.sort()
>>> def my_sort2(mylist):      # good: doesn't touch its argument, returns value
...     return sorted(mylist)
>>> def my_sort3(mylist):      # bad: modifies its argument and also returns it
...     mylist.sort()
...     return mylist
Back in "Back to the Basics," you saw that assignment works on values, but that the value of a structured object is a reference to that object. The same is true for functions. In the following code, set_up() has two parameters, both of which are modified inside the function. We begin by assigning an empty string to w and an empty list to p. After calling the function, w is unchanged, while p is changed:
>>> def set_up(word, properties):
...     word = 'lolcat'
...     properties.append('noun')
...     properties = 5
...
>>> w = ''
>>> p = []
>>> set_up(w, p)
>>> w
''
>>> p
['noun']
Notice that w was not changed by the function. When we call set_up(w, p), the value of w (an empty string) is assigned to a new variable word. Inside the function, the value of word is modified; however, that change does not propagate to w. This parameter passing is identical to the following sequence of assignments:

>>> w = ''
>>> word = w
>>> word = 'lolcat'
>>> w
''
The list p behaves differently, because both p and the function's local name properties reference the same object; appending to that object is visible through p, whereas reassigning properties to 5 merely rebinds the local name. This is identical to the following sequence of assignments:

>>> p = []
>>> properties = p
>>> properties.append('noun')
>>> properties = 5
>>> p
['noun']

Function definitions create a new local scope. When you refer to an existing name from within the body of a function, the Python interpreter first tries to resolve the name with respect to the names that are local to the function. If nothing is found, the interpreter checks whether it is a global name within the module. Finally, if that does not succeed, the interpreter checks whether the name is a Python built-in. This is the so-called LGB rule of name resolution: local, then global, then built-in.
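The LGB rule can be observed directly (a small sketch, not from the book):

```python
x = 'global'

def reader():
    # no local 'x' here, so Python falls back to the global; len() resolves
    # to the built-in scope
    return x, len(x)

def shadower():
    x = 'local'    # assignment creates a local name that shadows the global
    return x

print(reader(), shadower(), x)
```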
Python does not force us to declare the type of a variable, and this permits us to define functions that are flexible about the type of their arguments. For example, a tagger might expect a sequence of words, but it wouldn't care whether this sequence is expressed as a list, a tuple, or an iterator (a new sequence type that we'll discuss later).
However, often we want to write programs for later use by
others, and want to program in a defensive style, providing useful
warnings when functions have not been invoked correctly. The author of
the following
tag() function assumed that its argument would always be a
string.
>>> def tag(word):
...     if word in ['a', 'the', 'all']:
...         return 'det'
...     else:
...         return 'noun'
...
>>> tag('the')
'det'
>>> tag('knight')
'noun'
>>> tag(["'Tis", 'but', 'a', 'scratch'])
'noun'

In the last case, the function quietly returns 'noun' for a list argument, which is meaningless — we would prefer bad input to produce an error. The author of the tag() function could take some extra steps to check that the word parameter is a string, for instance using an assert statement together with Python's basestring type, which generalizes over both unicode and regular strings:
>>> def tag(word):
...     assert isinstance(word, basestring), "argument to tag() must be a string"
...     if word in ['a', 'the', 'all']:
...         return 'det'
...     else:
...         return 'noun'

If the assert statement fails, it produces an error that cannot be ignored, since it halts program execution. This style of checking arguments is known as defensive programming.
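In Python 3, basestring is gone, and an explicit exception is often preferred to assert (which can be disabled with the -O flag). A present-day equivalent might look like this (a sketch, not from the book):

```python
def tag(word):
    if not isinstance(word, str):
        raise TypeError('tag() expects a string, got %s' % type(word).__name__)
    return 'det' if word in ('a', 'the', 'all') else 'noun'

print(tag('the'))
try:
    tag(["'Tis", 'but', 'a', 'scratch'])
except TypeError as e:
    print('rejected:', e)  # bad input is reported instead of returning nonsense
```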
Well-structured programs usually make extensive use of functions. A sequence of processing steps can often be decomposed into a simple pattern of loading, analyzing, and presenting data, as in the following:

>>> data = load_corpus()
>>> results = analyze(data)
>>> present(results)
Appropriate use of functions makes programs more readable and maintainable. Additionally, it becomes possible to reimplement a function—replacing the function’s body with more efficient code—without having to be concerned with the rest of the program.
Consider the
freq_words
function in Example 4-2. It updates the
contents of a frequency distribution that is passed in as a parameter,
and it also prints a list of the n most frequent
words.
def freq_words(url, freqdist, n):
    text = nltk.clean_url(url)
    for word in nltk.word_tokenize(text):
        freqdist.inc(word.lower())
    print freqdist.keys()[:n]
>>> constitution = "http://www.archives.gov/national-archives-experience" \
...                "/charters/constitution_transcript.html"
>>> fd = nltk.FreqDist()
>>> freq_words(constitution, fd, 20)
['the', 'of', 'charters', 'bill', 'constitution', 'rights', ',',
 'declaration', 'impact', 'freedom', '-', 'making', 'independence']

This function has a couple of problems: it modifies the contents of its second parameter, and it prints a selection of the results it has computed. The function would be easier to understand and to reuse elsewhere if we initialize the FreqDist object inside the function and move the display of results to the calling program. In Example 4-3 we refactor this function, and simplify its interface by providing a single url parameter.
def freq_words(url):
    freqdist = nltk.FreqDist()
    text = nltk.clean_url(url)
    for word in nltk.word_tokenize(text):
        freqdist.inc(word.lower())
    return freqdist
>>> fd = freq_words(constitution)
>>> print fd.keys()[:20]
['the', 'of', 'charters', 'bill', 'constitution', 'rights', ',',
 'declaration', 'impact', 'freedom', '-', 'making', 'independence']
Note that we have now simplified the work of
freq_words to the point that we can do its
work with three lines of code:
>>> words = nltk.word_tokenize(nltk.clean_url(constitution))
>>> fd = nltk.FreqDist(word.lower() for word in words)
>>> fd.keys()[:20]
['the', 'of', 'charters', 'bill', 'constitution', 'rights', ',',
 'declaration', 'impact', 'freedom', '-', 'making', 'independence']

Since the calling program relies only on the function's interface, we could also reimplement the function using a different method without changing this statement.
For the simplest functions, a one-line docstring is usually adequate (see Example 4-1). If a function is non-trivial, the docstring should summarize its behavior and document its parameters. NLTK uses the "epytext" markup language for this purpose. This format can be automatically converted into richly structured API documentation, and includes special handling of certain "fields," such as @param, which allow the inputs and outputs of functions to be clearly documented. Example 4-4 illustrates a complete docstring.
from itertools import izip

def accuracy(reference, test):
    """
    Calculate the fraction of test items that equal the corresponding
    reference items.

    Given a list of reference values and a corresponding list of test
    values, return the fraction of corresponding values that are equal.
    In particular, return the fraction of indexes
    {0<i<=len(test)} such that C{test[i] == reference[i]}.

    >>> accuracy(['ADJ', 'N', 'V', 'N'], ['N', 'N', 'V', 'ADJ'])
    0.5

    @param reference: An ordered list of reference values.
    @type reference: C{list}
    @param test: A list of values to compare against the corresponding
        reference values.
    @type test: C{list}
    @rtype: C{float}
    @raise ValueError: If C{reference} and C{length} do not have the
        same length.
    """
    if len(reference) != len(test):
        raise ValueError("Lists must have the same length.")
    num_correct = 0
    for x, y in izip(reference, test):
        if x == y:
            num_correct += 1
    return float(num_correct) / len(reference)
This section discusses more advanced features, which you may prefer to skip on the first time through this chapter.
So far the arguments we have passed into functions have been simple objects, such as strings, or structured objects, such as lists. Python also lets us pass a function as an argument to another function. Now we can abstract out the operation, and apply a different operation on the same data. As the following examples show, we can pass the built-in function len() or a user-defined function last_letter() as arguments to another function:
>>> sent = ['Take', 'care', 'of', 'the', 'sense', ',', 'and', 'the',
...         'sounds', 'will', 'take', 'care', 'of', 'themselves', '.']
>>> def extract_property(prop):
...     return [prop(word) for word in sent]
...
>>> extract_property(len)
[4, 4, 2, 3, 5, 1, 3, 3, 6, 4, 4, 4, 2, 10, 1]
>>> def last_letter(word):
...     return word[-1]
>>> extract_property(last_letter)
['e', 'e', 'f', 'e', 'e', ',', 'd', 'e', 's', 'l', 'e', 'e', 'f', 's', '.']
The objects
len and
last_letter can be passed around like lists
and dictionaries. Notice that parentheses are used after a function
name only if we are invoking the function; when we are simply treating
the function as an object, these are omitted.
Python provides us with one more way to define functions as
arguments to other functions, so-called lambda
expressions. Supposing there was no need to use the
last_letter() function in multiple
places, and thus no need to give it a name. Let’s suppose we can
equivalently write the following:
>>> extract_property(lambda w: w[-1]) ['e', 'e', 'f', 'e', 'e', ',', 'd', 'e', 's', 'l', 'e', 'e', 'f', 's', '.']
Our next example illustrates passing a function to the
sorted() function. When we call the latter
with a single argument (the list to be sorted), it uses the built-in
comparison function
cmp(). However,
we can supply our own sort function, e.g., to sort by decreasing
length.
>>> sorted(sent) [',', '.', 'Take', 'and', 'care', 'care', 'of', 'of', 'sense', 'sounds', 'take', 'the', 'the', 'themselves', 'will'] >>> sorted(sent, cmp) [',', '.', 'Take', 'and', 'care', 'care', 'of', 'of', 'sense', 'sounds', 'take', 'the', 'the', 'themselves', 'will'] >>> sorted(sent, lambda x, y: cmp(len(y), len(x))) ['themselves', 'sounds', 'sense', 'Take', 'care', 'will', 'take', 'care', 'the', 'and', 'the', 'of', 'of', ',', '.'] Example 4-5.
def search1(substring, words): result = [] for word in words: if substring in word: result.append(word) return result def search2(substring, words): for word in words: if substring in word: yield word print "search1:" for item in search1('zz', nltk.corpus.brown.words()): print item print "search2:" for item in search2('zz', nltk.corpus.brown.words()): print item (see the earlier
discussion of generator expressions).
Here’s a more sophisticated example of a generator which
produces all permutations of a list of words. In order to force the
permutations() function to generate
all its output, we wrap it with a call to
list()
.
>>> def permutations(seq): ... if len(seq) <= 1: ... yield seq ... else: ... for perm in permutations(seq[1:]): ... for i in range(len(perm)+1): ... yield perm[:i] + seq[0:1] + perm[i:] ... >>> list(permutations(['police', 'fish', 'buffalo']))
[['police', 'fish', 'buffalo'], ['fish', 'police', 'buffalo'], ['fish', 'buffalo', 'police'], ['police', 'buffalo', 'fish'], ['buffalo', 'police', 'fish'], ['buffalo', 'fish', 'police']][['police', 'fish', 'buffalo'], ['fish', 'police', 'buffalo'], ['fish', 'buffalo', 'police'], ['police', 'buffalo', 'fish'], ['buffalo', 'police', 'fish'], ['buffalo', 'fish', 'police']]
The
permutations function
uses a technique called recursion, discussed later in Algorithm Design. The ability to generate
permutations of a set of words is useful for creating data to test a
grammar (Chapter 8). retains only the items for which the
function returns
True.
>>> def is_content_word(word): ... return word.lower() not in ['a', 'of', 'the', 'and', 'will', ',', '.'] >>> sent = ['Take', 'care', 'of', 'the', 'sense', ',', 'and', 'the', ... 'sounds', 'will', 'take', 'care', 'of', 'themselves', '.'] >>> filter(is_content_word, sent) ['Take', 'care', 'sense', 'sounds', 'take', 'care', 'themselves'] >>> [w for w in sent if is_content_word(w)] ['Take', 'care', 'sense', 'sounds', 'take', 'care', 'themselves']
Another higher-order function is
map(), which applies a function to every
item in a sequence. It is a general version of the
extract_property() function we saw earlier in this section. Here is a
simple way to find the average length of a sentence in the news
section of the Brown Corpus, followed by an equivalent version with
list comprehension calculation:
>>> lengths = map(len, nltk.corpus.brown.sents(categories='news')) >>> sum(lengths) / len(lengths) 21.7508111616 >>> lengths = [len(w) for w in nltk.corpus.brown.sents(categories='news'))] >>> sum(lengths) / len(lengths) 21.7508111616
In the previous examples, we specified a user-defined function
is_content_word() and a built-in
function
len(). We can also provide
a lambda expression. Here’s a pair of equivalent examples that count
the number of vowels in each word.
>>> map(lambda w: len(filter(lambda c: c.lower() in "aeiou", w)), sent) [2, 2, 1, 1, 2, 0, 1, 1, 2, 1, 2, 2, 1, 3, 0] >>> [len([c for c in w if c.lower() in "aeiou"]) for w in sent] [2, 2, 1, 1, 2, 0, 1, 1, 2, 1, 2, 2, 1, 3, 0].
>>> def repeat(>> repeat(>> repeat(num=5, msg='Alice') 'AliceAliceAliceAliceAlice' Mapping Words to Properties Using Python Dictionaries.)
>>> def generic(*args, **kwargs): ... print args ... print kwargs ... >>> generic(1, "African swallow", monty="python") (1, 'African swallow') {'monty': 'python'}
When
*args appears as a
function parameter, it actually corresponds to all the unnamed
parameters of the function. As another illustration of this aspect of
Python syntax, consider the
zip()
function, which operates on a variable number of arguments. We’ll use
the variable name
*song to
demonstrate that there’s nothing special about the name
*args.
>>> song = [['four', 'calling', 'birds'], ... ['three', 'French', 'hens'], ... ['two', 'turtle', 'doves']] >>> zip(song[0], song[1], song[2]) [('four', 'three', 'two'), ('calling', 'French', 'turtle'), ('birds', 'hens', 'doves')] >>> zip(*song) [('four', 'three', 'two'), ('calling', 'French', 'turtle'), ('birds', 'hens', 'doves')]
It should be clear from this example that typing
*song is just a convenient shorthand, and
equivalent to typing out
song[0], song[1],
song[2].
Here’s another example of the use of keyword arguments in a function definition, along with three equivalent ways to call the function:
>>> def freq_words(file, min=1, num=10): ... text = open(file).read() ... tokens = nltk.word_tokenize(text) ... freqdist = nltk.FreqDist(t for t in tokens if len(t) >= min) ... return freqdist.keys()[:num] >>> fw = freq_words('ch01.rst', 4, 10) >>> fw = freq_words('ch01.rst', min=4, num=10) >>> fw = freq_words('ch01.rst', num=10, min=4):
>>> def freq_words(file, min=1, num=10, verbose=False): ... freqdist = FreqDist() ... if verbose: print "Opening", file ... text = open(file).read() ... if verbose: print "Read in %d characters" % len(file) ... for word in nltk.word_tokenize(text): ... if len(word) >= min: ... freqdist.inc(word) ... if verbose and freqdist.N() % 100 == 0: print "." ... if verbose: print ... return freqdist.keys()[:num]
Take care not to use a mutable object as the default value of a parameter. A series of calls to the function will use the same object, sometimes with bizarre results, as we will see in the discussion of debugging later. reuse:
>>> nltk.metrics.distance.__file__ '/usr/lib/python2.5/site-packages/nltk/metrics/distance.pyc'-2009 NLTK Project # Author: Edward Loper <edloper@gradient.cis.upenn.edu> # Steven Bird <sb@csse.unimelb.edu.au> # blocks of object-oriented programming,
which falls outside the scope of this book. (Most NLTK modules also
include a
demo() function, which
can be used to see examples of the module in use.) Figure 4-2. misplaced runtime:
>>> def find_words(text, wordlength, result=[]): ... for word in text: ... if len(word) == wordlength: ... result.append(word) ... return result >>> find_words(['omg', 'teh', 'lolcat', 'sitted', 'on', 'teh', 'mat'], 3)
['omg', 'teh', 'teh', 'mat'] >>> find_words(['omg', 'teh', 'lolcat', 'sitted', 'on', 'teh', 'mat'], 2, ['ur'])['omg', 'teh', 'teh', 'mat'] >>> find_words(['omg', 'teh', 'lolcat', 'sitted', 'on', 'teh', 'mat'], 2, ['ur'])
['ur', 'on'] >>> find_words(['omg', 'teh', 'lolcat', 'sitted', 'on', 'teh', 'mat'], 3)['ur', 'on'] >>> find_words(['omg', 'teh', 'lolcat', 'sitted', 'on', 'teh', 'mat'], 3)
['omg', 'teh', 'teh', 'mat', 'omg', 'teh', 'teh', 'mat']['omg', 'teh', 'teh', 'mat', 'omg', 'teh', 'teh', 'mat']
If the program produced an “exception”—a runtime, and she:
>>> import pdb >>> import mymodule >>> pdb.run('mymodule.myfunction()')
, using the smallest possible input. The
second time, we’ll call it with the debugger
.
>>> import pdb >>> find_words(['cat'], 3)
['cat'] >>> pdb.run("find_words(['dog'], 3)")['cat'] >>> pdb.run("find_words(['dog'], 3)")
> <string>(1)<module>() (Pdb) step --Call-- > <stdin>(1)find_words() (Pdb) args text = ['dog'] wordlength = 3 result = ['cat']>
In order to avoid some of the pain of debugging, it helps to
adopt some defensive programming habits. Instead of writing a 20-line
program and Figure 4-3 for an illustration of this process.
Another example is the process of looking up a word in a dictionary. We open the book somewhere around the middle and compare our word with the current page. If it’s earlier in the dictionary, we repeat the process on the first half; if it’s whether any adjacent pairs of elements are identical.
The earlier:
>>> def factorial1(n): ... result = 1 ... for i in range(n): ... result *= (i+1) ... return result:
>>> def factorial2(n): ... if n == 1: ... return 1 ... else: ... return n * factorial2(n-1)():
>>> def size1(s): ... return 1 + sum(size1(child) for child in s.hyponyms())
.
>>> def size2(s): ... layer = [s]
... total = 0 ... while layer: ... total += len(layer)... total = 0 ... while layer: ... total += len(layer)
... layer = [h for c in layer for h in c.hyponyms()]... layer = [h for c in layer for h in c.hyponyms()]
... return total... return total
a new form of the import statement, allowing us to abbreviate the name
wordnet to
wn:
>>> from nltk.corpus import wordnet as wn >>> dog = wn.synset('dog.n.01') >>> size1(dog) 190 >>> size2(dog) 190. Example 4-6 demonstrates the recursive process of building
a trie, using Python dictionaries (Mapping Words to Properties Using Python Dictionaries).).
def insert(trie, key, value): if key: first, rest = key[0], key[1:] if first not in trie: trie[first] = {} insert(trie[first], rest, value) else: trie['value'] = value
>>> trie = nltk.defaultdict(dict) >>> insert(trie, 'chat', 'cat') >>> insert(trie, 'chien', 'dog') >>> insert(trie, 'chair', 'flesh') >>> insert(trie, 'chic', 'stylish') >>> trie = dict(trie) # for nicer printing >>> trie['c']['h']['a']['t']['value'] 'cat' >>> pprint.pprint(trie) {'c': {'h': {'a': {'t': {'value': 'cat'}}, {'i': {'r': {'value': 'flesh'}}}, 'i': {'e': {'n': {'value': 'dog'}}} {'c': {'value': 'stylish'}}}}} Example 4-7 implements a simple text retrieval system for the Movie Reviews Corpus. By indexing the document collection, it provides much faster lookup.
def raw(file): contents = open(file).read() contents = re.sub(r'<.*?>', ' ', contents) contents = re.sub('\s+', ' ', contents) return contents def snippet(doc, term): # buggy text = ' '*30 + raw(doc) + ' '*30 pos = text.index(term) return text[pos-30:pos+30] print "Building Index..." files = nltk.corpus.movie_reviews.abspaths() idx = nltk.Index((w, f) for f in files for w in raw(f).split()) query = '' while query != "quit": query = raw_input("query> ") if query in idx: for doc in idx[query]: print snippet(doc, query) else: print "Not found"
A more subtle example of a space-time trade-off Example 4-8 for an example of how to do this for a tagged corpus.
def preprocess(tagged_corpus): words = set() tags = set() for sent in tagged_corpus: for word, tag in sent: words.add(word) tags.add(tag) wm = dict((w,i) for (i,w) in enumerate(words)) tm = dict((t,i) for (i,t) in enumerate(tags)) return [[(wm[w], tm[t]) for (w,t) in sent] for sent in tagged_corpus]
Another example of a space-time trade-off
that is executed multiple times, and setup code that is executed once
at the beginning. We will simulate a vocabulary of 100,000 items using
a list
or set
of integers. The test statement will
generate a random item that has a 50% chance of being in the
vocabulary
.
>>> from timeit import Timer >>> vocab_size = 100000 >>> setup_list = "import random; vocab = range(%d)" % vocab_size
>>> setup_set = "import random; vocab = set(range(%d))" % vocab_size>>> setup_set = "import random; vocab = set(range(%d))" % vocab_size
>>> statement = "random.randint(0, %d) in vocab" % vocab_size * 2>>> statement = "random.randint(0, %d) in vocab" % vocab_size * 2
>>> print Timer(statement, setup_list).timeit(1000) 2.78092288971 >>> print Timer(statement, setup_set).timeit(1000) 0.0037260055542>>> print Timer(statement, setup_list).timeit(1000) 2.78092288971 >>> print Timer(statement, setup_set).timeit(1000) 0.0037260055542
Performing 1,000 list membership tests takes a total of 2.8 seconds, whereasproblems. Instead of computing solutions to these subproblems Example 4-9.
V4 = LL, LSS i.e. L prefixed to each item of V2 = {L, SS} SSL, SLS, SSSS i.e. S prefixed to each item of V3 = {SL, LS, SSS}
With this observation, we can write a little recursive function
called
virahanka1() to compute
these meters, shown in Example 4-10. Notice that,
in order to compute V4 we
first compute V3 and
V2. But to compute
V3, we need to first
compute V2 and
V1. This call structure is depicted in Example 4-11.
def virahanka1(n): if n == 0: return [""] elif n == 1: return ["S"] else: s = ["S" + prosody for prosody in virahanka1(n-1)] l = ["L" + prosody for prosody in virahanka1(n-2)] return s + l def virahanka2(n): lookup = [[""], ["S"]] for i in range(n-1): s = ["S" + prosody for prosody in lookup[i+1]] l = ["L" + prosody for prosody in lookup[i]] lookup.append(s + l) return lookup[n]1(4) ['SSSS', 'SSL', 'SLS', 'LSS', 'LL'] >>> virahanka2(4) ['SSSS', 'SSL', 'SLS', 'LSS', 'LL'] >>> virahanka3(4) ['SSSS', 'SSL', 'SLS', 'LSS', 'LL'] >>> virahanka4(4) ['SSSS', 'SSL', 'SLS', 'LSS', 'LL']problem Example 4-10. Parsing with Context-Free Grammar.]]) at.,
Python’s assignment and parameter passing use object
references; e.g., if
a is a list
and we assign
b = a, then any
operation on
a will modify
b, and vice versa.
The
is operation tests
whether two objects are identical internal objects, whereas
== tests whether two objects are
equivalent. This distinction parallels the type-token
distinction.
Strings, lists, and tuples are different kinds of sequence
object, supporting common operations such as indexing, slicing,
len(),
sorted(), and membership testing using
in.
We can write text to a file by opening the file for writing
ofile = open('output.txt', 'w'
then adding content to the file
ofile.write("Monty Python"), and finally
closing the file
ofile.close().
A declarative programming style usually produces more compact,
readable code; manually incremented loop variables are usually
unnecessary. When a sequence must be enumerated, use
enumerate().
Functions are an essential programming abstraction: key concepts to understand are parameter passing, variable scope, and docstrings.
A function serves as a namespace: names defined inside a function are not visible outside that function, unless those names are declared to be global.
Modules permit logically related material to be localized in a file. A module serves as a namespace: names defined in a module—such as variables and functions—are not visible to other modules, unless those names are imported.
Dynamic programming is an algorithm design technique used widely in NLP that stores the results of previous computations in order to avoid unnecessary recomputation. learned), and (Knuth, 2006). Useful guidance on the practice of software development is provided in (Hunt & Thomas, 2000) and (McConnell, 2004).
○ Find out more about sequence objects using Python’s help
facility. In the interpreter, type
help(str),
help(list), and
help(tuple). This will give you a full
list of the functions supported by each type. Some functions have
special names flanked with underscores; as the help documentation
shows, each such function corresponds to something more familiar.
For example
x.__getitem__(y) is
just a long-winded way of saying
x[y].
○ Identify three operations that can be performed on both tuples and lists. Identify three list operations that cannot be performed on tuples. Name a context where using a list instead of a tuple generates a Python error.
○ Find out how to create a tuple consisting of a single item. There are at least two ways to do this.
○ Create a list
words = ['is', 'NLP',
'fun', '?']. Use a series of assignment statements (e.g.,
words[1] = words[2]) and a
temporary variable
tmp to
transform this list into the list
['NLP',
'is', 'fun', '!']. Now do the same transformation using
tuple assignment.
○ Read about the built-in comparison function
cmp, by typing
help(cmp). How does it differ in behavior
from the comparison operators?
○ Does the method for creating a sliding window of n-grams
behave correctly for the two limiting cases: n
= 1 and n =
len(sent)?
○ We pointed out that when empty strings and empty lists occur
in the condition part of an
if
clause, they evaluate to
False.
In this case, they are said to be occurring in a Boolean context.
Experiment with different kinds of non-Boolean expressions in
Boolean contexts, and see whether they evaluate as
True or
False.
○ Use the inequality operators to compare strings, e.g.,
'Monty' < 'Python'. What
happens when you do
'Z' < 'a'?
Try pairs of strings that have a common prefix, e.g.,
'Monty' < 'Montague'. Read up on
“lexicographical sort” in order to understand what is going on here.
Try comparing structured objects, e.g.,
('Monty', 1) < ('Monty', 2). Does this
behave as expected?
○ Write code that removes whitespace at the beginning and end of a string, and normalizes whitespace between words to be a single-space character.
Do this task using
split() and
join().
Do this task using regular expression substitutions.
○ Write a program to sort words by length. Define a helper
function
cmp_len which uses the
cmp comparison function on word
lengths.
Create a list of words and store it in a variable
sent1. Now assign
sent2 = sent1. Modify one of the items in
sent1 and verify that
sent2 has changed.
Now try the same exercise, but instead assign
sent2 = sent1[:]. Modify
sent1 again and see what happens to
sent2. Explain.
Now define
text1 to be
a list of lists of strings (e.g., to represent a text consisting
of multiple sentences). Now assign
text2 = text1[:], assign a new value
to one of the words, e.g.,
text1[1][1]
= 'Monty'. Check what this did to
text2. Explain.
Load Python’s
deepcopy() function (i.e.,
from copy import deepcopy), consult
its documentation, and test that it makes a fresh copy of any
object..
Write code to initialize a two-dimensional array of sets
called
word_vowels and process a
list of words, adding each word to
word_vowels[l][v] where
l is the length of the word and
v is the number of vowels it
contains.
Write a function
novel10(text) that prints any word that
appeared in the last 10% of a text that had not been encountered
earlier.
Write a program that takes a sentence expressed as a single
string, splits it, and counts up the words. Get it to print out each
word and the word’s frequency, one per line, in alphabetical
order.:
>>> letter_vals = {'a':1, 'b':2, 'c':3, 'd':4, 'e':5, 'f':80, 'g':3, 'h':8, ... 'i':10, 'j':10, 'k':20, 'l':30, 'm':40, 'n':50, 'o':70, 'p':80, 'q':100, ... 'r':200, 's':300, 't':400, 'u':6, 'v':6, 'w':800, 'x':60, 'y':10, 'z':7}.
Write a function
shorten(text,
n) to process a text, omitting the n
most frequently occurring words of the text. How readable is
it?
Write code to print out an index for a lexicon, allowing
someone to look up words according to their meanings (or their
pronunciations; whatever properties are contained in the lexical
entries).
path_distance() from
right_whale.n.01..
Write a function that takes a text and a vocabulary as its
arguments and returns the set of words that appear in the text but
not in the vocabulary. Both arguments can be represented as lists of
strings. Can you do this in a single line, using
set.difference()?')).
Read up on “keyword linkage” (Chapter 5 of (Scott &
Tribble, 2006)). Extract keywords from NLTK’s Shakespeare Corpus and
using the NetworkX package, plot keyword linkage networks.
Read about string edit distance and the Levenshtein
Algorithm. Try the implementation provided in
nltk.edit_dist(). In what way is this
using dynamic programming? Does it use the bottom-up or top-down
approach? (See also.)
The Catalan numbers arise in many applications of
combinatorial mathematics, including the counting of parse trees
(Grammar Development). The series can be
defined as follows: C0
= 1, and
Cn+1
=
Σ0..n
(CiCn-i).
Write a recursive function to compute nth Catalan number Cn.
Now write another function that does this computation using dynamic programming.
Use the
timeit module
to compare the performance of these functions as
n increases.
● Reproduce some of the results of (Zhao & Zobel, 2007) concerning authorship identification.
● Study gender-specific lexical choice, and see if you can reproduce some of the results of.
● Write a recursive function that pretty prints a trie in alphabetically sorted order, for example:
chair: 'flesh' ---t: 'cat' --ic: 'stylish' ---en: 'dog'
● With the help of the trie data structure, write a recursive function that processes text, locating the uniqueness point in each word, and discarding the remainder of each word. How much compression does this give? How readable is the resulting text?
●.
●.
● Read the following article on semantic orientation of adjectives. Use the NetworkX package to visualize a network of adjectives with edges to indicate same versus different semantic orientation (see).
● Design an algorithm to find the “statistically improbable phrases” of a document collection (see).
● Write a program to implement a brute-force algorithm for discovering word squares, a kind of n × n: crossword in which the entry in the nth row is the same as the entry in the nth column. For discussion, see.
No credit card required | https://www.oreilly.com/library/view/natural-language-processing/9780596803346/ch04.html | CC-MAIN-2019-30 | refinedweb | 6,395 | 57.16 |
Revision history for Class-Sniff 0.10 2014-06-07 - Tidied up the documentation: formatting tweaks, typos, etc. - Fixed RT#72158 (see 0.09_01 below) - Fixed RT#53423 (see 0.09_01 below) - Min perl version set to 5.006 0.09_01 2014-06-04 - 0.09 2011-09-11 - Allow multiple paths to @INC (Bruno Vecchi) - Searching for classes in more than one directory (Bruno Vecchi) - Provide --output argument for csniff utility (Bruno Vecchi) 0.08_05 2009-05-23 - Perl 5.010000 and greater now make circular inheritance fatal at compile time, so let's skip that check for these Perls. 0.08_04 2009-05-21 - Remove test dependency on Sub::Information. Oops :) 0.08_03 2009-05-20 - Add -I switch for csniff utility. - Removed dependency on Sub::Information. That has a dependency on Data::Dump::Streamer and that module fails its tests for non-US locales. 0.08_02 2009-03-19 - Add C<csniff> command-line utility. - Add 'clean' option to constructor to avoid tracking pseudo-packages. - Added C<graph_from_namespace> as everyone seems to want this. 0.08_01 unreleased - Added experimental code to detect "fake" packages. Ideas offered by Graham Barr, but abused by me. They're not his fault! - OUCH! Added the code smell and regression tests to the MANIFEST. Would no wonder all tests are passing on the CPAN :) - Clarified that "long methods" may not really be a code smell at all. Doc changes don't really need to be here, but this is important enough to mention it. - new_from_namespace now can accept a regex, too. 0.08 2009-02-15 - Added 'new_from_namespace' method. 0.07 2009-02-15 - combine_graphs method added. Now it's trivial to see inheritance hierarchies. - Allow an instance of an object to be passed to the constructor, not just a class name. - Removed the 'tree' representation. Code is much easier to read as a result. 0.06 2009-02-03 - Experimental 'method length' support. - Circular paths are now a fatal error. 0.05 2009-02-02 - Added experimental support for tracking duplicate methods. 
- Started documentation reorganization. 0.04 2009-02-02 - Added 'exported' to detect exported 'methods'. - Added "report" method to create a simple, human-readable report. - Added "build_path" fix from Aristotle. 0.03 2009-02-02 - Added 'multiple_inheritance' method. - Added support for including the "UNIVERSAL" class. 0.02 2009-02-01 - Added documentation. - Made the 'unreachable' return more sane. 0.01 2009-02-01 - First version, released on an unsuspecting world. | https://metacpan.org/changes/distribution/Class-Sniff | CC-MAIN-2016-22 | refinedweb | 410 | 62.95 |
Recently Browsing 0 members
No registered users viewing this page.
Similar Content
- By Jake_s
Hi All,
I am a begginer in auto it. I am trying to build simple gui that will show me if for example notepad.exe process is currently running on the system. I have built something like this, but when I execute it shows me message boxes, but I want the results to show in gui. If you can help me start with this.
Thanks
#include <MsgBoxConstants.au3>
#include <GUIConstantsEx.au3>
ActiveProcess()
Func ActiveProcess()
GUICreate("Act")
ProcessExists("wuauclt.exe")
If ProcessExists("wuauclt.exe") Then
MsgBox($MB_SYSTEMMODAL, "", "Windows Upates are running")
Else
MsgBox($MB_SYSTEMMODAL, "", "Windows Updates are not running")
EndIf
;Notepad
ProcessExists("notepad.exe")
If ProcessExists("notepad.exe") Then
MsgBox($MB_SYSTEMMODAL, "", "Notepad is running")
Else
MsgBox($MB_SYSTEMMODAL, "", "Notepad is not running")
EndIf
- By therks
Is there any reason that ProcessExists would start returning false on a process that still .. exists?
I'm having an issue that I'm so far unable to reproduce reliably, so this is more a general question/advice thread. I have a rather elaborate script running and interacting with a server application. Because the server can crash, one of the purposes of my script is to relaunch the server if it stops.
I'm accomplishing this by storing the PID whenever I Run() the server, and I have an if statement with ProcessExists() in a loop to relaunch. This is a snippet:
While 1 If Not ProcessExists($I_PID) Then _LogWrite('Process lost: ' & $I_PID) If TimerDiff($iRelaunchTimer) < 5000 Then $iRelaunchCount += 1 Else $iRelaunchCount = 1 EndIf If $iRelaunchCount <= 5 Then _LogWrite('Relaunching... (attempt ' & $iRelaunchCount & ')') _Launch() Else Local $sRelaunchExceeded = 'Relaunch looped ' & $iRelaunchCount & ' times in ' & Round(TimerDiff($iRelaunchTimer)/1000) & ' seconds. Check that server is not already running. Exiting.' _LogWrite($sRelaunchExceeded) MsgBox(0x10, $APP_NAME, $sRelaunchExceeded) ExitLoop EndIf $iRelaunchTimer = TimerInit() Else ; Do a bunch of other stuff EndIf WEnd Func _Launch() Global $I_PID = Run($SERVER_CMD, $SERVER_DIR, @SW_HIDE, $STDERR_MERGED) If @error Then MsgBox(0x10, $APP_NAME, 'Error running command:' & @LF & $SERVER_CMD & @LF & @LF & 'In directory:' & @LF & $SERVER_DIR) ; OK: 1 Exit 600 EndIf _LogWrite('Server launched (PID:' & $I_PID & ').') IniWrite($INI_FILE, 'Config', 'PID', $I_PID) EndFunc That's not really runnable, but you get the general idea. As I suggested above, the issue I'm experiencing is that sometimes ProcessExists returns false even though the process does still exist (getting the PID from the log, and checking task manager I can see it's still running with the same PID), and the server won't relaunch if it's already running. And the major problem I'm having with diagnosing this is that it happens completely intermittently. It could go for days just fine, or only hours (it's never quick though of course). The server runs on our media computer all the time and the computer and server sometimes go for a few days without being checked on, but ideally we'd like it running all the time.
Anyway, I'm stumped, so any advice on offer will be gratefully accepted.
- By skyhigh
I wrote a script based on a loop. I want my script to check at the start of every cycle if one or more processes are still running and responding, then react if they are no more. I can do the former using ProcessExist, but how about the latter?
Does exist a function that verifies if a process is still responding?
Thanks in advance
- By u01jmg3
-
Recommended Posts
You need to be a member in order to leave a comment
Sign up for a new account in our community. It's easy!Register a new account
Already have an account? Sign in here.Sign In Now | https://www.autoitscript.com/forum/topic/173944-help-whit-processexists/?tab=comments | CC-MAIN-2021-49 | refinedweb | 606 | 55.24 |
This is a multi-part series – If you just hit this page, please check out the prior posts first!
Alright – We’re finally at the last step to this project. We already have Django running with a somewhat decent-looking web dashboard. Now the only thing that remains is polling our SRX firewalls for VPN connections, then populating this information into the database.
For this to happen, I built a custom management command for Django – the cool thing here is that we can easily use this within a cron job to automatically schedule updates at regular intervals. So within our application (the vpn folder), we’re going to create a management folder then a commands folder within that. In here you can name your script whatever you want, so I went with cronpoller.py.
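One detail worth making explicit: Django only discovers custom commands when the new folders are Python packages, so both `management` and `commands` need an (empty) `__init__.py` next to the command module. Here is a sketch of the setup, using the same paths and filename as above:

```shell
# Run from the project root (the folder containing manage.py).
# Both new directories must be packages, or 'manage.py cronpoller'
# will report "Unknown command".
mkdir -p vpn/management/commands
touch vpn/management/__init__.py
touch vpn/management/commands/__init__.py
touch vpn/management/commands/cronpoller.py
```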
The script that I wrote is extremely simplistic and could probably use quite a number of improvements – which I will slowly make over time. But for now, it is what it is – and it does exactly what I need it to.
First thing we need to do is import a few things we will need:
from django.core.management.base import BaseCommand, CommandError
from vpn.models import Firewall, Datacenter
from jnpr.junos import Device
from lxml import etree
We have some Django modules for treating this as a management command – and we’re also importing our existing models into this script. This part is really cool to me, because it means that this cron script can just reference the existing objects that we created to get their attributes. Next, we import the JunOS stuff from their pyEZ packages (don’t forget to install pyEZ!). The last one is going to be used to parse the responses from our SRX into something we can use.
Alright – so in order to build a management command, we have to create a class called Command, then define our functions within that. Within the Command class, Django will look for a function called handle to execute. In my final script, I have that function plus one called getVPNStatus. Let’s start with putting together getVPNStatus:
# This function will poll the device for status
def getVPNStatus(self, fw, dc):
    connectedlist = []

    # So we generate a JunOS pyEZ device connection using the information about the
    # firewall that we gather from the database object
    dev = Device(fw.firewall_manageip, user=fw.firewall_user, password=fw.firewall_pass)

    # Try to open a connection out to the target device
    try:
        dev.open()
    except Exception:
        # If for some reason this doesn't work, just return UNREACHABLE - which
        # we'll assume means the device is down
        return "UNREACHABLE"

    # Here is where we poll the SRX for a list of all IPSec Security Associations.
    # The equivalent of the 'show security ipsec sa' command
    response = etree.tostring(dev.rpc.get_security_associations_information())

    # The SRX returns a response in XML, which we'll need to dig through
    # Credit to the guys over at Packet Pushers for a great post explaining how to
    # parse these responses
    docroot = etree.fromstring(response)
    for child in docroot.iter():
        # For each IPSec SA returned, we need to find the remote gateway IP, which
        # we use to tie the connection back to the connected datacenter
        if child.tag == "sa-remote-gateway":
            connectedlist.append(child.text)

    # Close the NETCONF session before returning
    dev.close()

    # Once we've built our list, send it back!
    return connectedlist
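To see the tag-matching part in isolation, here is a standalone sketch using the stdlib ElementTree (whose `iter()` walks the tree the same way lxml's does). The sample reply is made up and trimmed down; the only tag name taken from a real SRX is `sa-remote-gateway`, so treat the surrounding structure as illustrative:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down stand-in for the reply to
# get_security_associations_information() - real replies nest more deeply
SAMPLE_REPLY = """
<ipsec-security-associations-information>
  <ipsec-security-associations-block>
    <sa-remote-gateway>203.0.113.10</sa-remote-gateway>
  </ipsec-security-associations-block>
  <ipsec-security-associations-block>
    <sa-remote-gateway>198.51.100.25</sa-remote-gateway>
  </ipsec-security-associations-block>
</ipsec-security-associations-information>
"""

def remote_gateways(xml_text):
    # Same pattern as the management command: walk every element and keep
    # the text of anything tagged sa-remote-gateway
    root = ET.fromstring(xml_text)
    return [child.text for child in root.iter() if child.tag == "sa-remote-gateway"]

print(remote_gateways(SAMPLE_REPLY))  # → ['203.0.113.10', '198.51.100.25']
```

This also makes it easy to test the parsing logic without a live firewall on hand.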
That is already the major part of our script. It will connect out to each device, grab a list of every connected IPSec tunnel, then return a list of connected gateways. Now we just need to write our handle function, which will do all of the remaining work.
def handle(self, *args, **options):
    # Let's quickly create two lists - based on our firewall and datacenter models
    # This actually polls the database for anything that's been created
    dclist = Datacenter.objects.order_by('datacenter_code')
    fwlist = Firewall.objects.order_by('firewall_name')

    # This will loop through each firewall, then call the getVPNStatus function
    for fw in Firewall.objects.order_by('firewall_name'):
        statuslist = self.getVPNStatus(fw, dclist)
        vpnstatus = {}

        # Once we have our response, we're going to check the returned list of connected
        # gateways against our list of datacenters from the database
        for dc in dclist:
            for remoteFW in fwlist:
                # If we find another datacenter in the connected gateway list, add
                # that datacenter to the vpnstatus list as "VPNUP", otherwise assume it's down
                if remoteFW.firewall_vpnip in statuslist:
                    vpnstatus[dc.datacenter_code] = "VPNUP"
                    break
                else:
                    vpnstatus[dc.datacenter_code] = "VPNDOWN"

        # After all that is done - we just save the new vpnstatus dict to the firewall_vpnstatus
        # field in the database
        fw.firewall_vpnstatus = vpnstatus
        fw.save(update_fields=["firewall_vpnstatus"])
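The up/down bookkeeping inside handle can also be pulled out into a pure function that is easy to unit-test away from the database. The names and the (code, vpn_ip) tuple shape below are mine, standing in for the Django objects, and it assumes each datacenter maps to a single VPN endpoint. It also handles the "UNREACHABLE" string return explicitly, which the loop above would otherwise substring-match IPs against:

```python
def build_vpn_status(datacenters, connected_gateways):
    """Map each datacenter code to VPNUP/VPNDOWN based on the polled SA list.

    datacenters: list of (code, vpn_ip) tuples standing in for the model objects.
    connected_gateways: the list returned by getVPNStatus, or "UNREACHABLE".
    """
    # If the poll failed outright, consider every peer down; checking for the
    # sentinel first also avoids 'ip in "UNREACHABLE"' substring accidents
    if connected_gateways == "UNREACHABLE":
        return {code: "VPNDOWN" for code, _ in datacenters}
    return {
        code: ("VPNUP" if vpn_ip in connected_gateways else "VPNDOWN")
        for code, vpn_ip in datacenters
    }

dcs = [("DC1", "203.0.113.10"), ("DC2", "198.51.100.25"), ("DC3", "192.0.2.99")]
print(build_vpn_status(dcs, ["203.0.113.10", "198.51.100.25"]))
# → {'DC1': 'VPNUP', 'DC2': 'VPNUP', 'DC3': 'VPNDOWN'}
```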
I’ve said this before, but I think it’s worth noting again – This probably isn’t the best or most efficient/reliable way of doing this. I think that over time I would like to revisit this and refine it a bit – but this is what I’ve put together so far. In fact, I think it’s worth stating that this will probably break in a fantastically horrific manner. This isn’t a finished product, but pretty much just version 0.1 – the base functionality works, but it’s in desperate need of refinement.
Alright – so now we just have to add a cron job to our system to call this script and run it. Depending on how many firewalls you have and what the latency is, you might need a different interval than I am using – but I went ahead and set mine to run once every 10 minutes. I would like to get this down to 5 minutes or less, but I might need to figure out a way to multi-thread the cronpoller.py script. So here is what I did in /etc/cron.d/vpnstatuspoller:
*/10 * * * * root python /root/junos-dashboard/manage.py cronpoller
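On the multi-threading idea mentioned above: one low-risk approach would be a thread pool that polls each firewall concurrently instead of serially. This is only a sketch of the shape – the poll function and firewall names are stand-ins, not the real Django models:

```python
from concurrent.futures import ThreadPoolExecutor

def poll_firewall(name):
    # Stand-in for self.getVPNStatus(fw, dclist); real polling is network
    # I/O, which releases the GIL, so threads are a good fit here.
    return (name, ["203.0.113.10"])

firewalls = ["fw-nyc", "fw-lon", "fw-sin"]

# Poll all firewalls in parallel and collect the results into a dict.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(poll_firewall, firewalls))

print(results["fw-nyc"])  # ['203.0.113.10']
```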
After letting the script poll my firewalls – I ended up with a dashboard that looks like this:
Future improvements
Of course, I still have a few things I would like to add or improve on. I’ve been keeping a list of some ideas that I’ve had, which I’ll share here:
- Add a timestamp to each firewall that contains the last poll time (in case something gets missed or hasn’t updated yet)
- Possibly add ability to click a ‘down’ VPN cell to force clear SA
- Set up email alerts from the cronpoller to automatically notify me of a failure
- Ability to click on any VPN cell and get additional info – like VPN uptime, subnets routed across it, or maybe the IKE/IPSec negotiation parameters
- Ability to click on any firewall name and get additional info – like uptime, JunOS version, CPU/Memory
I finally got around to getting myself a GitHub account, so I might put the final code up there once I’m done. I also have a number of other JunOS scripts that are probably worth posting up there as well.
Well, I hope you enjoyed reading about this project – because I certainly enjoyed working on it. This was one of my first real projects using the JunOS pyEZ libraries, and I got to pick up and learn how to use Django as well. The experience of building this has given me ideas for other JunOS automation projects – so look out for some of those in the future! | https://0x2142.com/?p=399 | CC-MAIN-2019-30 | refinedweb | 1,221 | 59.84 |
#include <db_cxx.h>

int DbSite::close();
The DbSite::close() method deallocates the DbSite handle. The handle must not be accessed again after this method is called, regardless of the return value.
Use of this method does not in any way affect the configuration of the site to which the handle refers, or of the replication group in general.
All DbSite handles must be closed before the owning DbEnv handle is closed.
The DbSite::close() method either returns a non-zero error value or throws an exception that encapsulates a non-zero error value on failure, and returns 0 on success.
The DbSite::close() method may fail and throw a DbException exception, encapsulating one of the following non-zero errors, or return one of the following non-zero errors:
Replication and Related Methods | http://idlebox.net/2011/apidocs/db-5.2.28.zip/api_reference/CXX/dbsite_close.html | CC-MAIN-2013-48 | refinedweb | 132 | 50.67 |
Suppose that you have an interface in your project
namespace Configuration.Logger
{
    public interface ILogger
    {
        void Write();
    }
}
and a few implementations of it.
public class FileLogger : ILogger
{
    public void Write()
    {
        Console.WriteLine("Writing to file");
    }
}

public class DatabaseLogger : ILogger
{
    public void Write()
    {
        Console.WriteLine("Writing to the database");
    }
}
Depending on the situation, you sometimes want to use the first of them and sometimes the second. But how (and where) do you change it? The simplest possibility is to change it in the source code, but that requires recompiling the entire project. So what else can we do?
The .NET Framework ships with the System.Configuration namespace, which is responsible for working with configuration files (Machine.config, Web.config, App.config). Now, I think, the answer is simple – create and work with our own config file. Thanks to it, we can change the concrete implementation of the interface without recompiling the entire project.
So let’s create one.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="loggerMappingsConfiguration"
             type="..." />
  </configSections>
  <loggerMappingsConfiguration>
    <loggerMappings>
      <loggerMapping interfaceShortTypeName="ILogger"
                     concreteFullTypeName="..." />
    </loggerMappings>
  </loggerMappingsConfiguration>
</configuration>
This is a simple App.config with only one section. As I mentioned earlier, System.Configuration provides classes to work with it. What we have to do is inherit from them and create new ones for this example. We need to create three classes derived from ConfigurationSection, ConfigurationElementCollection, and ConfigurationElement.
The fourth class will be a helper class containing string constants for the configuration elements. Let's start with it.
internal static class LoggerMappingConstants
{
    public const string ConfigurationPropertyName = "loggerMappings";
    public const string ConfigurationElementName = "loggerMapping";
    public const string InterfaceShortTypeNameAttributeName = "interfaceShortTypeName";
    public const string ConcreteFullTypeNameAttributeName = "concreteFullTypeName";
    public const string LoggerMappingsConfigurationSectionName = "loggerMappingsConfiguration";
}
These values are used several times in the other classes, so creating a static class for them is a good idea. The class derived from ConfigurationElement is shown below.
public sealed class LoggerMappingElement : ConfigurationElement
{
    [ConfigurationProperty(LoggerMappingConstants.InterfaceShortTypeNameAttributeName,
        IsKey = true, IsRequired = true)]
    public string InterfaceShortTypeName
    {
        get
        {
            return (string)this[
                LoggerMappingConstants.InterfaceShortTypeNameAttributeName];
        }
        set
        {
            this[LoggerMappingConstants.InterfaceShortTypeNameAttributeName] = value;
        }
    }

    [ConfigurationProperty(LoggerMappingConstants.ConcreteFullTypeNameAttributeName,
        IsRequired = true)]
    public string ConcreteFullTypeName
    {
        get
        {
            return (string)this[LoggerMappingConstants.ConcreteFullTypeNameAttributeName];
        }
        set
        {
            this[LoggerMappingConstants.ConcreteFullTypeNameAttributeName] = value;
        }
    }
}
It contains two properties mapping to the corresponding attributes in the config file. The class derived from ConfigurationElementCollection:
public sealed class LoggerMappingCollection : ConfigurationElementCollection
{
    protected override ConfigurationElement CreateNewElement()
    {
        return new LoggerMappingElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((LoggerMappingElement)element).InterfaceShortTypeName;
    }

    public override ConfigurationElementCollectionType CollectionType
    {
        get
        {
            return ConfigurationElementCollectionType.BasicMap;
        }
    }

    protected override string ElementName
    {
        get
        {
            return LoggerMappingConstants.ConfigurationElementName;
        }
    }

    public new LoggerMappingElement this[string interfaceShortTypeName]
    {
        get
        {
            return (LoggerMappingElement)this.BaseGet(interfaceShortTypeName);
        }
    }
}
And the last class:
public class LoggerSettings : ConfigurationSection
{
    [ConfigurationProperty(LoggerMappingConstants.ConfigurationPropertyName,
        IsDefaultCollection = true)]
    public LoggerMappingCollection LoggerMappings
    {
        get
        {
            return (LoggerMappingCollection)base[LoggerMappingConstants.ConfigurationPropertyName];
        }
    }
}
To test these classes, we can build a factory that will create objects depending on the settings in the configuration file
public static class LoggerFactory
{
    public static ILogger CreateLogger()
    {
        ILogger logger;
        string interfaceShortName = typeof(ILogger).Name;

        LoggerSettings settings = (LoggerSettings)ConfigurationManager.GetSection(
            LoggerMappingConstants.LoggerMappingsConfigurationSectionName);

        logger = Activator.CreateInstance(
            Type.GetType(settings.LoggerMappings[interfaceShortName]
                .ConcreteFullTypeName)) as ILogger;

        return logger;
    }
}
And finally check the result
class Program
{
    static void Main(string[] args)
    {
        ILogger logger = new DatabaseLogger();
        ILogger loggerFromFactory = LoggerFactory.CreateLogger();

        Debug.Assert(loggerFromFactory.GetType() == logger.GetType());
    }
}
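The whole pattern can be summed up in a few lines of language-neutral Python (the class and key names here are invented for illustration): read an implementation name from a configuration source at runtime and instantiate it, so swapping implementations never requires touching the calling code.

```python
class FileLogger:
    def write(self):
        return "Writing to file"

class DatabaseLogger:
    def write(self):
        return "Writing to the database"

# In a real application this mapping would come from a config file,
# not an in-memory dict.
CONFIG = {"ILogger": "DatabaseLogger"}

def create_logger():
    # Look up the concrete type name in configuration and resolve it at runtime.
    type_name = CONFIG["ILogger"]
    registry = {"FileLogger": FileLogger, "DatabaseLogger": DatabaseLogger}
    return registry[type_name]()

print(create_logger().write())  # Writing to the database
```

Changing the CONFIG entry to "FileLogger" switches the implementation with no change to any caller.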
Source code is available here (click the blue button called "Pobierz plik").
Wrzuta.pl is a Polish site similar to YouTube. It allows you to watch movies and listen to music, but does not allow downloading them directly to disk. In this post I would like to show how to achieve that using Ext JS and ASP.NET MVC.
Create a new ASP.NET MVC project and add the Ext JS library to it. I have written here how to do that.
The first step is to create a form where the user can paste the URL.
This is a simple form with only one TextField, where the user can paste the link, and one button for submitting the form to the server. The field has custom validation (1):

This validation checks whether the inserted link matches the URL pattern for wrzuta.pl.
About custom validation I have written here.
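The original regex is not reproduced in this extract, but the idea can be tested in isolation. The pattern below is only a guess at the general shape of a wrzuta.pl link, not the one used in the project:

```python
import re

# Hypothetical pattern -- the real project may accept more URL shapes.
WRZUTA_URL = re.compile(r"^http://(www\.)?wrzuta\.pl/\S+$")

def looks_like_wrzuta_link(url):
    """Return True if the URL matches the assumed wrzuta.pl link shape."""
    return bool(WRZUTA_URL.match(url))

print(looks_like_wrzuta_link("http://www.wrzuta.pl/audio/abc123"))  # True
print(looks_like_wrzuta_link("http://youtube.com/watch?v=abc"))     # False
```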
The second step is to create a controller with a Download action, where the inserted link is converted to the absolute path of the file to download.

ASP.NET MVC provides a RedirectResult, but Ext needs a response in JSON format, so ContentResult is used and the redirect to the link is placed in the form handler:
Full source code is available here (click the blue button called "Pobierz plik").
This is a part of EXT JS Tutorial
In this article I will cover creating, submitting and validating forms with the Ext JS library.

Submitting the form to the server will be based on an ASP.NET MVC page, so create a new MVC project in Visual Studio and add the Ext JS library to it. How to do it I have written here. So let's create our first form.
How to create a simple form – first example.
var form = new Ext.form.FormPanel({ //(1)
renderTo: Ext.getBody(), //(2)
url: 'Home/SubmitForm',
frame: true,
title: 'Tell me sth about yourself',
width: 200,
items: [{ //(3)
xtype: 'textfield',
fieldLabel: 'Your name',
name: 'name'
}, {
xtype: 'numberfield',
fieldLabel: 'Age',
name: 'age'
}]
});
The listing above shows:

(1) FormPanel is used to group fields in a single panel. It corresponds to a <form> tag in XHTML. FormPanel takes one argument, which is a config object.

(2) In the config object you can specify the form properties, one per line as in this example, but this is not required. You can write them all on one line, remembering only to separate them with a comma (,). Placing them one after another may improve readability.

(3) All fields, like an HTML textbox, combobox and others, are defined in FormPanel as elements of the items collection – one record per field. In EXT JS you can create these fields in two ways. The first is presented above. The second looks like this:
var nameTextField = new Ext.form.TextField({
fieldLabel: 'Your name',
name: 'name'
});
var ageNumberField = new Ext.form.NumberField({
    fieldLabel: 'Age',
    name: 'age'
});
var form = new Ext.form.FormPanel({ //(1)
    items: [ //(3)
        nameTextField,
        ageNumberField
    ]
});
In both situations the result looks the same:
-radio

{
    xtype: 'radio',
    fieldLabel: 'Sex',
    name: 'sex',
    boxLabel: 'male'
}, {
    xtype: 'radio',
    hideLabel: false,
    name: 'sex',
    boxLabel: 'female'
}
It is important here to set the same name for both radios. Thanks to that, it is possible to check the selected value in the action method on the server.
-checkbox
{
    xtype: 'checkbox',
    name: 'siblings',
    fieldLabel: 'Siblings'
}
-combobox
Pairs of key and value are added to the combobox field. The key can be read in the server-side code after submitting the form; the value is what the user sees in the browser when selecting one of the available options.
EXT provides special store objects, where data can be kept. This feature will be covered in one of the next parts.
At this point, just look at the example below, which is used to work with the combobox field:
var answers = new Ext.data.SimpleStore({
    fields: ['id', 'answer'],
    data: [['1', 'yes'], ['2', 'no']]
});
and the combo:
{
    xtype: 'combo',
    store: answers,
    mode: 'local',
    fieldLabel: 'available answers',
    name: 'answer',
    anchor: '90%',
    displayField: 'answer',
    valueField: 'id'
}
As you can see, in the combo the SimpleStore object that holds the data is linked to the store config option. The displayField and valueField are assigned fields from the store being used.
-datefield

{
    xtype: 'datefield',
    name: 'date',
    fieldLabel: 'Select date',
    disabledDays: [0, 6]
}
-timefield
{
    xtype: 'timefield',
    name: 'time',
    fieldLabel: 'Time',
    anchor: '90%'
}
-textarea
{
    xtype: 'textarea',
    name: 'area',
    fieldLabel: 'Enter some text',
    multiline: true
}
Listeners
For each field you can find a lot of events in the documentation. It is very easy to listen to them in EXT JS. Look at this example of using the invalid event:
var nameTextField = new Ext.form.TextField({
    name: 'name',
    allowBlank: false,
    listeners: {
        invalid: function(field, msg)
        {
            Ext.Msg.alert('', msg);
        }
    }
});
In this code, when you leave the Name field blank, you will see the error message in an alert dialog instead of a balloon.
Submitting form to the server
In the previous paragraph I showed how to create a form to take data from the user. Now I would like to concentrate on how to send and use this data in server-side code. To be able to send the data to the server, the form must have a button. Similar to the items collection, FormPanel contains a collection of buttons. Look at this code:
buttons: [{
    text: 'Save',
    handler: function()
    {
        form.getForm().submit({
            success: function(a, b)
            {
                Ext.Msg.alert('Success', 'ok');
            },
            failure: function(a, b)
            {
                Ext.Msg.alert('Failure', b.result.error);
            }
        });
    }
}, {
    text: 'Reset',
    handler: function()
    {
        form.getForm().reset();
    }
}]
In the code above, two buttons are created: one to submit the form and one to reset it. Both have handler methods. In the handler method you can specify what will be done when the button is pressed. In the Save button the submit method is called and the result is checked.
Before we can submit the form, we must create an action method in the controller class:
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult SubmitForm(FormCollection collection)
{
    string response = String.Empty;
    // validation here
    try
    {
        string name = collection.GetValue("name").AttemptedValue;
        int age = Convert.ToInt32(Request.Form["age"]);
        response = "{success: true}";
    }
    catch (Exception)
    {
        response = "{success: false, error: \"an error occurred\"}";
    }
    return Content(response);
}
In this code, the submitted data should be validated in the action method if it hasn't been done in the client code. The next two paragraphs are devoted to client-side validation.
In the submit method, the developer can determine whether the result was successful or not. Both success and failure take two arguments: the first is the form that requested this action, the second is the response from the server.
Built-in validation
At this point, when you click the Save button, everything seems to work fine. However, in the fields for name and age you can insert almost whatever you want. True, in the numberfield you can insert only numbers, but these numbers can be negative too. A negative age isn't something we want.
As I mentioned earlier, you can validate these fields on the server, but a better way would seem to be to validate them on the client side. EXT JS provides the stuff to do it. Suppose that we would like the Name field to accept only alphabetic characters and to be required (it can't be empty). For validation we can use the built-in vtype types. We can distinguish the basics: url, email, alpha and alphanum. These names are so obvious that I don't need to describe them, I think. OK, it's time to show how to use them:
var nameTextField = new Ext.form.TextField({
    fieldLabel: 'Your name',
    name: 'name',
    vtype: 'alpha',
    allowBlank: false
});
As you can see, I added two new lines. Thanks to the first one, you can no longer enter anything that is not a letter of the alphabet. The second defines that the field can't be empty. When you try to submit the form without a value in the Name field, you will see an error.
The field is underlined and nothing else. You can customize it by adding a new line
Ext.QuickTips.init();
just under the Ext.onReady function. Using it, when you mouse over the field, you will see a balloon with the message showing what went wrong
To validate an Age field you can use config options, like maxValue, allowNegative, allowDecimals, allowBlank and minValue.
var ageNumberField = new Ext.form.NumberField({
name: 'age',
maxValue: 100,
allowNegative: false,
allowDecimals: false,
minValue: 10
});
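For intuition, the built-in alpha and alphanum checks used above behave roughly like the following regexes. This is an approximation for illustration, not Ext's exact source:

```python
import re

# Rough equivalents of Ext JS's built-in 'alpha' and 'alphanum' vtypes.
alpha = re.compile(r"^[a-zA-Z_]+$")
alphanum = re.compile(r"^[a-zA-Z0-9_]+$")

print(bool(alpha.match("John")))       # True
print(bool(alpha.match("John99")))     # False
print(bool(alphanum.match("John99")))  # True
```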
Custom validation
Using EXT JS you can create your own custom validation. The built-in vtypes are very limited. For example, in the field where the user must write his name, only ASCII characters are available – in other words, only characters that appear in the English alphabet. In this situation I'm not able to enter my name, which contains Polish-specific characters. Everyone with a locale other than English will have this problem.
To create a custom vtype we need to add it to the VTypes definition. Let's see an example. This code allows Polish letters in the TextField; additionally, the first letter of the surname and the first letter of the first name must be capitalized.
Ext.form.VTypes['ValidNameVal'] = //(1)
/^[A-ZŁŻ][A-ZŻŁŚŹa-ząłęóźżńćś\-]+ [A-ZŁŻ][A-ZŻŁŚŹa-ząłęóźżńćś\-]+$/;
Ext.form.VTypes['ValidNameMask'] = /[A-ZŻŁŚŹa-ząłęóźżńćś\- ]/; //(2)
Ext.form.VTypes['ValidNameText'] = 'Invalid name'; //(3)
Ext.form.VTypes['ValidName'] = function(arg) //(4)
{
return Ext.form.VTypes['ValidNameVal'].test(arg);
}
At first glance it may look horrible, especially because of the regular expression (regex) used, if you are not familiar with that syntax. Each vtype definition has:
(1) Value – here is the regex to match to the user input
(2) Mask – here are specified characters, which user can input in the field,
(3) Text – this is the message with the error description
(4) The last required element – the function for testing
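Since the interesting part is the regular expression, it helps to test it outside the browser. The same pattern behaves like this (checked here with Python's re module):

```python
import re

# Same pattern as the 'ValidNameVal' regex above: two words, each starting
# with a capital letter, allowing Polish letters and hyphens.
VALID_NAME = re.compile(
    r"^[A-ZŁŻ][A-ZŻŁŚŹa-ząłęóźżńćś\-]+ [A-ZŁŻ][A-ZŻŁŚŹa-ząłęóźżńćś\-]+$"
)

print(bool(VALID_NAME.match("Łukasz Kowalski")))  # True
print(bool(VALID_NAME.match("jan kowalski")))     # False (not capitalized)
```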
The example below shows how it looks in the web browser. To use this custom vtype, you must change the definition of the TextField from
var nameTextField = new Ext.form.TextField({
fieldLabel: 'Your name',
name: 'name',
vtype: 'alpha',
allowBlank: false
});
to
var nameTextField = new Ext.form.TextField({
    fieldLabel: 'Your name',
    name: 'name',
    vtype: 'ValidName',
    allowBlank: false
});
Result
As you can see, I no longer have a problem with Polish letters. If an error occurs (the first letter of my surname is not capitalized), validation works properly:
Summary
In this part I have covered how to work with forms using the EXT JS library and ASP.NET MVC. I've shown how to create, validate and submit them to the server, and, at the end, the possibility of creating your own custom validation. That's not all that is related to forms in this library. The next parts will explain how to create and use forms with layouts, how to work with data stores, and more.
The source code with above examples is available here (click the blue button called "Pobierz plik").
Key                Argument
ALT
BREAK              {BREAK}
CAPS LOCK          {CAPSLOCK}
DEL or DELETE      {DELETE} or {DEL}
DOWN ARROW         {DOWN}
END                {END}
ENTER              {ENTER} or ~
ESC                {ESC}
INS or INSERT      {INSERT} or {INS}
LEFT ARROW         {LEFT}
NUM LOCK           {NUMLOCK}
PAGE DOWN          {PGDN}
PAGE UP            {PGUP}
PRINT SCREEN       {PRTSC}
RIGHT ARROW        {RIGHT}
SCROLL LOCK        {SCROLLLOCK}
TAB                {TAB}
UP ARROW           {UP}
F1 through F16     {F1} through {F16}
Ext.onReady(function()
{
    var form = new Ext.FormPanel({
        renderTo: 'fi-form', //(5)
        fileUpload: true, //(1)
        width: 400,
        frame: true,
        title: 'File Upload',
        bodyStyle: 'padding: 10px 10px 0 10px;',
        labelWidth: 50,
        items: [{
            xtype: 'fileuploadfield',
            emptyText: 'Select an image',
            fieldLabel: 'Image',
            name: 'file', //(2)
            buttonText: 'Choose a file'
        }],
        buttons: [{
            text: 'Save',
            handler: function()
            {
                if (form.getForm().isValid())
                {
                    form.getForm().submit({
                        url: 'Home/Upload',
                        waitMsg: 'Uploading your photo...',
                        success: function(form, o) //(3)
                        {
                            Ext.Msg.show({
                                title: 'Result',
                                msg: o.result.result,
                                buttons: Ext.Msg.OK,
                                icon: Ext.Msg.INFO
                            });
                        },
                        failure: function(form, o) //(4)
                        {
                            Ext.Msg.show({
                                title: 'Result',
                                msg: o.result.error,
                                buttons: Ext.Msg.OK,
                                icon: Ext.Msg.ERROR
                            });
                        }
                    });
                }
            }
        }]
    });
});
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Upload()
{
    string[] supportedTypes = new string[]{
        "png", "gif", "tiff", "bmp", "jpg", "jpeg" //(1)
    };
    HttpPostedFileBase postedFile = Request.Files["file"];
    if (postedFile != null) //(2)
    {
        string x = Path.GetExtension(postedFile.FileName);
        if (supportedTypes.Contains(x.TrimStart('.'))) //(3)
        {
            // to do sth with the file
            return Content("{success:true, result:\"File uploaded correctly\"}"); //(4)
        }
        else //(5)
        {
            // unsupported file type
            return Content("{success:false, error:\"Unsupported file type\"}");
        }
    }
    return new JsonResult() //(6)
    {
        ContentType = "text/html",
        Data = new { success = false, error = "File uploaded error" }
    };
}
Just like in .NET applications, you might want to have global configuration settings in your Angular applications that you can access from any component or service class. There are many approaches you can take for retrieving these global settings; I’m going to use a service that can be injected into any class. I think the flexibility of using a service is an ideal method for providing application-wide settings to any class that needs them.
This article illustrates how to create an Angular service to read and modify configuration settings. I’m going to use Visual Studio Code and Visual Studio 2017, along with C# and Angular 4. You’re going to learn to create a class to hold configuration settings. These settings can be retrieved from a JSON file or from a Web API call. Once you’ve retrieved the settings, you can store them into local storage. Placing them into local storage allows your user to modify those settings.
Create a New Angular Project
If you’re new to Angular, you are going to need the following tools installed on your computer.
- Node.js
- TypeScript
- Angular
- Angular CLI
- An editor, such as Visual Studio or Visual Studio Code
In this article, I’m going to use Visual Studio Code and the Angular CLI to create the sample project. I’ll assume that you have the tools listed above installed on your computer. If you haven’t already installed them, please do so now, so that you can follow along with this article.
Open a command prompt on your computer and navigate to a folder where you typically create your development projects. For example, on my computer, I usually go to my D drive and go to a folder named \Samples. So, from the command prompt I enter the following commands.
D:
cd \Samples
Using the Angular CLI, I create a new project under the Samples folder using the following command.
ng new ConfigSample
Executing the ng new command creates a new folder named \ConfigSample. The new command creates the files needed to develop an Angular application. While you are still in the command prompt, navigate to the new folder using the following:
cd ConfigSample
Start Visual Studio Code by typing code followed by a space and a period. This tells Code to start and open the current folder.
code .
When you are in Visual Studio Code, click on the View menu and choose the Integrated Terminal… menu option. A small window opens at the bottom of Visual Studio Code. Into this command window, type the following command to start up the lite-server.
npm start
The lite-server is a small server that runs on your local computer on port 4200 by default. Start your browser of choice and navigate to http://localhost:4200 and you should see a Web page that looks like Figure 1.
Create the Application Settings Service Classes
You’re going to create a class to be used across many other component and service classes in your Angular application. Place this new class into a \shared folder located under \src\app. Create the \shared folder now.
Right mouse-click on the \shared folder and add a new file named appsettings.ts. Create a class in this new file with properties to hold the values you wish to use in your Angular application. For example, if you have a Product form that’s used to add new products to a database, you might want to provide some default values for a couple of the fields. The following code includes a class named AppSettings with two properties; defaultUrl and defaultPrice.
export class AppSettings {
  defaultUrl: string = "";
  defaultPrice: number = 1;
}
An AppSettingsService Class
It’s now time to create an Angular service class to return an instance of the AppSettings class. Add a new TypeScript file under the \shared folder named appsettings.service.ts. Add the code shown in the following code snippet:
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/of';

import { AppSettings } from "./appsettings";

@Injectable()
export class AppSettingsService {
  getSettings(): Observable<AppSettings> {
    let settings = new AppSettings();
    return Observable.of<AppSettings>(settings);
  }
}
The preceding code is a standard representation of an Angular service class. In the getSettings() method, create a new instance of the AppSettings class and return that object from this service. The main reason you create a service here is to provide the flexibility to change the implementation of how you retrieve the settings later. For example, you might choose to read the settings from a JSON file, or make a Web API call to get the settings. Any method that calls this service always makes the same call regardless of where those settings are stored. The calling methods don’t know if the implementation changes; they still receive the same settings class.
The Consumer of the Settings Service
Now that you have a simple application settings service class created that can return an instance of the AppSettings class, test it by creating a consumer. You can build a unit test to call the service, but let’s use something a little more real-world. In this section of the article, build a Product class, a Product HTML page, and a Product component class to add a new product. You’re going to set default values from the AppSettings into the appropriate properties of a Product object, then display those properties on the HTML page.
Add a new folder named \product under the \src\app folder. Add a new file in this folder named product.ts. Add a few properties to this class to represent the typical properties associated with a product, as shown in the following code snippet.
export class Product {
  productId: number;
  productName: string;
  introductionDate: Date;
  price: number;
  url: string;
}
Create an HTML page to which you can bind the properties of the Product class. Add another new file within the \product folder and name it product-detail.component.html. Add the code shown in Listing 1.
Create a component class to work with the HTML page you just created. Add a new file within the \product folder named product-detail.component.ts. Add the appropriate import statements to create the component class. Import the Component class and the OnInit interface. You also need the Product class you just created. Finally, import the AppSettings and AppSettingsService classes in this new component class.
import { Component, OnInit } from "@angular/core";

import { Product } from './product';
import { AppSettings } from '../shared/appsettings';
import { AppSettingsService } from '../shared/appsettings.service';
Each component class you create must be decorated with the @Component() decorator function. Pass in an object to this function to supply some meta-data about how this component is used. The object passed in creates a selector property set to the value product-detail. This value is used as a new element on an HTML page to invoke this component class. You also specify the location of the HTML page you just created by using the templateUrl property.
@Component({
  selector: "product-detail",
  templateUrl: "./product-detail.component.html"
})
After the @Component decorator, export a class named ProductDetailComponent and make sure that it implements the OnInit interface. The ngOnInit() function is called after the ProductDetailComponent class is instantiated. Just write an empty ngOnInit() method for now, and a saveProduct() method too. The saveProduct() method is called from the button in the HTML you created previously.
export class ProductDetailComponent implements OnInit {
  ngOnInit(): void {
  }

  saveProduct(): void {
  }
}
Next, add a constructor to the ProductDetailComponent class. Into this constructor, add a private variable named appSettingsService that’s of the type AppSettingsService. Adding this code in the constructor tells Angular to inject an instance of the service into this class when it creates an instance of this component.
constructor(private appSettingsService: AppSettingsService) { }
In the HTML created earlier, you referenced a product object. Create that property named product now. Also, create a property to assign to the global settings retrieved from the AppSettingsService.
product: Product;
settings: AppSettings;
Write the code within the ngOnInit() method to call the getSettings() method on the appSettingsService object. The subscribe method calls getSettings() and receives the settings back from the service. Subscribe has three parameters: a success function, an error function, and a completed function. In the success function, set the result returned to the settings property in this class. Because all you’re doing is returning a new instance of a class, you can safely ignore the error function. In the completed function, create a new instance of the Product object and assign the price and URL properties to the defaults returned from the settings object.
ngOnInit(): void {
  this.appSettingsService.getSettings()
    .subscribe(settings => this.settings = settings,
      () => null,
      () => {
        this.product = new Product();
        this.product.price = this.settings.defaultPrice;
        this.product.url = this.settings.defaultUrl;
      });
}
Update AppModule
For your components and services to work within your application, you need to inform Angular of the service and the component. Open the app.module.ts file and add the following import statements.
import { FormsModule } from '@angular/forms';

import { ProductDetailComponent } from "./product/product-detail.component";
import { AppSettingsService } from "./shared/appsettings.service";
After you have the appropriate classes imported, modify the metadata properties of the @NgModule decorator function. Add the FormsModule class to the imports property. Add the ProductDetailComponent to the declarations property. Modify the providers property and add the AppSettingsService to the array. The @NgModule decorator function should now look like the following code snippet.
@NgModule({
  declarations: [AppComponent, ProductDetailComponent],
  imports: [BrowserModule, FormsModule],
  providers: [AppSettingsService],
  bootstrap: [AppComponent]
})
Update AppComponent
Open the app.component.html file and delete all the code in this HTML file. Replace the code with the following.
<h1>Configuration Settings Sample</h1>
<div>
  <product-detail></product-detail>
</div>
Save all the changes to all your open files. Switch to your open browser at http://localhost:4200 and you should see your product page appear. Because the product property is bound to the different fields on the HTML page you created, the values are displayed automatically on the page, as shown in Figure 2.
Get Settings from a JSON File
In the previous sample, you hard-coded values into the AppSettings class. Instead of hard-coding the settings values, let’s place those settings into a JSON file. If you don’t already have one, create a folder called \assets under the \src folder. Add a JSON file named appsettings.json in the \assets folder. Add the following code into this file:
{ "defaultUrl": "", "defaultPrice": 2 }
Modify AppSettingsService Class
Open the appsettings.service.ts file and add a few more import statements.
import { Http } from '@angular/http'; import 'rxjs/add/operator/map'; import 'rxjs/add/operator/catch';
Import the Http class from @angular/http. Import the ReactiveJS operators map and catch. Modify the getSettings() method to call the http.get() method, passing in the path to the JSON file you created. Extract the response in the map() function and either return the JSON object from the response or an empty object. In the catch method, call a method named handleErrors(). The complete listing of the code that should be contained in the appsettings.service.ts file is shown in Listing 2.
Handling Exceptions
If the JSON settings file cannot be found, or if some other exception happens, the handleErrors() method is called. In addition to returning an AppSettings object, you might want to record the error information somewhere. In my last article, I created a logging system for Angular; use that if you want. For this article, just log the error to the console window. Regardless, the handleErrors() method returns an instance of an AppSettings class. You don’t want your application to fail just because you can’t get some specific global settings. So, return an instance of AppSettings with the appropriate defaults set. Make your handleErrors() method look like the following code snippet.
private handleErrors(error: any): Observable<AppSettings> { // Log the error to the console switch (error.status) { case 404: console.error("Can't find file: " + SETTINGS_LOCATION); break; default: console.error(error); break; } // Return default configuration values return Observable.of<AppSettings> (new AppSettings()); }
Update AppModule
If you don’t already have it in your application, import the HttpModule from @angular/http in your app.module.ts file. Ensure that the HttpModule is listed in the imports property in the @NgModule() decorator function as well.
Save all the changes to the files you made. Go to the browser and you should now see the values from the JSON file appear in the price and the URL fields.
Using Local Storage
Retrieving the settings from the AppSettings class is great, but the user can’t change those settings. To do that, you must store those settings somewhere. All modern browsers allow you to store key/pair values into a local storage area that persists between browser sessions. This storage is ideal for the global setting values presented in this article.
Saving Data into Local Storage
Open the appsettings.service.ts file and add a constant at the top of this file called SETTINGS_KEY. This constant contains the key value used for retrieving, storing, or deleting values in local storage.
const SETTINGS_KEY = "configuration";
Create a saveSettings() method that accepts an instance of an AppSettings class. Call the setItem() method on the localStorage object, passing in the key value contained in the SETTINGS_KEY constant as the first parameter and the AppSettings object as the second parameter. You need to stringify the AppSettings object in order to convert the JSON object to its appropriate string representation.
saveSettings(settings: AppSettings) { localStorage.setItem(SETTINGS_KEY, JSON.stringify(settings)); }
To test the new saveSettings() method, add a new button to the product-detail.component.html page. Call a method named saveSettings() from the click event of this button by writing the following code:
<button (click)="saveSettings()"> Save Settings </button>
Open the product-detail.component.ts file and add the saveSettings() method to be called from the click event on the button.
saveSettings(): void { this.settings.defaultPrice = this.product.price; this.settings.defaultUrl = this.product.url; this.appSettingsService .saveSettings(this.settings); }
In the saveDefaults() method, take the bound product properties and move them into the appropriate properties in the settings property of the ProductDetailComponent class. Call the saveSettings() method of the appSettingsService class that was injected by Angular into your ProductDetailComponent class.
Retrieve Settings and Store into Local Storage
Now that you have the ability to store values into local storage, you need to modify the getSettings() method in the appsettings.service.ts file to attempt to retrieve those values from local storage. If the values are found in local storage, return those values; otherwise return the values from the JSON file. Modify the getSettings() method to look like Listing 3.
The getSettings() method attempts to get the settings object from local storage by passing the SETTINGS_KEY value to the getItem() method. If the variable named settings contains a value, create an AppSettings object using JSON.parse() and returning an Observable of the AppSettings object.
If nothing is found in local storage, retrieve the values from the file using the http.get() method. If the values are found in the file, save them into local storage by calling the saveSettings() method. After the first time calling the getSettings() method, succeeding calls to this method retrieve values from local storage. You only get the default values from the JSON file the first time.
Delete Settings
In many applications, the user can reset back to factory defaults. To accomplish the same thing in your Angular application, delete the values stored in local storage. If you delete all of the values, then the next time the getSettings() method is called, the original values from the JSON file are read. Add a deleteSettings() method to the AppSettingsService class.
deleteSettings(): void { localStorage.removeItem(SETTINGS_KEY); }
Because the complete AppSettings object is stored within the one key in local storage, call the removeItem() method and pass in the SETTINGS_KEY constant to erase all the settings. To test this method, add a new button to the product-detail.component.html page in the sample.
<button (click)="deleteSettings()"> Delete Settings </button>
Open the product-detail.component.ts file and add the deleteSettings() method. In this method, call the deleteSettings() method on the appSettingsService object that was injected into this component.
deleteSettings(): void { this.appSettingsService.deleteSettings(); }
Save all of your changes, switch back to your browser, and you should see your two new buttons appear on the screen. Try modifying the price and URL fields and click the Save Settings button. Refresh the page to ensure that the new values are retrieved from local storage. Next, try clicking on the Delete Settings button to delete the values from local storage. Refresh the page again to ensure that the values are retrieved from the JSON file.
Create Web API for Configuration Settings.
- Create a new Web API project in Visual Studio
- Create a SQL Server table named AppSettings
- Create Entity Framework classes to retrieve data from the SQL table
- Add a C# ConfigController class
- Enable Cross-Origin Resource Sharing (CORS)
- Convert C# PascalCase property names to JSON camelCase property names
Create a Visual Studio Web API Project
Open Visual Studio 2017 and select File > New > Project… from the main menu. From the New Project dialog, select Visual C# > Web > ASP.NET Web Application (.NET Framework). Set the name to ConfigWebApi and click the OK button. In the New ASP.NET Web Application dialog, select the Web API template and click the OK button. After a minute or two, you’ll have a new Web API project created.
Create SQL Server Table
Create a table named AppSettings on an available SQL Server. This table should have column names that match the Angular AppSettings class..
CREATE TABLE dbo.AppSettings( AppSettingsId int NOT NULL CONSTRAINT PK_AppSettings PRIMARY KEY NONCLUSTERED IDENTITY(1,1), DefaultUrl nvarchar(255) NULL, DefaultPrice money NULL CONSTRAINT DF_AppSettings_DefaultPrice DEFAULT ((1)) )
After creating your table, add a new record to the table with some default values such as the following:
INSERT INTO AppSettings(DefaultUrl, DefaultPrice) VALUES ('', 42);
Create Entity Framework Classes
Right mouse-click on the Models folder in your Visual Studio project and choose Add > ADO.NET Entity Data Model… from the context-sensitive menu. Set the name of the data model to ConfigSample. Select Code First from Database from the first page of the Entity Data Model Wizard dialog and click the Next button. Create a new connection to the SQL Server and select the database where you created the AppSettings table and click the Next button. Drill down to the AppSettings table you created and click the Finish button. After about a second, two Entity Framework classes have been generated and added to your Models folder.
ConfigController Class
You’re now ready to write your Web API controller and a method to retrieve the AppSettings values in your SQL Server table. Right mouse-click on the Controllers folder and select Add > Web API Controller Class (v2.1) from the menu. Set the name of this new controller to ConfigController and click the OK button. Add a using statement at the top of this file.
using ConfigWebApi.Models;
Delete all the methods within this class, as you don’t need them. Add a Get() method, shown in Listing 4, to retrieve the settings from the SQL Server table. When writing a Web API method, return an object that implements the IHttpActionResult interface. The ApiController class has a few methods you call to create a return object. The Get() method creates an instance of the ConfigSample DbContext. Use the FirstOrDefault() method on the AppSettings collection to retrieve the first record in the AppSettings table. If a null is returned from this method, set the return variable to NotFound(). If a record is found, call the Ok() method and pass the settings variable into this method so that the values are returned from this method. If an error occurs, return an InternalServerError object.
Enable CORS
Because you created a new Visual Studio application to run your Web API method, it’s running in a different domain from your Angular application. For your Angular application to call this Web API you must tell the Web API that you’re allowing Cross-Origin Resource Sharing (CORS). Right-mouse click on your Web API project and select Manage NuGet Packages… Click on the Browse tab and search for cors as shown in Figure 3. Install this package into your project.
After CORS is installed into your project, open the \App_Start\WebApiConfig.cs file and add the following line of code in the Register() method:
public static void Register(HttpConfiguration config) { config.EnableCors(); ... }
Go back to your ConfigController class and add the following using statement:
using System.Web.Http.Cors;
Add the EnableCors() attribute just above your ConfigController class. You can get specific on the origins, headers, and methods properties to restrict access to only your Angular application if you want. For the purposes of this article, just set them to accept all requests by specifying any asterisk for each, as shown in the following code snippet:
[EnableCors(origins: "*", headers: "*", methods: "*")] public class ConfigController : ApiController { ... }
Convert C# Class to JSON
C# property names are generally created using Pascal casing. Pascal casing means each letter of each word in the property name is initial-capitalized. The convention for naming properties in JavaScript and TypeScript is to use camel casing. Camel casing is where each word in the property name is initial capitalized except for the first word. So, the DefaultUrl property name in C# is expressed as defaultUrl in TypeScript. When the Web API serializes the AppSettings class to return it to your Angular application, it takes the property names of C# and translates them directly into JSON. You can change the serializer so that the Web API converts the Pascal-cased C# property names into camel-cased TypeScript property names.
Open the WebApiConfig.cs file and add a couple of using statements at the top of this file.
using System.Net.Http.Formatting; using Newtonsoft.Json.Serialization;
Locate the Register() method and just below the first comment, add the lines shown in the following code snippet. Leave the rest of the code the same in the Register() method.
public static void Register( HttpConfiguration config) { // Convert to camelCase var jsonFormatter = config.Formatters .OfType<JsonMediaTypeFormatter>() .FirstOrDefault(); jsonFormatter.SerializerSettings .ContractResolver = new CamelCasePropertyNamesContractResolver(); // Rest of the code below here }
Run your Web API project and when it’s running in the browser, copy the URL from the address line to the clipboard. You’re going to need this address for your Angular service.
Modify AppSettingsService
Switch back to your Angular project in Visual Studio Code. Open the appsettings.service.ts file and locate the constant SETTINGS_LOCATION. Replace the contents of the value with what’s in your clipboard. Then, add on "api/config" to the end. Your constant should now look like the following (with a different port number, of course).
const SETTINGS_LOCATION = "";
Switch to the browser that’s running your Angular project. Click on the Delete Defaults button to delete the previous settings. Refresh your browser, and you should now see the values returned from your Web API call in the appropriate fields on the Web page.
Summary
In this article, you learned an approach for handling application-wide settings for Angular applications. A service approach is the most flexible approach for providing settings to any other class in your application. You can choose to store your settings in a class or in an external JSON file, or make a call to a Web API to retrieve the values. Store the settings retrieved into local storage to allow your users to modify those settings, if desired. If you delete the values from local storage, you allow your user to revert to the original default settings. | https://www.codemag.com/Article/1801021/Configuration-Settings-for-Angular-Applications | CC-MAIN-2020-10 | refinedweb | 3,954 | 56.35 |
/* Interface to functionsSCOPE_H #define MACROSCOPE_H #include "macrotab.h" #include "symtab.h" /* All the information we need to decide which macro definitions are in scope: a source file (either a main source file or an #inclusion), and a line number in that file. */ struct macro_scope { struct macro_source_file *file; int line; }; /* Return a `struct macro_scope' object corresponding to the symtab and line given in SAL. If we have no macro information for that location, or if SAL's pc is zero, return zero. */ struct macro_scope *sal_macro_scope (struct symtab_and_line sal); /* Return a `struct macro_scope' object describing the scope the `macro expand' and `macro expand-once' commands should use for looking up macros. If we have a selected frame, this is the source location of its PC; otherwise, this is the last listing position. If we have no macro information for the current location, return zero. The object returned is allocated using xmalloc; the caller is responsible for freeing it. */ struct macro_scope *default_macro_scope (void); /* Look up the definition of the macro named NAME in scope at the source location given by BATON, which must be a pointer to a `struct macro_scope' structure. This function is suitable for use as a macro_lookup_ftype function. */ struct macro_definition *standard_macro_lookup (const char *name, void *baton); #endif /* MACROSCOPE_H */ | http://opensource.apple.com/source/gdb/gdb-1344/src/gdb/macroscope.h | CC-MAIN-2015-06 | refinedweb | 208 | 62.68 |
Top: Networking: ipstmserver
#include <pinet.h> class ipstmserver { ipstmserver(); int bind(ipaddress ip, int port); int bindall(int port); bool poll(int bindnum = -1, int timeout = 0); bool serve(ipstream& client, int bindnum = -1, int timeout = -1); virtual void sockopt(int socket); }
The ipstmserver class is used on the server side of a stream-oriented client-server application. It bounds itself to a specified port/address and waits until a connection request is received from a client host. For each connection request a server application performs some actions and returns to the waiting state. For better performance your daemon may start a new thread for each client connection.
Ipstmserver can generate exceptions of type (estream*) with a corresponding error code and a message string.
ipstmserver::ipstmserver() constructs an ipstmserver object.
int ipstmserver::bind(ipaddress ip, int port) binds the server to the specified local IP address and port number. This function can be called multiple times for different local addresses and port numbers. Bind() returns a value that can be used later in call to poll() and serve() as the parameter bindnum.
int ipstmserver::bindall(int port) binds the server to all local IP addresses on the specified port number. Can be called multiple times for different port numbers. Bindall() returns a value that can be used later in call to poll() and serve() as the parameter bindnum.
bool ipstmserver::poll(int bindnum = -1, int timeout = 0) polls the listening sockets for connection requests. Bindnum specifies the socket number reutrned by bind() or bindall(). If bindnum is -1 poll() tests all sockets. The second parameter timeout specifies the amount of time in milliseconds to wait for a connection request. If timeout is 0 poll() returns immediately; if it's -1 poll() waits infinitely. This function returns true if there is a new connection request waiting for processing.
bool ipstmserver::serve(ipstream& client, int bindnum = -1, int timeout = -1) polls the specified bound sockets for connection requests. If there is a connection request, serve() opens and prepares the supplied ipstream object for communicating with the client, i.e. client will be active upon return from serve() and will contain the peer IP address and the port number. The meanings of bindnum and timeout are the same as for poll() except that the default value for timeout in this case is -1, i.e. wait infinitely. This function returns true if there is a new connection request and client is active, or false if the call has timed out.
virtual void ipstmserver::sockopt(int socket) - override this method in a descendant class if you want to set up additional socket options (normally, by calling setsockopt()).
See also: ipstream, Utilities, Examples | http://melikyan.com/ptypes/doc/inet.ipstmserver.html | crawl-001 | refinedweb | 448 | 54.93 |
Davaris 118 Report post Posted December 12, 2001 I saw a few people were having trouble with it. So until Cone3D writes one of his tutes heres my main file. You need SDL_ttf and you can get it from the library sectin at. Thanks for the corrections Drizzt. #include "SDL.h" #include "SDL_ttf.h" SDL_Surface *screen; void DrawIMG(SDL_Surface *img, int x, int y) { SDL_Rect dest; dest.x = x; dest.y = y; SDL_BlitSurface(img, NULL, screen, &dest); } int main( int argc, char* argv[] ) { TTF_Font* font; //initialize systems if (SDL_Init(SDL_INIT_VIDEO) <0) { printf("Unable to init SDL: %s\n", SDL_GetError()); return 1; } //set our at exit function atexit ( SDL_Quit ); //create a window screen = SDL_SetVideoMode ( 640, 480, 16, SDL_FULLSCREEN); if (screen == NULL) { printf("Unable to set 640x480 video: %s\n", SDL_GetError()); return 1; } TTF_Init (); // Need the .ttf file in the same directory as // the .exe or give it the full path. font = TTF_OpenFont("Antqua.ttf", 20); TTF_SetFontStyle (font, TTF_STYLE_NORMAL); SDL_Color fg = {255,255,255,0}; SDL_Color bg = {255,0,255,0}; SDL_Surface *text1; SDL_Surface *text2; text1 = TTF_RenderText_Solid(font, "Hello World 1", fg); text2 = TTF_RenderText_Shaded(font, "Hello World 2", fg, bg); int done = 0; //declare event variable SDL_Event event ; //message pump while(done == 0) { DrawIMG(text1, 50, 100); DrawIMG(text2, 200, 200); //look for an event while ( SDL_PollEvent ( &event ) ) { //an event was found if ( event.type == SDL_KEYDOWN ) { if ( event.key.keysym.sym == SDLK_ESCAPE ) { done = 1; } } } SDL_Flip(screen); }//end of message pump //done TTF_CloseFont(font); // Don't forget to release the two text surfaces. // I forgot the name of the function return ( 0 ) ; } Edited by - Davaris on December 12, 2001 4:15:03 PM 0 Share this post Link to post Share on other sites | https://www.gamedev.net/forums/topic/70657-this-is-how-i-use-sdl-text/ | CC-MAIN-2017-34 | refinedweb | 281 | 70.43 |
I'm confused as to when each comes into play and what their exact function is. I'm reading an article that claims that a TextBox's internal Text property would have this getter/setter method.
public string Text { get { return (string)ViewState["Text"]; } set { ViewState["Text"] = value; } }
The article goes on to say that "A very common conceptual error is the thought that Viewstate is responsible for preserving posted data. This is absolutely false; Viewstate has nothing to do with it."
Well, lets say I have a bunch of checkboxes which are added to the aspx page dynamically in C#. When a button is clicked in the browser, I want the state of the checkboxes in the browser to be saved (I want to remember the 'checked/unchecked' state across postbacks). Does ViewState do this or does Loadpostbackdata do this?
thanks. | https://www.daniweb.com/programming/web-development/threads/330343/viewstate-vs-loadpostbackdata | CC-MAIN-2017-43 | refinedweb | 142 | 60.55 |
14 March 2011 22:09 [Source: ICIS news]
HOUSTON (ICIS)--Here is Monday’s end of day ?xml:namespace>
CRUDE: Apr WTI: $101.19/bbl, up 3 cents; Apr Brent: $113.67/bbl, down 17 cents
WTI crude futures experienced a wild ride, whipsawing in response to the catastrophe in
RBOB: Apr: $2.9603/gal, down 2.74 cents
Reformulated gasoline blendstock for oxygenate blending (RBOB) futures fell as the US dollar gained strength versus a basket of currencies. The 9.0 magnitude earthquake that hit offshore
NATURAL GAS: Apr: $3.914/MMBtu, up 2.5 cents
The front-month natural gas futures contract increased to start the week but fell from midday prices above the $4.00/MMBtu level. The pricing strength came from speculation that spot cargoes to
ETHANE: up at 64.5-67.0 cents/gal
Mont Belvieu ethane prices strengthened amid stronger demand, recovering from a week of choppy trading, according to traders.
AROMATICS: benzene up at $4.09-4.11/gal DDP
US Gulf benzene traded several times at $4.10/gal DDP (delivery, duty paid) for March delivery, up 15 cents from a deal on 10 March.
OLEFINS: ethylene up at 54.0-55.5 cents/lb
Spot ethylene for March delivery was discussed in a higher range. No deals could be confirmed. | http://www.icis.com/Articles/2011/03/14/9443813/evening-snapshot-americas-markets-summary.html | CC-MAIN-2015-11 | refinedweb | 219 | 68.77 |
01-14-2011 07:00 PM
How to pass data to " TD1 **TD1Hdl" data type from C# or C++?
My LabVIEW DLL has created the following *.h file.
#include "extcode.h"
#pragma pack(push)
#pragma pack(1)
#ifdef __cplusplus
extern "C" {
#endif
typedef struct {
int32_t dimSize;
LStrHandle OneString[1];
} TD1;
typedef TD1 **TD1Hdl;
void __cdecl StartApplet(TD1Hdl *OneArrayOfStrings, TD1Hdl *OneMoreArrayOfStrings);
void __cdecl GetWndRect(int16_t *left, int16_t *top, int16_t *right, int16_t *bottom);
long __cdecl LVDLLStatus(char *errStr, int errStrLen, void *module);
#ifdef __cplusplus
} // extern "C"
#endif
#pragma pack(pop)
01-17-2011 05:10 PM
Hey Sandeepvd,
Are you trying to access a DLL created in a different language? One way you could do this is by using the Call Library Function Node.
Please expand upon what you would like this program to do.
Ricky
01-23-2011 02:02 AM
Thanks.
What I am trying to do is to pass the parameters to LabVIEW DLL (from a C#, C++ DLL) as 2 arrays of strings.
That's where LabVIEW creates the function definitions mentioned.
Thanks for CallLibraryFunctionNode. I am unsure if it is applicable. Since LabVIEW is not a parent in my case. It is a (grand)child which gets called from a child (C#/C++).
01-24-2011 05:59 PM
Hey Sandeepvd,
So just to clarify, are you wanting to create a LabVIEW DLL? And then with this DLL created from LabVIEW you want to pass parameters that originate from a third language, go to C# and then call this LabVIEW DLL?
Ricky
01-26-2011 10:45 PM
This is correct.
Thank you.
01-28-2011 08:40 AM
Hey Sandeepvd,
Take a look at this Knowledge Base Article: How Can I Pass a Multidimensional Array from Visual Basic to a LabVIEW-Built...
I hope this can help.
Ricky
03-09-2011 08:55 PM
I have run into a similar problem while writing a program using a LabVIEW generated DLL and using it in C# (VS2010). The knowledgebase article gives no useful information and is incomplete in what information it provides.
Is there any sample code or other resources relating to the subject?
03-10-2011 02:45 PM
Hey mgerceker,
I have not been able to find much about using a LabVIEW DLL with C# (VS2010), but I did find an example using VB.NET to call LabVIEW DLL's. I hope this can be of some assistance, perhaps a baseline to see how it can be done.
NI Developer Zone: Simple Example Using VB.NET to Call LabVIEW DlLs
NI Developer Zone: Using VB.NET to Call LabVIEW DLLs That Use 2D Numeric Arrays
Ricky
03-14-2011 07:42 PM
Ricky,
Unfortunately I do not have access to anything older than LV8.6 and cannot use the build scripts for them. Furthermore the pointer access scheme in C# requires the use of manual Marshalling, which makes it harder to deal with handles. I have solved my problem by reshaping my arrays in wrapper functions before and after the real function call, but I am sure there'll be consequences as far as performance is concerned.
Thank you,
Mehmet | https://forums.ni.com/t5/LabVIEW/How-to-pass-data-to-amp-quot-TD1-TD1Hdl-amp-quot-data-type-from/td-p/1415620?profile.language=en | CC-MAIN-2019-47 | refinedweb | 525 | 63.7 |
Introduction: Tutorial to Interface HX711 Balance Module With Load Cell
Description
This.
Specification
Two selectable differential input channels
On-chip power supply regulator for load-cell and ADC analog power supply
On-chip oscillator requiring no external component with optional external crystal
On-chip power-on-reset
Data Accuracy: 24 bit (24 bit analog-to-digital converter chip)
Refresh Frequency: 10/80 Hz
Operation supply voltage range: 4.8 ~ 5.5V
Operation supply Current: 1.6mA
Operation temperature range: -20 ~ +85℃
Demension: Approx. 36mm x 21mm x 4mm / 1.42" x 0.83" x 0.16"
Step 1: Material Preparation
In this tutorial, you will need :
1. Arduino Uno Board and USB
2. HX711 Balance Sensor
3. Load Cell (can be any weight of load cell ie 20KG, 60KG or 100KG)
4. Male Female Jumpers
5. Crocodile Clip Wires
5. Arduino IDE
Step 2: HX711 Pin Description
Step 3: Load Cell Wire Connection
The four wires coming out from the wheatstone bridge on the load cell are usually :
- Excitation+ (E+) or VCC is red
- Excitation- (E-) or ground is black
- Output+ (O+), Signal+ (S+)+ or Amplifier+ (A+) is white
- Output- (O-), Signal- (S-)+ or Amplifier- (A-) is green
Step 4: Hardware Installation
HX711 to Arduino Uno :
- VCC to 5V
- GND TO GND
- SCK to D5
- DT TO D6
Load Cell to HX711
- E+ : RED
- E- : BLACK
- A- : WHITE
- A+ : GREEN
Then, connect your Arduino Uno Board to your Computer via USB.
Step 5: HX711 Library
Communicating with the Balance Module requires a driver for the HX711 sensor. The simplest way to install the driver is to download the HX711 library. Download the ZIP file below > Open Zip File > Extract to your Arduino Uno Library folder. Refer the image above for your references.
Step 6: Sample Source Code.
Step 7: Serial Monitor
When you has succesfully uploaded the sample source code into your Arduino Uno Board. Open Serial Monitor and it will show u as shown in the picture above.
Step 8: Result
when the serial monitor give u a value for reading, it means that u has succesfully interface your load cell. Now, you can set your own calibration factor by adjusting the value using the '+' or 'a' to increase the value OR '-' or 'z' to decrease the value. You have to calibrate only once for each load cell.
NOTE : This tutorial only show you on how to interface HX711 with load cell. We did not use the correct calibration factor. You have to set your own calibration factor for your load cell. Check on this video and tutorial to learn on how to set the calibration factor for load cells. Remember that each load cell with different weight ie. load cell 20KG, 60Kg and 100KG have different value of calibration factor. Thus, you will have to set calibration factor for each load cell with different weight.
Step 9: Videos
This video shows how to interface HX711 Balance Module with Load Cell.
3 People Made This Project!
- michelrogermayoud made it!
- mujahidsaeed made it!
Recommendations
31 Comments
Tip 1 year ago
Load cell
Question 1 year ago
I was interfacing this module with raspberry pi. Sometimes I am getting the wrong value(getting value without load and value is continuously changing).
Anyone having tips for this problem please help me.
Thanks in advance
1 year ago
I will start making this project from tomorrow. I'll brief you in couple of days. Thx.
2 years ago on Introduction
Is this can be use for 5ton of loadcell?
Question 4 years ago
hello.. i have difficulties with my loadcell from step 9 .. i dont actually understand how the software part functions... when i run the code i get "0.00g" on the serial monitor and the value does not changes no matter the load i use on the loadcell....
Answer 2 years ago
You probably don't care about this anymore, but the load cell could be the wrong way around.
I just had this issue and was messing around with the calibration factor for an hour before realizing it was the other way around.
Answer 3 years ago
I am having the same issue
Reply 3 years ago
Have you found an answer to this problem? Have tried different pins, etc and am still having no luck.
5 years ago
Do you have a specific reason for connecting the white and green wire of the load cell the way you did?
In your instructions, the white wire "Output+ (O+), Signal+ (S+)+ or Amplifier+ (A+)" is connected to pin 3 (A-) in J1, which goes to pin 7 (INNA, Ch. A Negative Input) of HX711, and the green wire "Output- (O-), Signal- (S-)+ or Amplifier- (A-)" is connected to pin 4 (A+) in J1, which goes to pin 8 (INPA, Ch. A Positive Input) of HX711.
Also, what do the notations "(S+)+" and "(S-)+" mean? Typo/copy-paste error?
Reply 2 years ago
are just backwards. Other tutorials have them swapped. It’s easy to get them swapped because the boards I have are in the order E+, E-, A-, A+ instead of the more natural +-+- order.
2 years ago
Thanks for the instructable - it has helped me get going quickly with my project using the HX711. One thing though: I was slowed down by the use of the .rar compressed file for the example code. I don't have any program on my PC that extracts that type of file. This is just a few dozen lines of code, so I don't understand why it would need to be compressed. Simply inserting the text so someone could copy and paste would be easiest. Next easiest would be to make the uncompressed Arduino .ino file itself available for download (<2KB). Luckily I found this free online RAR extractor web page/service:
...that did the job for me.
Reply 2 years ago
In fact, to make it even easier for others (and myself, should I need it again), I'll just paste in the code contained HX711.rar right here:
/*
Setup your scale and start the sketch WITHOUT a weight on the scale
Once readings are displayed place the weight on the scale
Press +/- or a/z to adjust the calibration_factor until the output readings match the known weight
Arduino pin 6 -> HX711 CLK
Arduino pin 5 -> HX711 DOUT
Arduino pin 5V -> HX711 VCC
Arduino pin GND -> HX711 GND
*/
#include "HX711.h"
HX711 scale(5, 6);
float calibration_factor = 20; // this calibration factor is adjusted according to my load cell
float units;
float ounces;

void setup() {
Serial.begin(9600);
scale.set_scale();
scale.tare(); // zero the scale with no weight on it
}

void loop() {
scale.set_scale(calibration_factor);
Serial.print("Reading: ");
units = scale.get_units(10); // average of 10 readings
if (units < 0)
{
units = 0.00;
}
ounces = units * 0.035274;
Serial.print(units);
Serial.print(" grams");
Serial.print(" calibration_factor: ");
Serial.print(calibration_factor);
Serial.println();
if(Serial.available())
{
char temp = Serial.read();
if(temp == '+' || temp == 'a')
calibration_factor += 1;
else if(temp == '-' || temp == 'z')
calibration_factor -= 1;
}
}
Reply 2 years ago
I suggest you download 7-Zip; it's open-source software that handles pretty much every kind of compression (even tar.gz and tar) and is available on all computer platforms.
Cheers, happy making
Question 2 years ago on Step 3
How can I check whether a load cell is defective or in working condition? Please help!
Question 2 years ago
Does anyone know of code to drive a NeoPixel bar graph from the HX711?
Question 2 years ago on Step 9
I'm soldering my HX711 to the load cell and checking continuity between A- and A+ and between E- and E+. Am I wrong that I should have continuity?
Question 4 years ago on Step 5
Sir, I need to use the HX711 ADC converter and a weighing load cell with an 8051. I need the whole program in both assembly and C, because I am a beginner. Please help.
Answer 2 years ago
Answer 4 years ago
Please help me
Question 3 years ago
Thanks in advance to anybody who can help me. I am using a 10 kg straight-bar load cell. Please help me in full detail with how to calibrate it; I am a complete beginner.
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago
#11840 closed (worksforme)
Form validation should check for IOErrors
Description
Forms currently break on various types of images when PIL raises an IOError exception. This should be handled as a form error instead of crashing Django. A traceback is below:
File "main/views.py", line 244, in ad_create_category
if form.is_valid() and image_formset.is_valid():
File "lib/python2.6/site-packages/django/forms/formsets.py", line 237, in is_valid
if bool(self.errors[i]):
File "lib/python2.6/site-packages/django/forms/formsets.py", line 211, in _get_errors
self.full_clean()
File "lib/python2.6/site-packages/django/forms/formsets.py", line 250, in full_clean
self._errors.append(form.errors)
File "lib/python2.6/site-packages/django/forms/forms.py", line 111, in _get_errors
self.full_clean()
File "lib/python2.6/site-packages/django/forms/forms.py", line 238, in full_clean
value = field.clean(value, initial)
File "lib/python2.6/site-packages/django/forms/fields.py", line 511, in clean
trial_image.load()
File "lib/python2.6/site-packages/PIL/ImageFile.py", line 155, in load
self.load_prepare()
File "lib/python2.6/site-packages/PIL/PngImagePlugin.py", line 337, in load_prepare
raise IOError("cannot read interlaced PNG files")
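The behavior the reporter is asking for can be sketched framework-free as follows. Note that `ValidationError` and `load_image` below are stand-ins for `django.forms.ValidationError` and PIL's image loading, not the actual Django code:

```python
# Framework-free sketch of the requested fix: an IOError raised while
# loading an image is reported as a validation error instead of
# propagating as a server crash. ValidationError and load_image are
# illustrative stand-ins, not Django's or PIL's real classes.
class ValidationError(Exception):
    pass

def load_image(data):
    # Stand-in for Image.open(file); trial_image.load(). PIL raises
    # IOError for files it cannot decode (e.g. interlaced PNGs).
    if data == b"interlaced-png":
        raise IOError("cannot read interlaced PNG files")
    return data

def clean_image(data):
    try:
        return load_image(data)
    except IOError:
        raise ValidationError("Upload a valid image.")
```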
Change History (4)
comment:1 Changed 6 years ago by Honza_Kral
- Component changed from Uncategorized to Forms
- Needs documentation unset
- Needs tests set
- Patch needs improvement unset
- Resolution set to worksforme
- Status changed from new to closed
comment:2 Changed 6 years ago by stavros
Sadly, I've searched high and low for such a file, but none of my users has responded. Looking at your code it is indeed miraculous that this happens, but looking at mine elucidates the situation:
def clean(self, data, initial=None):
    ...
    # load() is the only method that can spot a truncated JPEG,
    # but it cannot be called sanely after verify()
    trial_image = Image.open(file)
    trial_image.load()
There's no exception block to be found there. I assume it was added after 1.1?
comment:3 Changed 6 years ago by Honza_Kral
No, the try: except: block is present in the 1.1 tarball, just checked just to be sure.
comment:4 Changed 6 years ago by stavros
That's odd, I checked my home install and it has it, but both production and staging don't, and django.VERSION reports 1.1.0 there. This is positively strange, but I upgraded and everything is fine now, thank you.
can you provide a test for that? or just a file that results in this error? Looking through the code that shouldn't happen:
I am closing it as works for me, feel free to reopen it if you find a file that results in this error. Thanks! | https://code.djangoproject.com/ticket/11840 | CC-MAIN-2016-07 | refinedweb | 459 | 60.41 |
On May 1, 2008, at 4:51 PM, glyph@divmod.com wrote:
I'm not on distutils-sig, but this is probably of little interest to python-dev. Please Cc: me if you think my continued input would be useful to this discussion.
On 08:25 pm, zooko@zooko.com wrote:
almost always in the phrase, "please do not use distutils to do a system install of Twisted, use the specific package for your platform".
This is a tangent, but why do you give that advice? I typically give people the opposite advice on how to install Twisted.
The #1 reason:
- distutils does not provide an uninstaller.
This means that a user who has installed a Python library - but especially a package like Twisted, which uses a shared namespace with other libraries that use it, twisted.plugins - can't easily get rid of it. I only ever use 'setup.py' in conjunction with '-- prefix'; in my opinion, the *default* behavior of distutils should be "--prefix ~/.local".
Definitely not the only reason, though. Even if distutils had a great uninstaller, I still probably wouldn't recommend it...
- distutils will interfere with the system package manager, potentially breaking it, by writing files to locations reserved for the system package manager (/usr, et al.)

- distutils won't uninstall a system-installed version of the package first, so if you use e.g. --force to overwrite your system files, you may end up with leftover system-packaged (incompatible, earlier-version) plugins or modules which break your distutils install

- running arbitrary, non-vendor-supported code as root as a matter of habit is, in my humble opinion, bad; distutils requires you to run as root for the default behavior to work. The system package manager typically requires root permission too, but at least it's the sort of thing which has been audited.

- not only can you not reverse the process, there's no way to *tell* if distutils has crapped all over your system installation of a particular package

- setuptools causes seemingly random breakages (in packages which support it), and I don't want to deal with bug reports from users related to packaging; packagers are capable of dealing with setuptools' interactions with the platform and creating a nice, neat bundle that works as expected.

- when you say "distutils", what do you mean? Running 'setup.py' from some random revision of trunk? Doing 'sdist' from trunk, then install? Using operating-system packages at least suggests that you're using a release, or if not an actual release, that you've gone through something approximating the actual release/build process that we suggest for users.

- if the user is installing for development anyway, and not for deployment, then why bother doing any installation step at all? It's probably better to just drop an SVN checkout on PYTHONPATH somewhere.

And, finally...

- why bother having installers prepared for particular systems, if they are not the preferred way of doing things? If and when distutils is ready to be the thing I will suggest to users, I imagine that we'll stop having operating system packages. (Of course, that begs the question why distutils would have commands like "bdist_wininst" - it's difficult to beat the native packages for convenience.)
These are very good arguments for not using distutils to install packages into a system Python.
Well said.
I'll note that I *never* use distutils that way. :) (I may be in the minority though.)
Jim
-- Jim Fulton Zope Corporation | https://mail.python.org/archives/list/distutils-sig@python.org/message/5P7PGTFY4OMSHNBOBSQO7O5ASREGC6UN/ | CC-MAIN-2021-49 | refinedweb | 581 | 54.02 |
How do I add a question to this page?
Anyone may edit this page to add their own content. That is why this page is part of a Wiki and not a hardcoded static file in the FAQ.
However, do not add questions without answers to this page. If you have a question about how to do something in Tomcat which has not been addressed yet, ask the tomcat-user list. Once you've figured out how to fix your problem, come back and update the Wiki to allow the rest of us to benefit from what you've learned!
How do I contribute to Tomcat's documentation?
Download the source bundle or grab the source XML file from Subversion repository. If you are not familiar with Subversion, see.
The docs are in the webapps/docs subdirectory. They are in XML format and get processed into the HTML documentation as part of the Tomcat release.
Edit the documentation XML file(s) as you wish. The xdocs format is self-explanatory: use normal HTML markup, and add <section> or <subsection> tags as you see fit. Look at the existing docs as examples. Make sure you use valid XML markup.
How do I set up another tomcat service on Windows, sharing the same Tomcat Home ?
This script sets up a Tomcat base directory and calls tomcat5.exe to create a Windows service, which will use the given Tomcat home for the binaries and the Tomcat base you create. See TomcatCreateWindowsService
How do I install Tomcat as a service under Unix?
Create a shell program to start Tomcat automatically. Each UNIX varies in how it starts up automatic services, but there are two main variants:
BSD::In a typical BSD system, there are a series of start up scripts in /etc starting with rc.. Look for, or create, a file called /etc/rc.local and enter the appropriate instructions to start up Tomcat there as a shell script.
System V::In a typical UNIX System V setup, there is a directory containing startup scripts, and other directories which contain links to these startup scripts. Create the appropriate startup script for your setup, then create the appropriate links.
For more information on each, check your system documentation.
It also makes a lot of sense to use the JavaServiceWrapper.
How do I run Tomcat on port 80 (a privileged port) without running as root?
Another method is to use SetUID scripts (assuming you have the capability) to do this. Here's how I do it.
Create a file called foo.c with this content (replace "/path/startupscript" with the tomcat startup script):
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main( int argc, char *argv[] )
{
    if ( setuid( 0 ) != 0 ) perror( "setuid() error" );
    printf( "Starting ${APPLICATION}\n" );
    execl( "/bin/sh", "sh", "/path/startupscript", (char *) 0 );
    return 1;
}
Run the following as root (replacing tmp with whatever you want the wrapper binary to be called, and replacing XXXXX with whatever group you want to be able to start and stop Tomcat):
gcc tmp.c -o tmp
chown root:XXXXX tmp
chmod ugo-rwx tmp
chmod u+rwxs,g+rx tmp
Now members of the tomcat group should be able to start and stop tomcat. One caveat though, you need to ensure that that your tomcat startup script is not writable by anyone other than root, otherwise your users will be able to insert commands into the script and have them run as root (very big security hole).
- Another way is to use iptables to redirect ports 80 and 443 to unprivileged ports (>1024):
* /sbin/iptables -A FORWARD -p tcp --destination-port 443 -j ACCEPT
* /sbin/iptables -t nat -A PREROUTING -j REDIRECT -p tcp --destination-port 443 --to-ports 8443
* /sbin/iptables -A FORWARD -p tcp --destination-port 80 -j ACCEPT
* /sbin/iptables -t nat -A PREROUTING -j REDIRECT -p tcp --destination-port 80 --to-ports 8080
To make the rules persistent, run /sbin/iptables-save or /etc/init.d/iptables save.
BSD-based Unix systems such as Mac OS X use a tool similar to iptables, called ipfw (for Internet Protocol Fire Wall). This tool is similar in that it watches all network packets go by, and can apply rules to affect those packets, such as "port-forwarding" from port 80 to some other port such as Tomcat's default 8080. The syntax of the rules is different than iptables, but the same idea. For more info, google and read the man page. Here is one possible rule to do the port-forwarding:
sudo ipfw add 100 fwd 127.0.0.1,8080 tcp from any to any 80 in
How do I configure Tomcat Connectors?
On the Tomcat FAQ, there is a list of Other Resources which should have information pointing you to the relevant pages.
Each connector has its own configuration, and its own set up. Check them for more information.
In particular, here are a number of locations for Tomcat Connectors:
Tomcat Connectors Documentation
Configuring Tomcat Connectors for Apache
AJP Connector in Tomcat 7 Configuration Reference
The following excellent article was written by Mladen Turk. He is a Developer and Consultant for JBoss Inc in Europe, where he is responsible for native integration. He is a long time commiter for Jakarta Tomcat Connectors, Apache Httpd and Apache Portable Runtime projects.
Fronting Tomcat with Apache or IIS - Best Practices
John Turner has an excellent page about Using Apache HTTP with Apache Tomcat. Several different connectors have been built, and some connector projects have been abandoned (so beware of old documentation).
How do I configure Tomcat to work with IIS and NTLM?
See TomcatNTLM.
How do I enable Server Side Includes (SSI)?
See
How do I install the Administration web app?
Tomcat 5.5
If you install Tomcat 5.5 binaries, the Administration web app is not bundled with it; this describes how to add the Administration web app to your Tomcat 5.5 installation. (Tomcat 4.1 comes with the Administration web app as part of the binary).
The following refers to a Tomcat 5.5 set up on Windows 2000, so your path names will be different on *nix platforms. In this example, Tomcat 5.5.17 in installed in c:\Program Files\Apache Software Foundation\Tomcat 5.5 (this is my CATALINA_HOME).
Unzip or untar (be careful to use GNU tar) the file containing the administration web app files (eg. apache-tomcat-5.5.17-admin.zip) to a temporary directory, eg. c:\temp.
Copy c:\temp\apache-tomcat-5.5.17\conf\Catalina\localhost\admin.xml to the directory c:\Program Files\Apache Software Foundation\Tomcat 5.5\conf\Catalina\localhost.
Copy the entire directory tree c:\temp\apache-tomcat-5.5.17\server\webapps\admin
to the directory c:\Program Files\Apache Software Foundation\Tomcat 5.5\server\webapps. This is an overlay: you are copying into the existing \server\webapps directory, and the admin directory with its contents will be the only thing added there.
Add a line to your c:\Program Files\Apache Software Foundation\Tomcat 5.5\conf\tomcat-users.xml file so that you have a user who has admin role. For example, add this line just before the last line (containing </tomcat-users>) of the file:
<user username="admin" password="makesomethingup" roles="admin,manager"/>
- Restart Tomcat.
Now when you visit you should see a page that asks for a user name and password. If you still see the "no longer loaded" error message in your browser, you must either force a full reload of the web page (in Firefox, hold down Shift key while clicking on the Reload button) or just restart your browser completely..
How do I authenticate Manager access via JNDI to Active Directory for multiple Tomcat instances?
ADS insists that the CN of every group be unique, but the Manager app. always uses the group CN=manager. The default can be changed, but it's hard to find and you have to do it over every time you upgrade. Instead, pick an attribute other than the common name -- for example, "description" -- that doesn't have to be unique, name it as the RoleName attribute of the Realm (in server.xml, which you'll be editing anyway), and set that attribute to "manager" in each group you create. Create an OU for each Tomcat instance's groups and give that OU's DN as the RoleBase in that instance's server.xml. Create a uniquely-named group in each instance's OU with the chosen attribute ("description" for example) set to "manager"..
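As a sketch, a JNDIRealm configured along these lines might look like the following in server.xml. The host name, DNs, and search filters below are placeholders, not values from the original description:

```xml
<!-- Hypothetical JNDIRealm using the "description" attribute as the
     role name; connection details and DNs are placeholders. -->
<Realm className="org.apache.catalina.realm.JNDIRealm"
       connectionURL="ldap://ads.example.com:389"
       userBase="OU=Users,DC=example,DC=com"
       userSearch="(sAMAccountName={0})"
       roleBase="OU=Tomcat1Groups,DC=example,DC=com"
       roleName="description"
       roleSearch="(member={0})" />
```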
How do I make my web application be the Tomcat default application?
Congratulations. You have created and tested a first web application (traditionally called "mywebapp"), users can access it via the URL "". You are very proud and satisfied. But now, how do you change the setup, so that "mywebapp" gets called when the user enters the URL "" ?
The pages and code of your "mywebapp" application currently reside in (CATALINA_BASE)/webapps/mywebapp/. In a standard Tomcat installation, you will notice that under the same directory (CATALINA_BASE)/webapps/, there is a directory called ROOT (the capitals are important, even under Windows). That is the residence of the current Tomcat default application, the one that is called right now when a user calls up "[:port]". The trick is to put your application in its place.
First stop Tomcat.
Then before you replace the current default application, it may be a good idea to make a copy of it somewhere else.
Then delete everything under the ROOT directory, and move everything that was previously under the (CATALINA_BASE)/webapps/mywebapp/ directory, toward this (CATALINA_BASE)/webapps/ROOT directory. In other words, what was previously .../mywebapp/WEB-INF should now be .../ROOT/WEB-INF (and not .../ROOT/mywebapp/WEB-INF).
Just by doing this, you have already made you webapp into the Tomcat default webapp.
Restart Tomcat and you're done.
Call up "" and enjoy.
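On a Unix system, the steps above can be sketched as follows. The commands run against a throwaway directory tree so they are safe to try; point CATALINA_BASE at your real Tomcat instance instead:

```shell
# Demonstration of the steps above against a scratch CATALINA_BASE;
# replace the mktemp setup with your real Tomcat directory.
CATALINA_BASE=$(mktemp -d)
mkdir -p "$CATALINA_BASE/webapps/ROOT" "$CATALINA_BASE/webapps/mywebapp/WEB-INF"

cd "$CATALINA_BASE/webapps"
cp -r ROOT "/tmp/ROOT.backup.$$"   # 1. keep a copy of the old default app
rm -rf ROOT/*                      # 2. empty the ROOT directory
mv mywebapp/* ROOT/                # 3. move your app's contents into ROOT
rmdir mywebapp
```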
Addendum 1 : If you are deploying your application as a war file..
The above instructions relate to the situation where you are "manually" deploying your application as a directory-and-files structure under the /webapps directory. If instead you are using the "war" method to deploy your application, the principle is about the same :
- delete the ROOT directory
- name your war file "ROOT.war" (capitals mandatory)
- drop the ROOT.war file directly in the /webapps directory.
Tomcat will automatically deploy it.
For more information about this topic in general, consult this page : "Configuration Reference / Context"
Addendum 2 : If for some reason you want another method..
If, for some reason, you do not want to deploy your application under the CATALINA_BASE/webapps/ROOT subdirectory, or you do not want to name your war-file "ROOT.war", then read on. But you should first read this : "Configuration Reference / Context" and make sure you understand the implications.
The method described above is the simple method. The two methods below are more complex, and the second one has definite implications on the way you manage and run your Tomcat.
Method 2.1
- Place your war file outside of CATALINA_BASE/webapps (it must be outside to prevent double deployment).
- Place a context file named ROOT.xml in CATALINA_BASE/conf/<engine name>/<host name>. The single <Context> element in this context file MUST have a docBase attribute pointing to the location of your war file. The path element should not be set - it is derived from the name of the .xml file, in this case ROOT.xml. See the Context Container above for details.
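A minimal ROOT.xml along these lines might look like the following sketch (the war location is a placeholder):

```xml
<!-- CATALINA_BASE/conf/Catalina/localhost/ROOT.xml -->
<!-- The context path is derived from the file name (ROOT -> ""), so
     only docBase is set; the war path below is a placeholder. -->
<Context docBase="/opt/wars/mywebapp.war" />
```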
Method 2.2
If you really know what you are doing..
- leave your war file in CATALINA_BASE/webapps, under its original name
- turn off autoDeploy and deployOnStartup in your Host element in the server.xml file.
- explicitly define all application Contexts in server.xml, specifying both path and docBase. You must do this, because you have disabled all the Tomcat auto-deploy mechanisms, and Tomcat will not deploy your applications anymore unless it finds their Context in the server.xml.
Note that this last method also implies that in order to make any change to any application, you will have to stop and restart Tomcat.
How do I load a properties file?
Use a ResourceBundle. See the Java docs for the specifics of how the ResourceBundle class works. Using this method, the properties file must go into the WEB-INF/classes directory or into a jar file contained in the WEB-INF/lib directory.
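As a sketch, loading a key from a bundle looks like this. Here the bundle is built from an in-memory string so the example is self-contained; in a webapp you would instead ship a file such as myapp.properties under WEB-INF/classes and call ResourceBundle.getBundle("myapp"). The bundle name and key are made up:

```java
import java.io.StringReader;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class BundleDemo {
    public static void main(String[] args) throws Exception {
        // In a webapp: ship WEB-INF/classes/myapp.properties and use
        // ResourceBundle.getBundle("myapp"). Here the same key=value
        // format is read from a string to keep the sketch runnable.
        ResourceBundle bundle = new PropertyResourceBundle(
                new StringReader("greeting=Hello\n"));
        System.out.println(bundle.getString("greeting"));
    }
}
```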
How do I share sessions across web apps?
You cannot share sessions directly across web apps, as that would be a violation of the Servlet Specification. There are workarounds, including using a singleton class loaded from the common classloader repository to hold shared information, or putting some of this shared information in a database or another data store. Some of these approaches have been discussed on the tomcat-user mailing list, whose archives you should search for more information.
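One hedged sketch of the singleton workaround mentioned above — class and method names here are illustrative, and the class must live in a classloader common to the webapps (e.g. Tomcat's shared/lib), not inside an individual webapp's WEB-INF:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative singleton for cross-webapp state. Deploy it in a
// classloader shared by all webapps so every webapp sees the same
// instance; placing it in WEB-INF would give each webapp its own copy.
public final class SharedState {
    private static final SharedState INSTANCE = new SharedState();
    private final Map<String, Object> data =
            new ConcurrentHashMap<String, Object>();

    private SharedState() { }

    public static SharedState getInstance() { return INSTANCE; }

    public void put(String key, Object value) { data.put(key, value); }

    public Object get(String key) { return data.get(key); }
}
```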
Sharing sessions across containers for clustering or replication purposes is a different matter altogether.
How can I access members of a custom Realm or Principal?
When you create a custom subclass of RealmBase or GenericPrincipal and attempt to use those classes in your webapp code, you'll probably have problems with ClassCastException. This is because the instance returned by request.getUserPrincipal() is of a class loaded by the server's classloader, and you are trying to access it through your webapp's classloader. While the classes may be otherwise exactly the same, different (sibling) classloaders make them different classes.
This assumes you created a MyPrincipal class and put it in Tomcat's server/classes (or lib) directory, as well as in your webapp's WEB-INF/classes (or lib) directory. Normally, you would put custom realm and principal classes in the server directory because they depend on other classes there.
Here's what you would like to do, but it throws ClassCastException:
MyPrincipal p = (MyPrincipal) request.getUserPrincipal();
String emailAddress = p.getEmailAddress();
Here are 4 ways you might get around the classloader boundary:
1) Reflection
Principal p = request.getUserPrincipal();
String emailAddress = (String) p.getClass().getMethod("getEmailAddress").invoke(p);
2) Move classes to a common classloader
You could put your custom classes in a classloader that is common to both the server and your webapp - e.g., either the "common" or bootstrap classloaders. To do this, however, you would also need to move the classes that your custom classes depend on up to the common classloader, and that seems like a bad idea, because there are many of them and they are core server classes.
3) Common Interfaces
Rather than move the implementing custom classes up, you could define interfaces for your custom classes, and put the interfaces in the common directory. Your code would look like this:
public interface MyPrincipalInterface extends java.security.Principal {
    public String getEmailAddress();
}

public class MyPrincipal implements MyPrincipalInterface {
    ...
    public String getEmailAddress() {
        return emailAddress;
    }
}

public class MyServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request,
                         HttpServletResponse response)
            throws ServletException, IOException {
        MyPrincipalInterface p =
            (MyPrincipalInterface) request.getUserPrincipal();
        String emailAddress = p.getEmailAddress();
        ...
    }
}
Notice that this method gives you pretty much the webapp code you wanted in the first place
4) Serializing / Deserializing
You might want to try serializing the response of 'request.getUserPrincipal()' and deserialize it to an instance of [webapp]MyPrincipal.
How do I get direct access to a Tomcat Realm?
Credit: This code is from a post by Yoav Shapira in the user list
Sometimes direct access to the Tomcat realm object is needed; to do this, the following code can be used. Be aware, however, that by using this, your application is relying on a Tomcat extension and is therefore non-standard.
Note that in order for this to work the Context of the web application in question needs to have its privileged attribute set to "true", otherwise web apps do not have access to the Tomcat classes.
Server server = ServerFactory.getServer();
// Note, this assumes the Container is "Catalina"
Service service = server.findService("Catalina");
Engine engine = (Engine) service.getContainer();
Host host = (Host) engine.findChild(engine.getDefaultHost());
// Note, this assumes your context is "myContext"
Context context = (Context) host.findChild("myContext");
Realm realm = context.getRealm();
How do I redirect System.out and System.err to my web page?
I have met a situation where I needed to redirect a portion of standard output (System.out, STDOUT) and standard error (System.err, STDERR) to my web page instead of a log file. An example of such an application is a compiler research platform that our research team is putting online so that anybody can quickly compile-test their programs online. Naturally, the compilers dump some of their output to STDERR or STDOUT, and they are not part of the web application. Thus, I badly needed the compiler-related streams to be redirected to my web editor interface. Having found no easy instructions on how to do that led me to write up this quick HOWTO. The HOWTO is based on Servlets, but similar arrangements can be made for JSPs. The example below shows the essentials, with most non-essentials removed.
public class WebEditor extends HttpServlet {
    ...
    public void doGet(HttpServletRequest poHTTPRequest,
                      HttpServletResponse poHTTPResponse)
            throws IOException, ServletException {
        poHTTPResponse.setContentType("text/html");
        ServletOutputStream out = poHTTPResponse.getOutputStream();

        out.println("<html>");
        out.println("<head>");
        out.println("<title>WebEditor Test $Revision: 1.6 $</title>");
        out.println("</head>");
        out.println("<body>");
        out.println("<h3>WebEditor Test $Revision: 1.6 $</h3>");
        out.println("<hr />");

        // Back up the streams
        PrintStream oStdOutBackup = System.out;
        PrintStream oStdErrBackup = System.err;

        // Redirect STDOUT and STDERR to the servlet's output stream
        PrintStream oWebOut = new PrintStream(out);
        System.setOut(oWebOut);
        System.setErr(oWebOut);

        // ... run the code whose output should appear on the page ...

        // Restore original STDOUT and STDERR
        System.setOut(oStdOutBackup);
        System.setErr(oStdErrBackup);

        out.println("<hr />");
        out.println("</body>");
        out.println("</html>");
    }
}
A few caveats arise: for instance, while System.out and System.err are redirected as per the above, none of this output is logged to files. You will need to do more legwork to add such logging. It is important to back up and restore the original streams, as the above example does.
How do I connect to a Websphere MQ (MQ Series) server using JMS and JNDI?
Basically, this works just as described in: Within your application, you are using the standard JNDI and JMS API calls. In web.xml (the container independent application descriptor), you specify resource references (stub resources). And in context.xml (the container specific application descriptor), you are actually configuring the JMS connection.
More to the point, here's some example code, which might be added to a Servlet. The example sends a message to an MQ queue:

Context ctx = (Context) new InitialContext().lookup("java:comp/env");
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/MyQCF");
QueueConnection qc = qcf.createQueueConnection();
Queue q = (Queue) ctx.lookup("jms/MyQ");
QueueSession qs = qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
TextMessage tm = qs.createTextMessage();
tm.setText("Hi, there!");
QueueSender sender = qs.createSender(q);
sender.send(tm);
sender.close();
qs.close();
qc.close();
Note the following:
1. I have intentionally omitted proper resource handling. For example, one ought to ensure that qc.close() is always called by using a try { .. } finally { ..} block.
2. The code contains absolutely no references to com.ibm.mq*.jar.
3. There are only two items, which need configuration: "jms/MyQCF", and "jms/MyQ". We'll find them again in web.xml, and context.xml.
We have now written the code. Additionally, our web application needs the following files, and directories:
+-- META-INF
|   +-- context.xml
+-- WEB-INF
    +-- web.xml
    +-- lib
        +-- com.ibm.mq.jar
        +-- com.ibm.mqjms.jar
        +-- connector.jar
        +-- dhbcore.jar
        +-- geronimo-j2ee-management_1.0_spec-1.0.jar
        +-- geronimo-jms_1.1_spec-1.0.jar
The application descriptor web.xml looks just the same as usual, with the exception of the following lines:
<resource-env-ref>
  <resource-env-ref-name>jms/MyQCF</resource-env-ref-name>
  <resource-env-ref-type>javax.jms.QueueConnectionFactory</resource-env-ref-type>
</resource-env-ref>
<resource-env-ref>
  <resource-env-ref-name>jms/MyQ</resource-env-ref-name>
  <resource-env-ref-type>javax.jms.Queue</resource-env-ref-type>
</resource-env-ref>
This is simply telling, that the items "jms/MyQCF", and "jms/MyQ" exist, and are instances of QueueConnectionFactory, and Queue, respectively. The actual configuration is in context.xml:
<Resource name="jms/MyQCF"
          auth="Container"
          type="com.ibm.mq.jms.MQQueueConnectionFactory"
          factory="com.ibm.mq.jms.MQQueueConnectionFactoryFactory"
          description="JMS Queue Connection Factory for sending messages"
          HOST="<mymqserver>"
          PORT="1414"
          CHAN="<mychannel>"
          TRAN="1"
          QMGR="<myqueuemanager>"/>

<Resource name="jms/MyQ"
          auth="Container"
          type="com.ibm.mq.jms.MQQueue"
          factory="com.ibm.mq.jms.MQQueueFactory"
          description="JMS Queue for receiving messages from Dialog"
          QU="<myqueue>"/>
Basically, you just have to enter your values for <mymqserver> (the WebSphere MQ server's host name), <mychannel> (the channel name), <myqueuemanager> (the queue manager name), and <myqueue> (the queue name). Both these values, the associated names (HOST, PORT, CHAN, ...), and their collection are truly MQ specific. For example, with ActiveMQ, you typically have a broker URL and a broker name, rather than HOST, PORT, CHAN, ...
The main thing to know (and the reason why I am writing this, because it took me some hours to find out): How do I know the property names, their meaning, and possible values? Well, there is an excellent manual, called "WebSphere MQ Using Java". It should be easy to find by entering the title into Google. The manual contains a section, called "Administering JMS objects", which describes the objects being configured in JNDI. But the most important part is the subsection on "Properties", which contains all the required details.
How do I use DataSources with Tomcat?
See UsingDataSources
How do I use Hibernate and database connection pooling with Tomcat?
See TomcatHibernate
How do I use DataSourceRealms for authentication and authorization?
See TomcatDataSourceRealms
Troubleshooting
Tomcat crashed! What do I do now?
These steps are in no particular order ...
- Read the Tomcat FAQ
- Read the Tomcat RELEASE NOTES - there is something about Linux in it
- First look at the stack traces. I hope a stack trace was produced before the failure aborted the JVM process. After you get a few stack traces, see if a pattern appears. Trace back to source code if needed.
Patch (or unpatch!) the operating system as needed.
Patch (or unpatch!) the JVM (Java Virtual Machine).
- Linux Problem? - read the RELEASE NOTES!
- Look at commercial vendor support for other servlet engines. Sometimes the problem is universal regardless of servlet engine and may be a JVM/OS/application code issue
- Search Google for web pages - maybe someone else had this problem. I'll bet they did.
- Search Google news groups
- If the JVM is from a commercial vendor, (eg: IBM, HP) check their release notes and news groups
- Using a database? Make sure JDBC type 4 drivers are used. Check their release notes.
- Tweak JVM memory parameters. Setting memory too high can be as bad as having memory too low. If your memory settings are set too high, Java 1.3 JVMs may freeze while waiting for the entire garbage collection to finish. Also if the JVM has too much memory, if may be starving other resources on the machine which are needed which may be causing unforeseen exceptions. In a nutshell, throwing more memory doesn't always solve the problem!
- Turn off the Java JIT compiler. See the Java Docs on how to do this.
I'm encountering classloader problems when using JNI under Tomcat
The important thing to know about using JNI under Tomcat is that one cannot place the native libraries OR their JNI interfaces under the WEB-INF/lib or WEB-INF/classes directories of a web application and expect to be able to reload the webapp without restarting the server. The class that calls System.loadLibrary(String) must be loaded by a classloader that is not affected by reloading the web application itself.
Thus, if you have JNI code that follows the convention of including a static initilaizer like this:
class FooWrapper { static { System.loadLibrary("foo"); } native void doFoo(); }
then both this class and the shared library should be placed in the $CATALINA_HOME/shared/lib directory.
Note that under Windows, you'll also need to make sure that the library is in the java.library.path. Either add %CATALINA_HOME%\shared\lib to your Windows PATH environment variable, or place the DLL files in another location that is currently on the java.library.path. There may be a similar requirement for UNIX based system (I haven't checked), in which case you'd also have to add $CATALINA_HOME/shared/lib to the PATH environment variable. (Note: I'm not the original author of this entry.)
The symptom of this problem that I encountered looked something like this -
java.lang.UnsatisfiedLinkError: Native Library WEB-INF/lib/libfoo.so already loaded in another classloader at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1525)
If the UnsatisfiedLinkError is intermittent, it may be related to Tomcat's default session manager. It restored previous sessions at startup. One of those objects may load the JNI library. Try stopping the Tomcat JVM, deleting the SESSIONS.ser file, then starting Tomcat. You may consider changing the session persistence manager at this time.
Note that Tomcat 6.0.14 the $CATALINA_HOME/shared/lib directory does not exist. You will need to add this and you will need to edit $CATALINA_HOME/conf/catalina.properties so that the shared.loader line looks like this shared.loader=$CATALINA_HOME/shared/lib
How do I debug a Tomcat application?
There is nothing magical about debugging a Tomcat application. All you need is an IDE and two environment variables.
If you have not already done so begin by creating a new Tomcat context for your application. Navigate to TOMCAT_HOME\conf\Catalina\localhost and create a new file, say, myapp.xml. This will become part of your url, so to access your app you'll have to type.
- Enter the following in myapp.xml:
<Context docBase="c:/workspace/myapp/WebRoot" />
This assumes you have a web application containing WEB-INF in c:/workspace/myapp/WebRoot
- Create two environment variables:
C:\>set JPDA_ADDRESS=1044 C:\>set JPDA_TRANSPORT=dt_socket
- Now, you can launch Tomcat with these debug options:
TOMCAT_HOME\bin\>catalina jpda start
- Use your IDE to connect to Tomcat through port 1044
See also: FAQ/Developing
How do I debug a Tomcat application when Tomcat is run as a Windows service ?
You can debug the tomcat service by editing the service parameters as follows.
- Launch a command prompt
- Set the proper CATALINA_HOME environment variable : pointing to tomcat home
- Run the following command:
%CATALINA_HOME%\bin\tomcat6w.exe //ES//tomcat6
- Select the Java tab in the properties dialog box,
- Add the following two lines to the Java Options text box:
-Xdebug -Xrunjdwp:transport=dt_socket,address=127.0.0.1:1044,server=y,suspend=n
If you want to allow remote debugging, replace 127.0.0.1 by your server IP address.
- Click on "Apply" and close the dialog by clicking on "OK"
- Restart the Apache Tomcat service
- Use your IDE to connect to Tomcat through port 1044
For IntelliJ IDEA you choose a remote debug target and set transport to "socket" and mode to "attach" , then you specify the host (127.0.0.1) and port (1044)
See also: FAQ/Developing
How do I check whether Tomcat is UP or DOWN? There is no status command
Unfortunately, the org.apache.catalina.util.ServerInfo class does not determine if Tomcat is UP or DOWN. It is possible to do an HTTP GET on the root url but this is not accurate. In my case I sometimes use a regular Apache HTTPd to display a maintainence message while upgrading, etc. and using that method would give false positives.
The correct way to do determine status is to parse the admin port from server.xml and see if we can connect to it. If we can then the Tomcat is UP otherwise it is DOWN.
Here is my code to do this. Consider it public domain and use it as you see fit. Tomcat makes a note of this connection with something like this on the console.
May 1, 2007 5:10:35 PM org.apache.catalina.core.StandardServer await WARNING: StandardServer.await: Invalid command '' received
Ideally this should be incorporated into org.apache.catalina.util.ServerInfo by some committer. In addition to the shutdown command they should add commands like status (UP or DOWN) and uptime in the await method of org.apache.catalina.core.StandardServer
import java.io.File; import java.io.IOException; import java.io.OutputStream; import java.net.Socket; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.ParserConfigurationException; import org.w3c.dom.Document; import org.w3c.dom.Element; import org.xml.sax.SAXException; /** * Check to see if Tomcat is UP/DOWN. * * This parses the server.xml file for the Tomcat admin port and see if * we can connect to it. If we can, then the Tomcat is UP otherwise it * is DOWN * * It is invoked as follows: * java -Dcatalina.base=c:/tomcat-6.0.10 CatalinaStatus * * It can also (optionally) shutdown the Tomcat by adding the shutdown * command line parameter as follows: * * java -Dcatalina.base=c:/tomcat-6.0.10 CatalinaStatus shutdown * * @author Shiraz Kanga <skanga at yahoo.com> */ public class CatalinaStatus { /** * Pathname to the server configuration file. */ protected static String Element docEle = configDom.getDocumentElement (); serverPort = Integer.parseInt (docEle.getAttribute ("port")); serverShutdown = docEle.getAttribute ("shutdown"); } /** * Send the shutdown command to the server */ private static void doShutdown (Socket localSocket) { try { OutputStream outStream = localSocket.getOutputStream (); for (int i = 0; i < serverShutdown.length (); i++) outStream.write (serverShutdown.charAt (i)); outStream.flush (); outStream.close (); } catch (IOException e) { System.out.println ("ERROR: I/O Exception during server shutdown."); e.printStackTrace (); } } }
How do I obtain a thread dump of my running webapp ?
You can only get a thread dump of the entire JVM, not just your webapp. This shouldn't be a big deal, but should be made clear: you are getting a dump of all JVM threads, not just those "for your application", whatever that means.
Getting a thread dump depends a lot on your environment. Please choose the section below that matches your environment best. The more universal and convenient options are presented first, while the more difficult ones or those for specific setups are provided later. Generally, you should start at the top of the list and work your way down until you find a technique that works for you.
If you are.
If you are running on *NIX
Send a SIGQUIT to the process. The thread dump will be sent to stdout which is likely to be redirected to CATALINA_BASE/logs/catalina.out.
To send a SIGQUIT, use kill -3 <pid> from the command line.
If you are running. | https://wiki.apache.org/tomcat/HowTo | CC-MAIN-2018-09 | refinedweb | 5,075 | 58.48 |
I have a list of lists composed of tuples representing blocks of military times:
[[(1405, 1525)],[(1405,1455),(1605,1655)],[(1505,1555),(1405,1455),(1305,1355)]]
I have a function to compare the times of two tuples:
def doesOverlap(tuple1, tuple2):
#tuples each represent times for courses
#the 0 index for each tuple is a start time
#the 1 index is an end time
A=int(tuple1[0])
B=int(tuple1[1])
C=int(tuple2[0])
D=int(tuple2[1])
if A<B and B<=C and C<=D:
print(False, (A,B), (B,C))
elif C<D and D<=A and A<B:
print(False, (A,B), (B,C))
else:
print(True, (A,B), (B,C))
I need to compare the times from the nested list such that the first tuple in the first list compares to the first tuple of the second using doesOverlap. If the function returns True, the first tuple of the first list should be compared to the second tuple of the second list, and so on. If the function returns False, I need to compare the first tuple from the third list to the tuples that returned False.
I'm having trouble figuring how to call the function in that order. Any suggestions?
EDIT:
Here's the exact homework problem:
This method is the first of the two greedy scheduling algorithms you will write. With this particular algorithm, you will look through your list of classes and schedule them in order from the class with the least amount of possible meeting times to the class with the most number of possible meeting times. When you find the course (which you haven’t already scheduled) with the least number of listed meeting times, start at the beginning of the list of times for that course and check each value sequentially until you find one which does not conflict with courses you have already scheduled. When you find this time, add it to the schedule and move onto the course with the next lowest number of meeting times. Continue this process until you have attempted to add a time for every course.
Once you are finished, return the resulting list which contains the schedule.
Note that you should not modify the Dictionary which contains all of your courses and meeting times during the course of running this method.
I need to call doesOverlap from a separate function, the function described above.
Did try to understand the problem and try something.
It only prints if there is an overlap of time with successive data points. Also I added assert to ensure that end is always greater than start, else data is erroneous. Made it verbose to just illustrate it well.
dataPoints = [[(1405, 1525)],[(1405,1455),(1605,1655)],[(1505,1555),(1405,1455),(1305,1355)]] # dataPoints[0] # First list # dataPoints[1] # Second list def doesOverlap(input_time, compare_list): for time_el in input_time: start, end = time_el[0], time_el[1] # assert that end is greater than start assert end > start for compare_time_data in compare_list: # assert that end is greater than start start_to_compare, end_to_compare = compare_time_data[0], compare_time_data[1] assert end_to_compare > start_to_compare # After sanitation compare for overlap # Logic: Overlap can only occur if start time less than the end time if we are not woried about AM, PM and Dates if start_to_compare < end: print 'True', time_el, compare_time_data else: print 'False', time_el, compare_time_data doesOverlap(dataPoints[0], dataPoints[1])
Output:
True (1405, 1525) (1405, 1455) False (1405, 1525) (1605, 1655)
I think you are probably being too specific in your question; if you give more details about the problem, we can probably find a better solution that avoids this slightly icky iteration.
That said, I don't think your problem as stated makes sense. If I have understood it correctly, you have three lists (
L1,
L2,
L3 say). You are only interested in the first tuple of
L1, so call that
initial. Then you want to compare
initial to each of the tuples of
L2, recording which of them return
False. Finally, you want to compare the first tuple of the
L3 to the recorded tuples?
I think that's probably not what you meant, because you haven't stated the iteration protocol very clearly. Still, on the off-chance that it actually was, here's some code.
([initial, *_], L2, [final, *_]) = [<redacted>] [doesOverlap(final, b) for b in (a for a in L2 if not doesOverlap(initial, a))]
Well, I don't really want to do your homework for you, but here's my main piece of advice first: write the algorithm in English before you start to implement it. You are getting bogged down in messy details that you don't need to worry about.
Right, code. You first need to sort the list of classes by number of meeting times. Then;
For each class in the list of classes:
For each possible time of the class can be:
If that time overlaps:
Skip it.
If not, add it to the schedule and skip the rest of the times.
I do not understand your comparisions, so I made code which checks the ordering of times and after order of times is established, sees if the time periods overlap by comparing finishing time of first to start time of second. In that case my function returns True, value (instead of just printing out) of periods combined and the overlap time period, in other case my function returns False and the times in corrected order of occuring and beginning time first. I do not know if this is something you want to do. But that is my logic which you can code like I did. Here is sample of my programs output:
Given start: (1300, 1345) Given end: (1525, 1345)
Separate periods (1300, 1345) and (1345, 1525)
Given start: (1405, 1455) Given end: (1545, 1454)
Combined period (1405, 1545), overlap (1454, 1455)
Given start: (1305, 1405) Given end: (1403, 1455)
Combined period (1305, 1455), overlap (1403, 1405)
Given start: (1305, 1455) Given end: (1505, 1555)
Separate periods (1305, 1455) and (1505, 1555) | http://m.dlxedu.com/m/askdetail/3/78b27c9c817b4df8584d607383c7e92c.html | CC-MAIN-2018-22 | refinedweb | 1,007 | 60.48 |
word of warning. This article will look at some technical steps you can take to improve your (legacy) code. But there is often more to it than just technical interventions. A software team must get enough time and opportunities to improve the existing code, without adding new features. For the best results, they need support from management and possibly the end-users.
Another thing I’d like to point out is that with "legacy code" I mean code that feels like it’s a mess, is unreadable, unmaintainable, riddled with bugs, or any combination of these problems.
First, let’s refrain from jumping right in and hacking away. Let’s get an understanding of what it is we want to achieve.
We generally want to improve our code in some way. Examples are improving readability, just getting it under test, improving performance, removing unused code, etc. In many cases, this involves a series of refactoring, not just a single take. You’ll often only see certain improvements after you’ve done others. So, it’s an iterative process.
We’re going to be messing with code that we find difficult to understand. This also means that bugs are easily introduced and things are easily broken. So before we do anything, we should put some effort in writing automated tests.
This will create a safety net for later changes. If we’re already unsure about the existing code, making significant changes will scare us even more. Even if it’s for the better. With a good test suite, we’ll feel more confident to make changes.
If all the tests continue to run successfully, we’re still not 100% sure that we haven’t broken anything. But we have reduced the chances of regression bugs when compared to making changes without automated tests.
Try not to change the code too much for now. We want our tests first, then start making changes. Our tests are there to verify that our changes don’t alter the existing behaviour. But depending on the code you’re working with, adding tests will be easy, hard or near impossible. If code is tightly coupled to other pieces of code, it will be much harder to run it in isolation in a test.
There are several ways to deal with this.
Let me explain the test pyramid first. The test pyramid is traditionally depicted like this:
What it means is that you should have a lot of unit tests, a reasonable amount of integration tests and a few UI or end-to-end tests. The reason is that UI tests and end-to-end tests tend to break easily and be harder to maintain. They also take longer to execute.
Unit tests on the other hand are small, easy to read, easy to change and usually run fast. I’ve experienced lengthy discussions on what a unit test is, what it is not, and how it differs from an integration test. But there’s no real answer. That’s why this is a better way to talk about the test pyramid:
The more components you’re testing in a single test, the higher up the pyramid you are. So you should have more tests that test a single component than tests that test 3 or 4 components together. And you should have even less that test 10 components and again less that test the whole application.
So that’s the theory. In legacy code, things can tend to be a bit different in my experience. One technique I’ve successfully applied is to actually start with end-to-end tests to get the application under test. While the team and I added more end-to-end tests, we started refactoring more, which allowed us to write more unit tests. We worked our way down the pyramid instead of up, so to speak.
Another way to go about this, is to start by adding Dependency Injection. This does mean changing code before you write your tests, but the change is minimal. Take this piece of code for example:
public class AlertService
{
public void AlertForError()
{
var client = new AcmeMailClient();
var address = "admin@example.com";
var body = "Something went wrong...";
client.SendMail(address, body);
}
}
It’s a simple example to explain our case. We have an ASP.NET Controller that accepts POST requests when certain errors occur and then sends out an email to an admin.
We can’t write a unit test for this, because we’re creating a new instance of the AcmeMailClient. If we run this in a test, it will create this instance as well, which will result in a real call to the Acme mail service and we don’t want that in our tests.
Luckily, this is easily fixed in two steps. First, we’ll extract an interface from our AcmeMailClient class:
public interface IMailClient
{
void SendMail(string address, string body);
}
Our AcmeMailClient will have to implement this interface:
public class AcmeMailClient : IMailClient
{
public void SendMail(string address, string body)
{
// send mail here
}
}
The second step is to accept an instance of this interface in the constructor of the class we want to test:
public class AlertService
{
private readonly IMailClient _mailClient;
public AlertService(IMailClient mailClient)
{
_mailClient = mailClient;
}
public void AlertForError()
{
var address = "admin@example.com";
var body = "Something went wrong...";
_mailClient.SendMail(address, body);
}
}
The calling code is now responsible for creating the new instance. For example, in a simple ASP.NET Controller:
[ApiController]
[Route("[controller]")]
public class WebhookController : ControllerBase
{
[HttpPost]
public void Notify()
{
var mailClient = new AcmeMailClient();
var alertService = new AlertService(mailClient);
alertService.AlertForError();
}
}
The code we want to test doesn’t care about the specific implementation or where the instance was created or by whom. This allows us to write a test with a fake implementation. We’ll be using Moq () to create our fake instance, but you could use one of the many other mocking libraries for .NET like Rhino.Mocks (), NSubstitute () or FakeItEasy (). For our tests, I’ve chosen xUnit.net () but again, there are other options like MSTest () or NUnit ().
public class AlertServiceTests
{
[Fact]
public void AlertForError_ShouldSendEmail()
{
var mockMailClient = new Mock<IMailClient>();
var alertService = new AlertService(mockMailClient.Object);
alertService.AlertForError();
mockMailClient.Verify(x => x.SendMail("admin@example.com", "Something went wrong..."));
}
}
Of course, you could say that we’ve just moved the problem elsewhere. Yes and no. Yes, because that is exactly all we did. But no, because it allowed us to write tests for this component. If you repeat this throughout your code, you’ll eventually have moved the constructor calls all the way up the call chain, to (more or less) the beginning of your application. This could mean you end up with this ugly (pseudo)code:
IService a = new A(new B(new C(), new D(), new E(new F(new G))));
But don’t panic. This was already happening before. It’s just very visible now. Now you can add an Inversion of Control container to do all the instantiating for you. Check out NuGet packages like Autofac (), Ninject () or Microsoft.Extensions.DependencyInjection ().
I won’t go into Dependency Injection too deeply, but it will allow you to pull apart tightly coupled pieces of code and put them under test. In your tests you would pass in fake implementations of the relevant interfaces.
But in your production code, this is what it could look like:
IService a = kernel.Get<IService>();
As long as you configured the Inversion of Control container correctly (i.e. defining which class should be used for which interface), this code will give you a new instance of the “A” class with all dependencies.
The important thing to keep in mind, is that we’re adding this so that we can get our code under test. Adding Dependency Injection makes it easier to write unit tests.
If you have access to developers that have worked with the code in the past, talk to them. They are an invaluable resource of knowledge. They might know the ins and outs of the project, the weird pieces of code and why they are the way they are. They may be able to tell you what to look out for, what the pitfalls of the project are. They’ll tell you things they’ve tried that didn’t work and things that did work. Even if they can’t tell you everything, they should be able to tell you something at least.
Make sure you don’t insult them. They wrote the code to the best of their ability, with the constraints that were present at the time (knowledge, time, budget, etc). You’re not there to judge. You just want to extract as much knowledge about the code as you can.
Another person (or group of persons) you will have to talk to is management. Both your direct managers or team leads as well as upper management. They have to know you’re going to invest time and effort into this and that things may break along the way (but often they’re breaking constantly anyway). They have to support you in this or the whole journey will be very frustrating.
Legacy code is notoriously unstable. Things are often breaking, and teams find themselves spending too much time fixing bugs and keeping the system up and running.
That is why you should add logging, monitoring and alerts to your application.
When you add logging, be sure that it makes sense. I recommend using a library that supports structured logging. Serilog () introduced structured logging to .NET but most popular logging frameworks should support it by now.
In many legacy applications, we log simple pieces of text:
_logger.Info($"Received HTTP code {response.StatusCode} after {response.Elapsed} ms");
This would give us the following text in our log:
"Received HTTP code 200 after 104 ms"
If you’re logging to a text file, that’s basically the best you can do. But if you’re logging to a cloud platform for example, you can use structured logging to improve your search capabilities. For this, we’ll use slightly different code:
_logger.Info("Received HTTP code {code} after {time} ms", response.StatusCode, response.Elapsed);
The above piece of code uses NLog () and will log the same piece of text, but in platforms that support it, it will include a JSON document:
{
"code": 200,
"time": 104
}
A good logging platform will then allow you to easily search all log lines where the code equals 200 and the time is greater than 100 for example. This is a lot trickier with simple lines of text. Structured logging will even serialize entire objects, giving you this for example:
{
"code": 200,
"time": 104,
"request": {
"url": "",
"method": "POST",
"body": {
"foo": "bar"
}
}
}
Combined with decent log messages, structured logging allows you to quickly find the relevant log lines when something went wrong, and then understand why it went wrong. The log messages should tell you what the application was doing. The properties you added to the log event will tell you what the relevant data looked like at the time.
If it’s not yet possible to add structured logging, at least make sure you have decent logging that is useful for you. Add the necessary data to your log messages so that you can debug any issues that occurred.
Legacy code is typically prone to obscure bugs and being able to find and fix them quickly greatly reduces frustration.
Any modern system should be monitored, to see if it is running well. Our legacy systems are often no longer "modern" so all the more reason to monitor them closely. A modern monitoring solution will allow you to automatically keep an eye on several parameters that interest you:
You should be able to see the current state of the application easily, usually in some dashboard with graphs.
Once you set up monitoring, you can automate even further. With alerts, you can automatically be notified if things are going wrong. For example, you could get an email when your API is returning too many HTTP 500 status codes. You can then check logs to see what is going wrong.
In some cases, you might be able to fix it before a customer or end-user even notices! A good monitoring solution keeps an eye on your application and lets you know if something is out of the ordinary. This allows you to focus on other important tasks, like adding new features and improving the existing code.
Just like your tests, monitoring and alerting is a safety net. A bug might still have made it through all your automated and manual tests. Monitoring is another way of catching issues as early as possible.
With all your safety nets in play, and your team now enthusiastically refactoring the legacy code, it’s interesting to track your progress. While strictly not necessary to refactor a legacy project and get it into a better state, it is useful to see how you’re doing and where you should be focussing your efforts.
There are many metrics out there that you can use. Here are some useful ones.
Code coverage indicates how much of your code is covered by tests. People tend to focus on the target number, for example: "we need 80% code coverage." While that is good, that makes little sense in a legacy application that has little to no tests. What I propose then, is to start adding tests and move the target up as you go. For example, once you have 17% code coverage, set the minimum percentage to 15. Once you have 22, set it to 20. This way, you avoid dropping back below a certain threshold and you move the threshold up every time it’s possible. After a while, you can reach a point that you’re happy with.
But apart from the specific number, code coverage is also handy to see which parts of the code you’ve covered already. Before I start refactoring a piece of code, I check to see if it’s covered by tests. If it isn’t, I need to write some tests first. Once I have the functionality covered, only then do I start changing the code.
A good code coverage tool should be able to show you your source code files with an indication of what is covered by tests and what isn’t. For our simple example, I’ve used Coverlet () to generate the code coverage results and ReportGenerator () to generate a report.
As always, there are alternatives if you want. Visual Studio Enterprise has code coverage built in. Other options are NCover (), OpenCover () or dotCover ().
I added Coverlet as a NuGet package to my unit test assembly. Then I installed ReportGenerator as a global .NET tool:
dotnet tool install -g dotnet-reportgenerator-globaltool
With that, I can run my tests from the command-line and collect the code coverage results:
dotnet test --collect:"XPlat Code Coverage"
Then I can generate the report:
reportgenerator -reports:".\MyApp.Tests\TestResults\50054626-428d-4c57-86a7-045e118c2946\coverage.cobertura.xml\coverage.cobertura.xml" -targetDir:" .\MyApp.Tests\TestResults\50054626-428d-4c57-86a7-045e118c2946"
The final report looks like this:
You can go into separate files and even see the lines that were covered:
Static code analysis will analyse your source code and give you a report of potential quality issues. Many of these tools will even help you fix the issue.
Possible solutions are Microsoft’s FxCop (), ReSharper (), SonarQube (), and NDepend ().
Of these, NDepend is the most complete for .NET projects. It provides a vast array of metrics to measure the quality of your code. However, it isn’t free which can be a show-stopper for some teams and organizations.
Another cause and manifestation of legacy code is lack of team conventions. If you don’t have them already, you really should get together with the team and agree on some things.
It’s very jarring to navigate through code that is using different conventions throughout. Things like naming (variables, methods, classes, projects, etc) and indentation come to mind, but even architectural choices can be agreed upon.
Certain conventions can be enforced by tools like StyleCop for .NET () or ESLint for JavaScript and TypeScript (). By automating the enforcement, you can avoid pointless discussions or awkward code reviews where it feels like you’re nit-picking.
Conventions can evolve of course. When someone makes a good case, or the team decides a certain convention is no longer something they want to follow, then it’s OK to make the change. If conventions aren’t allowed to change, the code will start to feel like legacy code even more.
Tackling legacy code is a team effort. So you’ll have to make sure everyone is on board. This means everyone should be in favour of fixing the issues the code has, but also that the team should agree on how they will collaborate.
How will we react to bugs? How will we deploy our software? What release schedule will we follow? There are many questions that need to be answered.
I recommend automating deployments as much as possible. This includes configuration changes, database changes, etc. This means you’ll have to put all this in source control, which only has benefits.
Once you’ve automated your deployments, you can start releasing very regularly. I would aim for weekly releases at the minimum. Releasing faster makes it easier to pinpoint a bug. Because you haven’t made a myriad of changes, the bug can only be found in a few places. If you do quarterly big-bang releases, there have been so many changes that it will be harder to find which change caused the bug. The time between the code change and the release has also been longer, making it difficult to remember your reasoning when you were changing the code.
It can certainly feel like a legacy project is unsalvageable and things are only getting worse. But it is possible to change course. It will take time and work, and what I’ve covered may seem like a lot of work. However, if you do it step by step, using the tips I laid out here, you should be able to improve the situation you’re so frustrated with now. It certainly beats doing nothing about! | https://www.dotnetcurry.com/patterns-practices/tackling-legacy-code-tips | CC-MAIN-2022-05 | refinedweb | 3,058 | 73.17 |
Checking for hitTests - DeathAngle, Jun 12, 2014 10:48 AM
It's me again with my daily questions.
So I am trying to add a scoring system to my game. I have these coins that are added to the stage using code (the class name for them is Coin) and my character has an instance name of "player"
So my question is how do I check for hitTests between the player and the coins?
1. Re: Checking for hitTests - Ned Murphy, Jun 12, 2014 12:20 PM (in response to DeathAngle)
It depends on which version of ActionScript you are using. If it's AS2 it could be something like...
if(player.hitTest(coin)){...
If it is AS3 it could be something like...
if(player.hitTestObject(coin)){...
Have you tried using Google as a reference tool for your daily questions? If you were to search using terms like "AS# hitTest tutorial" (replace # with 2 or 3) you'd probably find all you need.
2. Re: Checking for hitTests - DeathAngle, Jun 12, 2014 12:29 PM (in response to Ned Murphy)
When I do
if(player.hitTestObject(Coin)) {
trace("Hello");
}
I get error 1067 "Implicit coercion of a value of type Class to an unrelated type flash.display:DisplayObject."
Now what the hell that means beats me. I am trying to find the answer on Google right now, but I also posted on this forum just to see which is quicker. So far, no luck with Google.
Again "player" is the instance name for my main character and Coin is the class/the Coin.as file that is suppose to dynamically add the coins to the game
3. Re: Checking for hitTestsNed Murphy Jun 12, 2014 12:32 PM (in response to DeathAngle)1 person found this helpful
If Coin is the class name then the error should be clear for you to understand. You use instance names (or references to the instances) to target objects in code, not their class names.
4. Re: Checking for hitTestsDeathAngle Jun 12, 2014 12:59 PM (in response to Ned Murphy)
I realize that but im not sure how to give it an instance name because it isint actually on the stage until it gets placed by code
here is my Coin.as file
package {
import flash.display.MovieClip;
import flash.events.Event;
public class Coin extends MovieClip {
public function Coin(xLocation:int, yLocation:int) {
// constructor code
x = xLocation;
y = yLocation;
}
public function removeSelf():void {
trace("remove coin");
this.parent.removeChild(this);
}
}
}
And then on the timeline in the actions layer I have the coordinates to place the coin, it gets placed correctly I just don't know how to make it registor for hits (AS3)
5. Re: Checking for hitTestsNed Murphy Jun 12, 2014 1:46 PM (in response to DeathAngle)
Where do you plan to do the hitTest,in your main file? If so, then the code you'll need to use will depend on how you add the coins. Usually if there will be a number of them then you would fill an array with references to them and loop thru the array to check for hitTests. That might be something like the following...
var coinArray = new Array(); // somewhere more global in your main code
// then in the code where you add a new Coin instance
var coin:Coin = new Coin();
coinArray.push(coin);
then your hitTests would use the array...
for(var i:int=0; i<coinArray.length; i++){
if(player.hitTestObject(coinArray[i]){...
etc....
}
6. Re: Checking for hitTestsDeathAngle Jun 12, 2014 2:35 PM (in response to Ned Murphy)
Thankyou very much, it works perfect now, I have almost 500 lines of code now so keeping it all organized is starting to be a pain in the ***
7. Re: Checking for hitTestsNed Murphy Jun 12, 2014 2:59 PM (in response to DeathAngle)
You're welcome | https://forums.adobe.com/thread/1495297 | CC-MAIN-2018-13 | refinedweb | 644 | 67.79 |
I’ve been meaning to write something up on this for quite a while. It recently struck me that there still wasn’t a whole lot of good material on this out there already. So I figured I’d throw something together.
We’ll start by looking at the basic mechanics of the Sound object, how to code it up, and create some random noise. Later, we’ll start generating some real wave forms and start mixing them together, etc.
Diving right in
Flash 10 has the ability to synthesize sounds. Actually, there was a hack that could be used in Flash 9 to do the same thing, but it became standardized in 10.
Here’s how it works. You create a new Sound object and add an event listener for the SAMPLE_DATA event (SampleDataEvent.SAMPLE_DATA). This event will fire when there is no more sound data for the Sound to play. Then you start the sound playing.
var sound:Sound = new Sound(); sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData); sound.play();
At this point, because you have not loaded any actual sound, such as an MP3, WAV, etc. or attached it to any streaming sound data, there is nothing to play and the SAMPLE_DATA event will fire right away. So we’ll need that handler function:
function onSampleData(event:SampleDataEvent):void { }
Our goal here is to give the Sound object some more sound data to play. So how do we do that? Well, the SampleDataEvent that gets passed to this function has a data property, which is a ByteArray. We need to fill that ByteArray with some values that represent some sound to play. We do that using the ByteArray.writeFloat method. Generally you want to write values from –1.0 to 1.0 in there. Each float value you write in there is known as a sample. Hence the “SampleDataEvent”. How many samples should you write? Generally between 2048 and 8192.
OK, that’s a big range of values. What’s best? Well, if you stick to a low number like 2048, the Sound will rip through those values very quickly and another SAMPLE_DATA event will fire very quickly, requiring you to fill it up again. If you use a larger number like 8192, the Sound will take 4 times as long to work through those values and thus you’ll be running your event handler function 4 times less often.
So more samples can mean better performance. However, if you have dynamically generated sounds, more samples means more latency. Latency is the time between some change in the UI or program and when that results in a change in the actual sound heard. For example, say you want to change from a 400 hz tone to a 800 hz tone when a user presses a button. The user presses the button, but the Sound has 8000 samples of this 400 hz tone in the buffer, and will continue to play them until they are gone. Only then will it call the SAMPLE_DATA event handler and ask for more data. This is the only point where you can change the tone to 800 hz. Thus, the user may notice a slight lag between when he pressed the button and when the tone changed. If you use smaller numbers of samples – 2048 – the latency or lag will be shorter and less noticeable.
For now, let’s just generate some noise. We’ll write 2048 samples of random values from –1.0 to 1.0. One thing you need to know first is that you’ll actually be writing twice as many floats. For each sample you need to write a value for the left channel and a value for the right channel. Here’s the whole program:
import flash.media.Sound;
import flash.events.SampleDataEvent;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();
function onSampleData(event:SampleDataEvent):void
{
for(var i:int = 0; i < 2048; i++) { var sample:Number = Math.random() * 2.0 - 1.0; // -1 to 1 event.data.writeFloat(sample); // left event.data.writeFloat(sample); // right } }[/as3]
If you run that, you should hear some fuzzy static like a radio tuned between stations. Note that we are generating a single sample and using that same value for left and right. Because both channels have exactly the same value for each sample, we’ve generated monophonic sound. If we want stereo noise, we could do something like this: event.data.writeFloat(sampleB); // right } }[/as3]
Here we are writing a different random value for each channel, each sample. Running this, especially using headphones, you should notice a bit more “space” in the noise. It’s subtle and may be hard to discern between runs of the program, so let’s alter it so we can switch quickly.
import flash.media.Sound;
import flash.events.SampleDataEvent;
import flash.events.MouseEvent;
var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();
var mono:Boolean = true;
stage.addEventListener(MouseEvent.CLICK, onClick);
function onClick(event:MouseEvent):void
{
mono = !mono;
} if(mono) { event.data.writeFloat(sampleA); // left again } else { event.data.writeFloat(sampleB); // right } } }[/as3]
Here we have a Boolean variable, mono, that toggles true/false on a mouse click. If true, we write sampleA to the left and right channels. If mono is not true, then we write sampleA to the left channel and sampleB to the right channel. Run this and click the mouse. Again, the change is subtle but you should be able to notice it.
To see, or rather, to hear, the results of latency, change the 2048 in the for loop to 8192. Now when you click, you’ll notice a significant delay in the time between the click and the change from mono to stereo or vice versa.
One other note about the number of samples. I said, “generally” to use between 2048 and 8192. The fact is if you try to use more than 8192, you’ll get a run time error saying one of the parameters is invalid. so 8192 is a pretty hard limit. You can use less than 2048, but if you do, what happens is that the Sound object will work through those samples and then consider the sound is complete. It will not generate another SAMPLE_DATA event when it is done. Instead, it will generate a COMPLETE event. So if you want the sound to keep playing, you need to keep it supplied with at least 2048 samples at all times.
In the next installment, we’ll start creating some simple waves.
[…] interface. So yes, realtime sound synthesis can be done and it’s actually quite easy to get started. It takes just a few lines of code to be able to write data directly to your soundcard and when you […]
Nice, can’t wait till the next one!
One thing, maybe it’s just my mac but when i copy/paste your code to the the Flash code editor, it retains the line numbers, that’s super annoying. Any way around this?
Maybe you could use this, they supposedly have a wordpress plugin.
[…] Sound Synthesis in AS3 Part I – The Basics, Noise […]
click on the gray bar that says “Plain Text”. Then, select, copy, paste.
I didn’t realize you could change the number of samples (or why you might want to).. I thought you had to use 8192. Good to Know 🙂
[…] Sound Synthesis in AS3 Part I – The Basics, Noise […]
Top quality content, thanks for sharing.
[…] Sound Synthesis in AS3 Part I – The Basics, Noise | BIT-101 Blog […]
fantasy! I can’t wait to translate into Chinese!
[…] just one of the many advantages of the Flash Platfomr.Cool Flash Libraries:HypeFrameworkSonoport Bit-101 sound synthesisCool JavaScript Libraries:PhoneGap Lawnchair Processing […]
[…] First off, you need to read up on Keith Peter’s blog posts on sound. This will give you a very good introduction on sound in general and get you going on some simple, yet very useful code examples. Here is a link to the first part of four, read them all. […]
[…] Class – AS3 Sound Synthesis Part I Part II Part III Part […]
[…] Class – AS3 Sound Synthesis Part I Part II Part III Part […]
Very thanks for you, this is a very good article! support you!
[…] Part I […]
[…] Class – AS3 Sound SynthesisPart I II III […]
[…] Class – AS3 Sound Synthesis Part I Part II Part III Part […] | http://www.bit-101.com/blog/?p=2660 | CC-MAIN-2017-17 | refinedweb | 1,400 | 75.2 |
Find many duplicate rules in memory by using iptables_manager
Bug Description
I installed VPNaas In my devstack. I find many duplicate iptables rules in memory. The rule is ' 2015-04-23 10:55:15.380 ERROR neutron.
You've reported this as a private security vulnerability, which implies that you believe it represents an exploitable condition in the software. Please clarify the way in which you would expect a malicious party to take advantage of this bug.
Please provide a part of iptables output showing duplicate rules
[Expired for neutron because there has been no activity for 60 days.]
2018-08-27 10:07:32.989 3258 INFO neutron.
2018-08-27 10:07:32.990 3258 INFO neutron.
2018-08-27 10:07:32.990 3258 INFO neutron.
2018-08-27 10:07:32.990 3258 INFO neutron.
2018-08-27 10:07:32.990 3258 INFO neutron.
2018-08-27 10:07:32.991 3258 INFO neutron.
def _weed_out_
# remove any rules or chains from the filter that were slated
# for removal
if line.startswith
chain = line[1:]
if chain in table.remove_
return False
else:
if line in table.remove_rules:
# Leave it alone
return True
You can see that when you get the iptables rule name in the code “line[1:]”,
there is a count after the chain name, and the count value changes,
which invalidates the judgment
There is append a IptablesRule instance into"self.rules" when I add a iptables rule into memory in iptables_
manager. py. If memory has already exists this rule? Does the iptables_manager weed out it? The code writes "for rule in rules" in _modify_rules function. Why does check the rules exists in memory first? | https://bugs.launchpad.net/neutron/+bug/1447651 | CC-MAIN-2019-04 | refinedweb | 284 | 68.77 |
Asked by:
ConfigMgr 1602 error 0x87D00666(-2016410010) while installing Update
Dear Folks,
As i have recently upgraded to Config Mgr 1602 and it went very smooth upgrade. But after a few days found annoying issue regarding updates. PC's esp with Windows 10 are not installing updates. Every time trying to manual install it prompts this error as shown in snapshot
As per error description,
But i have not configured any service windows on any collection. Then which service window is preventing from updates to instal. Furthermore, no error found in any logs file. Almost gone through all files.
I hope raised concern is clear. Waiting for your help and suggestion
Thank You
REGARDS DANISH DANIE
- Edited by Danie Danish Thursday, June 23, 2016 4:17 AM
Question
All replies
- Never seen that happening. What does the deployment Status on the monitoring node tell?
Torsten Meringer |
Dear Sir,
Does this issue happen to all deployments of updates no matter what the targeted collection is? Any information in ServiceWindowManager.log indicating the service windows?
Best regards
Frank
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com
Torsten,
In Deployment stat it shows downloaded updates.
Frank,
This happened only in Windows 10 updates deployment and one Windows 7 PC is identified. In ServiceWindowManager there is no any information related to the service window.
REGARDS DANISH DANIE
What are the business hours set to on the client?
Does the client have the option selected in Software Center on the Options page under Computer Maintenance that says "Automatically install or uninstall required software and restart the computer only outside of specified business hours"?
Jason | | @jasonsandys
No business hours are set to on the clients.
Its being very strange for me too, neither showing any proper error in logs nor its deploying updates though deadline time is passed.
Created a new deployment and deployed again, reinstalled the sccm client but no success.
Opened the case at Microsoft Support, let see what's figured out.
REGARDS DANISH DANIE
- Proposed as answer by Frank DongMicrosoft contingent staff, Moderator Sunday, July 03, 2016 7:40 AM
- Marked as answer by Frank DongMicrosoft contingent staff, Moderator Thursday, July 07, 2016 11:27 AM
- Unmarked as answer by Danie Danish Friday, July 15, 2016 4:34 AM
- Unproposed as answer by Danie Danish Friday, July 15, 2016 4:34 AM
Hi Danie,
I am having the same issue as you. It just started either at the end of last week around June 23 as I have machine built before that patched and good to go.
If you get an answer from Microsoft, I would appreciate it if you could share it. I will do the same.
Thanks,
Paul
Sorry guys for bit a late.
As i was not having any premier support of microsoft and due to this support person didnt gave me any Root Cause Analysis. But issue is resolved now
What we did we targeted single workstation and removed its membership from all collections except one and deplyed package on it and installation went successful.
Strange but true :)
Well, now will check one by one collection at a time to see which collection and client settings was the reason behind, but till now i have not any root cause analysis behind this issue.
Hopeful that you will also find this solution to the issue
Thank You
REGARDS DANISH DANIE
- Edited by Danie Danish Friday, July 15, 2016 4:41 AM
- I also went through each collection until I found the problem one. The one that I found had no client settings attached and was actually created not very long ago compared to other collections. It did not have any properties or settings unique to it from other collections and did not otherwise look like it would have anything "wrong" with it. I ended up deleting that collection and recreating it and the deployment that was showing the error on my devices started to install immediately without issue. I have no idea why that collection was causing the problem
Hi all
I just encountered this problem with my SCCM 1606.
the problem came with the option "all devices are part of the same server group" that was enabled on some collections
Microsoft told me that this feature is broken for now but might be usable with 1610 version.
you can use the query : Gwmi –namespace root\SMS\SITE_<Code Site> -class SMS_Collection [ Where-Object {$_.UseCluster –eq $true}
to identify collections that have this option enabled then disable all.
once they are all unchecked, update computer policies and SU will automagically work again :)
- Proposed as answer by ConfigurationMatt Friday, September 09, 2016 5:12 PM
Thank you for posting this. I've been beating my head against the wall the past couple of days with this same issue. I had really hoped to be able to assign a maintenance order with 1606.
Also, for anyone using the above excellent script. the left bracket should be a pipe.
... SMS_Collection | Where-Object...
Hi,
CM 1606 here. I'm seeing this also - I have a simple 'All Deployments' MW set 10pm - 5am daily, which I can see in Client Center ok. Client settings are default - i.e. default Business Hours and Maintenance setting. I connected to an affected system using Client Center and selected an update to install during BHs (i.e. outside of the MW and non-BH) and it fails with 87D00666 / No service window.
It was my understanding that if a user wants to install the updates during their normal day, they can do this from Software Center and it shouldn't matter if a MW/BH are set since it's user-initiated. Am I wrong?
Thanks
This got us going in the right direction, but the query (even after correcting the bracket issue) didn't return any results. Our PFE provided this SQL query which found 2 collections that had cluster aware patching enabled after we did the upgrade to 1602. Unchecked the box and forced a policy refresh on an affected client and then was able to install updates from Software Center.
Here is the SQL query:
select collections_g.SiteID from CEP_CollectionExtendedProperties join collections_g on CEP_CollectionExtendedProperties.collectionID=collections_g.collectionid where UseCluster = 1
Figured it might help someone else who stumbles across this one from Google.
- Proposed as answer by Will Nimmo Wednesday, September 21, 2016 6:08 PM
Thank you! This resolved the issue for me.
The SQL query identified one collection in my site that had the "All devices are part of the same server group" checkbox selected. After I cleared the checkbox and ran Machine Policy on the affected clients, they were able to install updates from Software Center.
After I copied and pasted the query, it contained one small typo that I needed to fix before it could run successfully. A space was missing between "on" and "CEP_CollectionExtendedProperties", perhaps due to the carriage return at that point in the text above. With that space inserted, the query succeeded.
Thanks again!
- Thank you for this query. I was able to find the one collection that had that setting turned on. Once I de-selected it, the affected clients came to life and installed the deployed software updates. The interesting part is that applications that were deployed to these system installed without issue. Only Software Updates were affected.
Big thanks for this query!! :)
This "Cluster Aware" feature was accidently set in one of our 3000+ collections and "disabled" Software updates installation on 30% of our clients... so on some machines the updates did work and those which got targeted by this collection, it was "blocked". | https://social.technet.microsoft.com/Forums/en-US/86783d86-0e38-4cb4-acf8-6110acc76c0e/configmgr-1602-error-0x87d006662016410010-while-installing-update?forum=configmanagersecurity | CC-MAIN-2018-17 | refinedweb | 1,292 | 62.68 |
This topic explains the architecture and different components of the editing user interface in Optimizely. The user interface is pluggable allowing you to plug-in your own gadgets to the panes to extend editing possibilities.
- How it works
- Architecture
- Server communication
- Editable blocks
- Editing data attributes
- Editing data events
- Refreshing OPE overlays
- Editing context on the client
- Architecture diagram
How it works
The UI framework has a server-side part that is based on ASP.NET MVC and a client-side part that uses the JavaScript library Dojo. The UI also has context-awareness where it is possible to have a component automatically reload when the context is changed, or to display customized content for that item. You also can load Web Forms pages and user controls into components of the user interface to port existing plug-ins built with ASP.NET user interface.
You can edit content in the user interface in the following ways:
- On-page editing. Opens an editor directly on the page to edit simple property types such as short strings, integers and decimal numbers.
- Forms editing. Edit all properties in a form-based editing view. Here you are able to edit properties that are not visible on the page.
This means that the editor can make changes using different modes without losing context.
Autosaving
Changes are automatically saved to the client state and sent to the server to be persisted. To reduce the burden on the server, synchronization between client and server occurs after a few seconds delay. When a user edits a property or content block, the client creates and stores an undo step in a history that lets the user undo and redo changes. The undo and redo steps are available while the user remains editing the page and are lost when the user leaves the page. Any editing changes are sent to the server and are saved even if the user leaves the page or closes the browser.
Architecture
Editing components are organized into two layers.
- UI layer. Most of the Optimizely scripts and CSS files are loaded and most of the user interface and interaction takes place on this layer.
- Content page layer. This layer is inside an iframe and should have very few differences compared to the page that is shown to visitors of the site. This layer contains HTML5 data attributes that are injected in edit view to identify editable blocks. Also, a custom stylesheet is injected into the content pages when editing is enabled, but no other scripts or markup are injected; however, you can choose to have the epi object injected (see Loading the "epi" communication object). When the content page layer is loaded, the UI layer scans the content of the page to find editable nodes and adds event handlers to them that trigger loading of the correct editor.
You can enable editing for a specific area by adding the following attribute to the HTML tag data-epi-property-name="PageName":
<h1 data-<%= CurrentPage.PageName %></h1>
Clicking on a node in the following example loads whichever editor is configured for the Pagename property.
A property can appear several times on a page, so you can edit it in multiple places. By editing a property in one place, it updates any other place where the property is used. You can prevent a property from being edited by disabling it, (but the property updates its content if it is edited elsewhere).
Configuration of editors is done separately and does not have to be added to the HTML even if there are a few other data-attributes that you can use to override default behavior.
When an editor is triggered for a property, an editor factory class creates the editor based on the data attributes and metadata for the property. Optimizely Framework extracts metadata from the following sources to create an editor.
- Extracting metadata attributes from the model class
- Global editor descriptors that can set up rules for a common language runtime type.
- Custom metadata providers that supplies the metadata.
Note: For a description of the Optimizely object editing system, see Editing objects.
You can create page properties from typed model classes and in the administrative user interface. The PageData class assembles metadata using a custom metadata provider that sends the metadata to the client using a RESTful service and is a hierarchical representation of the page and its properties. The following example shows how a property might be described:
- modelType is the name of the CLR type including its namespace.
- uiType is the default client side editor widget.
- customEditorSettings might have information about a non-standard editor type like an inline editor.
- settings is a settings object that will be passed to the constructor of the editor widget.
{ "name" : "PageName", "modelType" : "EPiServer.Core.PropertyString", "uiType" : "dijit.form.ValidationTextBox", "uiPackage" : null, "customEditorSettings" : { "uiWrapperType" : "inline", "uiType" : "epi.cms.widget.ValidationTextBox"}, "layoutType" : null, "defaultValue" : null, "displayName" : "Name", "groupName" : null, "displayOrder" : -81, "settings" : { "label" : "Name", "required" : true, "missingMessage" : "The Name field is required.", "invalidMessage" : "The field Name must be a string with a maximum length of 255.", "regEx" : "^.{0,255}$", "maxLength" : 255 }, "additionalValues" : {}, "properties" : [], "groups" : null,' "selections" : null }
Server communication
The page editing component does the following tasks:
- Creates editable blocks for each node in the page with data attributes.
- Stores page data in the page model.
- Syncs changes using the page model server sync class.
The editable blocks are responsible for editing property data and use a render manager to render property on the page (on either the client side or server side). Server-side rendering uses a queue so that properties are not rendered at the same time. Changing one property can mean that several properties on the page need to be rendered again with different settings.
Editable blocks
Clicking on an editable block creates the editor and an editor wrapper for the block. The editor factory decides which wrapper and editor to use depending on the data attributes for editing and metadata for the property. You can connect to an event in editor factory to change the values at runtime.
Data attributes have higher precedence than the metadata. You can choose to use an editor for a property on a page which is not the standard editor for that property. For inline editing, you must use an editor designed for it; in other cases, the editor is a widget.
In MVC, you can pass the values in as anonymous object to PropertyFor helper method. You should pass in the render settings using parameter additionalViewData while you can pass in the editor settings using parameter editorSettings.
These are the RenderSettings for the PropertyFor anonymous object:
An example of customizing the rendering of string would be:
<p>This article originated in @Html.PropertyFor(x => x.CurrentPage.CountryOfOrigin, new { CustomTag = "span"}).</p>
Editing data attributes for Blocks in Content Areas
When rendering a content area each block needs to have the data-epi-block-id attribute on it for the block to be editable in the content area.
Editing data events
When a property value has been saved, a message is published on the topic "contentSaved", together with an object containing the content link to the content that was just updated. You can listen to this event with:
// Wait for the scripts to run so we can use the 'epi' global object. window.addEventListener("load", function() { epi.subscribe("contentSaved", function(event) { console.log("On Page Edited!"); console.log(event.contentLink); }); });
The event will look like this.
{ "contentLink": "6_164", }
Refreshing OPE overlays
If the DOM changes so that there are new elements with data-epi-property-name, or existing elements change their value to another property name, then the overlays automatically update themselves.
Editing context on the client
It may be necessary to know the current editing context on the client. The epi communication object now contains following properties:
- inEditMode
True in both edit mode and preview mode.
- isEditable
True in edit mode and false in preview mode.
- ready
True if the property isEditable has been initialized. Otherwise, subscribe to epiReady to get those properties as soon as they are available.()); } });
To get the "epi" object you need to have the attribute [RequireClientResources] on your controller, unless you're already inheriting from PageController or ContentController.
Then you’ll need to require the resources in your razor view where you include any other scripts: @Html.RequiredClientResources("Footer")
Architecture diagram
Last updated: Jul 02, 2021 | https://world.optimizely.com/documentation/developer-guides/archive/-net-core-preview/CMS/editing/ | CC-MAIN-2021-43 | refinedweb | 1,400 | 53.92 |
Document ID: 115947
Updated: Sep 27, 2013
Contributed by Cisco TAC Engineers.
This document describes the advanced functions of Dynamic Access Policies (DAP) for remote access VPNs. You can use these advanced functions when you need additional flexibility to match by criteria.
Note: Refer to Important Information on Debug Commands before you use debug commands.
Cisco recommends that you have knowledge of these topics:
This document is not restricted to specific software and hardware versions, but the Adaptive Security Device Manager (ASDM) is required in order to complete the configuration.
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.
Caution: Use DAP advanced custom Lua functions only if the ASDM GUI configuration or the EVAL function does not provide the matching behavior you need. In production deployments, use advanced Lua functions with extreme care and with guidance from Cisco Engineering/Technical Assistance Center (TAC) in order to avoid any unintended behavior with DAP.
If you use DAP for remote access VPNs, you may need the additional flexibility to match by criteria. For example, you can apply a different DAP based on these scenarios:
The Lightweight Directory Access Protocol (LDAP) server can return many attributes that DAP can use in a logical expression.
For an example of these attributes, use the debug dap trace command on the Adaptive Security Appliance (ASA) console.
A distinguished name (DN) of the user is one attribute returned by the LDAP server. The DN implicitly identifies where the user object is located in the directory. For example, if the DN is CN=Joe User,OU=Admins,dc=cisco,dc=com, this user is located in OU=Admins,dc=cisco,dc=com. If all administrators are in this OU (or any container below this level), use this logical expression in order to match on the criteria:
assert(function()
   if ( (type(aaa.ldap.distinguishedName) == "string") and
        (string.find(aaa.ldap.distinguishedName, "OU=Admins,dc=cisco,dc=com$") ~= nil) ) then
     return true
   end
   return false
end)()
In this example, the string.find function allows for a regular expression. The $ at the end of the string anchors this string to the end of the distinguishedName field.
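You can verify the anchor behavior in any standalone Lua interpreter before you use the pattern in DAP. The DN strings in this sketch are hypothetical examples:

local dn = "CN=Joe User,OU=Admins,dc=cisco,dc=com"
-- The $ anchor requires the pattern to match at the end of the string.
print(string.find(dn, "OU=Admins,dc=cisco,dc=com$") ~= nil)                  -- true
print(string.find(dn .. ",OU=Disabled", "OU=Admins,dc=cisco,dc=com$") ~= nil) -- false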
Use the escape character % in your search string in order to escape special characters such as ( ) . % + - * ? [ ^ $. For example, you can escape the - character in this string (OU=Admins,dc=my-domain,dc=com$) as shown in this string (OU=Admins,dc=my%-domain,dc=com$).
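For example, the full distinguishedName expression for a hypothetical domain named my-domain, with the hyphen escaped, looks like this:

assert(function()
   if ( (type(aaa.ldap.distinguishedName) == "string") and
        (string.find(aaa.ldap.distinguishedName, "OU=Admins,dc=my%-domain,dc=com$") ~= nil) ) then
     return true
   end
   return false
end)()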
You can create a similar, basic logical expression for pattern matching of Active Directory (AD) group membership. Because users can be members of multiple groups, DAP parses the response from the LDAP server into separate entries and holds them in a table. In this case, a more advanced function is required in order to:

- Check whether the memberOf attribute is a single string or a table of multiple entries.
- Iterate through each entry in the table and compare it against the pattern.

For example, this expression returns a value of true if the user is a member of any group whose name ends with "-Admin" (substitute the group name pattern that applies to your environment):

assert(function()
   local pattern = "%-Admin$"
   local attribute = aaa.ldap.memberOf
   if ((type(attribute) == "string") and
       (string.find(attribute, pattern) ~= nil)) then
     return true
   elseif (type(attribute) == "table") then
     for k,v in pairs(attribute) do
       if (string.find(v, pattern) ~= nil) then
         return true
       end
     end
   end
   return false
end)()
This function uses DAP with action set to terminate:
(assert(function() local block_connection = true local update_threshold = "150000" --this is the value of lastupdate in seconds for k,v in pairs(endpoint.av) do if (CheckAndMsg(EVAL(v.exists, "EQ", "true", "string") and EVAL (v.lastupdate, "LT", update_threshhold, "integer"),k.." exists; last update is "..string.sub((tonumber(v.lastupdate)/86400), 1, 3).." days",k.." does not exist; last update is "..string.sub((tonumber(v.lastupdate)/86400), 1, 3).." days")) then block_connection = false end end return block_connectionend)())
Upon termination, it displays this message:
Login denied.<AV Name> does not exists; last update is <X> days
These Lua functions check for attributes related to anti-virus, anti-spyware, and firewall packages on the endpoint PC returned by Cisco Secure Desktop (CSD) host scan.
This custom function checks if CSD detects any anti-virus:
assert(function() for k,v in pairs(endpoint.av) do if (EVAL(v.exists, "EQ", "true", "string")) then return true end end return falseend)()
This example demonstrates how DAP can check for an anti-virus installation, check for the last update, and notify the user for remediation. It uses a function similar to that in Check for Anti-Virus Installation.
Set the authentication, authorization, and accounting (AAA) attributes you wish to match. In the advanced field, ensure the AND operation is selected; in the action field, ensure the terminate option is selected. If the user matches the AAA attributes and if the Lua function returns a value of true, DAP is selected, a message appears that explains why the DAP record was displayed, and the user connection is terminated. If the Lua function does not return a value of true, DAP does not match and permits access. In the message box field, enter the message, "No anti-virus program found, please install anti-virus and try again." If the user has an anti-virus package and is below the update days threshold, they are not given a message as indicated by the double quotes in line 7 of this example:
(assert(function() local block_connection = true local update_days = "15" --days local av_lastupdate = update_days*86400 for k,v in pairs(endpoint.av) do if (CheckAndMsg(EVAL(v.exists, "EQ", "true", "string") and EVAL(v.lastupdate, "LT", av_lastupdate, "integer"),"",k.." exists; but last update is greater than 15 days old. Expecting under 15 days.")) then block_connection = false elseif (EVAL(v.exists, "NE", "true", "string")) then block_connection = true end end return block_connectionend)())
If the user has Norton anti-virus, but the last update is greater than 15 days, this sample message appears:
NortonAV exists; but last update is greater than 15 days old. Expecting under 15 days.
If the EVAL does not match, it goes to the next function, matches, and returns a value of true. Since there is no CheckAndMsg associated with the second function, it uses the DAP message text:
No anti-virus program found, please install anti-virus and try again.
In summary, DAP looks for a user AAA and endpoint attribute in order to match DAP. If DAP matches, the user is terminated with a message. The endpoint match is a result of a Lua EVAL that returns true or false to DAP. A true matches and denies the connection. A false does not match and does permit the connection.
This custom function checks if CSD detects anti-spyware:
assert(function() for k,v in pairs(endpoint.as) do if (EVAL(v.exists, "EQ", "true", "string")) then return true end end return falseend)()
This custom function checks if CSD detects a firewall:
assert(function() for k,v in pairs(endpoint.fw) do if (EVAL(v.exists, "EQ", "true", "string")) then return true end end return falseend)()
This function returns a true if an anti-virus, anti-spyware or a firewall package is found:
assert(function() function check(antix) if (type(antix) == "table") then for k,v in pairs(antix) do if (EVAL(v.exists, "EQ", "true", "string")) then return true end end end return false end return (check(endpoint.av) or check(endpoint.fw) or check(endpoint.as))end)()
The only difference between this function and the function in Check for Anti-Spyware Installation is that 'not' precedes the assert.
not assert(function() for k,v in pairs(endpoint.as) do if (EVAL(v.exists, "EQ", "true", "string")) then return true end end return falseend)()
This example returns true if an anti-virus and a firewall are found and if the last update of the anti-virus is not greater than 30 days:
assert(function() function checkav(antix) if (type(antix) == "table") then for k,v in pairs(antix) do if (EVAL(v.activescan, "EQ", "ok", "string") and EVAL (v.lastupdate, "LT", "2592000", "integer")) then return true end end end return false end function checkfw(antix) if (type(antix) == "table") then for k,v in pairs(antix) do if (EVAL(v.enabled, "EQ", "ok", "string")) then return true end end end return false end return (checkav(endpoint.av) and checkfw(endpoint.fw))end)()
Because the firewall does not have a lastupdate value to return, it has a separate function.
This section describes functions that use regex expressions in order to match certain attributes and determine validity of the host machine. These regex capabilities have been tested and are valid:
This function uses regular expression matching in order to see if the hotfix list contains a pattern. In this example, Cisco Secure Desktop returns all hotfixes on the endpoint PC; if there is an instance of KB944, the DAP policy matches and is enforced.
assert(function () local pattern = "KB944" local true_on_match = true local match = false for k,v in pairs(endpoint.os.hotfix) do print(k) match = string.find(k, pattern) if (match) then if (true_on_match) then return true else return (false) end end endend)()
For example, if the host machine has hotfix KB944533 or hotfix KB944653, it matches the rule.
This function is similar to the one described in Connect If Endpoint PC Has Any Instance of Hotfix KB944. This function uses a regular expression in order to match the organizationally unique identifier (OUI) of the MAC address.
In this example, the MAC address starts with d067.e5XX.XX. Use a regular expression and Lua code in order to match machines that start with the same OUI MAC.
assert(function () local pattern = "^d067\.e5*" local true_on_match = true local match = false for k,v in pairs(endpoint.device.MAC) do print(k) match = string.find(k, pattern) if (match) then if (true_on_match) then return true else return (false) end end endend)()
Note: A different version of this function is required for a multi-valued checking.
This function uses regular expressions in order to determine if the first three letters of the hostname are msv (case insensitive):
assert(function() local match_pattern = "^[Mm][Ss][Vv]" local match_value = endpoint.device.hostname Lua expression is meant to connect if the device.id of the endpoint PC and the serial number on the certificate are the same:
assert(function() local match_pattern = endpoint.device.id local match_value = endpoint.certificate.user["*"].subject_e procedure provides an example of a configuration procedure with ASDM.
Windows 7 platforms are supported with CSD Release 3.5 or later. With the ASDM 6.2.x maintenance release and the 6.3.x releases, you can directly use the interface in order to check for Windows 7 OS. With earlier ASDM releases, an advanced DAP Lua script is required in order to check for Windows 7 machines. On an ASA with Release 8.x and pre-beta CSD Release 3.5, enter this Lua script string into the ASDM DAP advanced box in order to perform checks for Windows 7 machines:
(EVAL(endpoint.os.version,"EQ","Windows 7","string"))
This Lua expression lets you track specific mobile devices by their unique identifiers (UIDs). You can use DAP in order to achieve this basic functionality.
When the value cannot be hard-coded and needs to be read from the AD, this becomes more difficult. Because there is no specific UID field in AD, you can store the value for a particular user under a different field. This example uses otherHomePhone to store the UID.
To help you identify the UID for an iPhone or iPad, search the web for an appropriate tool.
Once you identify the UID, add it to otherHomePhone in the AD entry for that user.
From the debug ldap 255 command and from user test authentication, obtain the LDAP attribute being pushed, which is otherHomePhone.
Allow the phone to connect, then run a DAP trace during the attempted connection in order to identify the endpoint attribute that contains the UID (endpoint.anyconnect.deviceuniqueid).
This Lua expression can then compare the two parameters:
assert(function() if (type(aaa.ldap.otherHomePhone) ==type(endpoint.anyconnect.deviceuniqueid) then return true end return falseend)()
This procedure describes how to use DAP to prevent connection by a Chrome. | http://www.cisco.com/c/en/us/support/docs/security/asa-5500-x-series-next-generation-firewalls/115947-dap-adv-functions-00.html?referring_site=smartnavRD | CC-MAIN-2015-22 | refinedweb | 2,024 | 56.05 |
Tests the evaluation of functions like hp2D::Value. More...
#include <functionEvaluation.hh>
Tests the evaluation of functions like hp2D::Value.
These function compute something out of a FE function.
Definition at line 26 of file functionEvaluation.hh.
Definition at line 28 of file functionEvaluation.
Computes shape function with index
i in point
p.
Test the evaluation of hp2D::Trace on the edges.
Tests the evaluation of hp2D::Value at an arbitrary point in an element.
This point does not need to conincide with an integration point.
Tests if the evaluation of hp2D::Value at the integration points and at arbitrary points (chosen in the integration points) give the same results.
Tests the evaluation of hp2D::Value at the integration points. | http://www.math.ethz.ch/~concepts/doxygen/html/classtest_1_1FunctionEvaluation.html | crawl-003 | refinedweb | 120 | 52.66 |
Opened 3 years ago
Closed 3 years ago
#6025 closed bug (fixed)
GHC Panic On Recompile
Description (last modified by simonpj)
Perhaps related to DataKinds or ConstraintKinds, but on rebuild without a clean, the following error is emitted:
Building glyph-0.1... Preprocessing executable 'glyph' for glyph-0.1... [34 of 60] Compiling Language.Glyph.HM.InferType ( src/Language/Glyph/HM/InferType.hs, dist/build/glyph/glyph-tmp/Language/Glyph/HM/InferType.p_o ) ghc: panic! (the 'impossible' happened) (GHC version 7.4.1 for x86_64-unknown-linux): tyThingTyCon Data constructor `main:Language.Glyph.Type.Normal{d r3iU}' Please report this as a GHC bug:
This can be recreated by building this cabal project:
To recreate:
git clone cabal configure cabal build touch src/Language/Glyph/HM/InferType.hs cabal build
Attachments (1)
Change History (7)
comment:1 Changed 3 years ago by simonpj
Changed 3 years ago by scooty-puff
ghc-panic tag of sonyandy/glyph from github
comment:2 Changed 3 years ago by scooty-puff
Try:
git clone git://github.com/sonyandy/glyph.git git checkout ghc-panic cabal configure cabal build touch src/Language/Glyph/HM/InferType.hs cabal build
Or, try the attached .zip file. I will also try to narrow it down to a small set of modules, but will probably not have anything for a few days.
comment:3 Changed 3 years ago by scooty-puff
Other.hs:
{-# LANGUAGE DataKinds, GADTs #-} module Other (Other (..)) where data Other a where OTrue :: Other True OFalse :: Other False
It can probably also be recreated using KindSignatures, but I have not attempted this.
Main.hs:
module Main (main) where import Other other = OTrue main :: IO () main = return ()
To recreate:
ghc --make Main.hs touch Main.hs ghc --make Main.hs
Output:
$ ghc --make Main.hs [2 of 2] Compiling Main ( Main.hs, Main.o ) ghc.exe: panic! (the 'impossible' happened) (GHC version 7.4.1 for i386-unknown-mingw32): tyThingTyCon Data constructor `ghc-prim:GHC.Types.True{(w) d 6u}' Please report this as a GHC bug:
comment:4 Changed 3 years ago by simonpj
Thanks. Yes, that kills 7.4.1 as you say, but it's fine in HEAD. So I think it's fixed already. Cany you try your main application with HEAD?
(We don't support kind polymorphism or data kinds in 7.4.1 although much of the machinery is there.)
Simon
comment:5 Changed 3 years ago by scooty-puff
It is fixed in the latest HEAD snapshot. (Unfortunately, some new issues when compiling an associated library, but caused by the library, and unrelated to this ticket) I am leaving the status as is - not sure if it should be "fixed" or "invalid" (or the what the process might be). Thanks!
comment:6 Changed 3 years ago by simonpj
- Resolution set to fixed
- Status changed from new to closed
- Test Case set to polykinds/T6025
Thanks. I've added a regression test.
Alas the first repro step fails:
I have no clue what is going on. Does it need to be https? Maybe upload a tar file? Ideally cut-down a bit (especially if there are zillions of dependencies.) | https://ghc.haskell.org/trac/ghc/ticket/6025 | CC-MAIN-2015-14 | refinedweb | 527 | 67.45 |
In this section, we first introduce the Deutsch-Jozsa problem, and classical and quantum algorithms to solve it. We then implement the quantum algorithm using Qiskit, and run it on a simulator and device.
Contents
- Introduction
1.1 Deutsch-Jozsa Problem
1.2 Deutsch-Jozsa Algorithm
1.3 The Quantum Solution
1.4 Why Does This Work?
- Worked Example
- Creating Quantum Oracles
- Qiskit Implementation
4.1 Constant Oracle
4.2 Balanced Oracle
4.3 The Full Algorithm
4.4 Generalised Circuit
- Running on Real Devices
- Problems
- References
The Deutsch-Jozsa algorithm, first introduced in Reference [1], was the first example of a quantum algorithm that performs better than the best classical algorithm. It showed that there can be advantages to using a quantum computer as a computational tool for a specific problem.
1.1 Deutsch-Jozsa Problem
We are given a hidden Boolean function $f$, which takes as input a string of bits, and returns either $0$ or $1$, that is:$$ f(\{x_0,x_1,x_2,...\}) \rightarrow 0 \textrm{ or } 1 \textrm{ , where } x_n \textrm{ is } 0 \textrm{ or } 1$$
The property of the given Boolean function is that it is guaranteed to either be balanced or constant. A constant function returns all $0$'s or all $1$'s for any input, while a balanced function returns $0$'s for exactly half of all inputs and $1$'s for the other half. Our task is to determine whether the given function is balanced or constant.
Note that the Deutsch-Jozsa problem is an $n$-bit extension of the single bit Deutsch problem.
1.2 The Classical Solution
Classically, in the best case, two queries to the oracle can determine if the hidden Boolean function, $f(x)$, is balanced: e.g. if we get both $f(0,0,0,...)\rightarrow 0$ and $f(1,0,0,...) \rightarrow 1$, then we know the function is balanced as we have obtained the two different outputs.
In the worst case, if we continue to see the same output for each input we try, we will have to check exactly half of all possible inputs plus one in order to be certain that $f(x)$ is constant. Since the total number of possible inputs is $2^n$, this implies that we need $2^{n-1}+1$ trial inputs to be certain that $f(x)$ is constant in the worst case. For example, for a $4$-bit string, if we checked $8$ out of the $16$ possible combinations, getting all $0$'s, it is still possible that the $9^\textrm{th}$ input returns a $1$ and $f(x)$ is balanced. Probabilistically, this is a very unlikely event. In fact, if we get the same result continually in succession, we can express the probability that the function is constant as a function of $k$ inputs as:$$ P_\textrm{constant}(k) = 1 - \frac{1}{2^{k-1}} \qquad \textrm{for } k \leq 2^{n-1}$$
Realistically, we could opt to truncate our classical algorithm early, say if we were over x% confident. But if we want to be 100% confident, we would need to check $2^{n-1}+1$ inputs.
1.3 Quantum Solution
Using a quantum computer, we can solve this problem with 100% confidence after only one call to the function $f(x)$, provided we have the function $f$ implemented as a quantum oracle, which maps the state $\vert x\rangle \vert y\rangle $ to $ \vert x\rangle \vert y \oplus f(x)\rangle$, where $\oplus$ is addition modulo $2$. Below is the generic circuit for the Deutsh-Jozsa algorithm.
Now, let's go through the steps of the algorithm:
- Prepare two quantum registers. The first is an $n$-qubit register initialised to $|0\rangle$, and the second is a one-qubit register initialised to $|1\rangle$: $$\vert \psi_0 \rangle = \vert0\rangle^{\otimes n} \vert 1\rangle$$
- Apply a Hadamard gate to each qubit: $$\vert \psi_1 \rangle = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1} \vert x\rangle \left(|0\rangle - |1 \rangle \right)$$
- Apply the quantum oracle $\vert x\rangle \vert y\rangle$ to $\vert x\rangle \vert y \oplus f(x)\rangle$: $$ \begin{aligned} \lvert \psi_2 \rangle & = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1} \vert x\rangle (\vert f(x)\rangle - \vert 1 \oplus f(x)\rangle) \\ & = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1}(-1)^{f(x)}|x\rangle ( |0\rangle - |1\rangle ) \end{aligned} $$ since for each $x,f(x)$ is either $0$ or $1$.
- At this point the second single qubit register may be ignored. Apply a Hadamard gate to each qubit in the first register: $$ \begin{aligned} \lvert \psi_3 \rangle & = \frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{f(x)} \left[ \sum_{y=0}^{2^n-1}(-1)^{x \cdot y} \vert y \rangle \right] \\ & = \frac{1}{2^n}\sum_{y=0}^{2^n-1} \left[ \sum_{x=0}^{2^n-1}(-1)^{f(x)}(-1)^{x \cdot y} \right] \vert y \rangle \end{aligned} $$ where $x \cdot y = x_0y_0 \oplus x_1y_1 \oplus \ldots \oplus x_{n-1}y_{n-1}$ is the sum of the bitwise product.
- Measure the first register. Notice that the probability of measuring $\vert 0 \rangle ^{\otimes n} = \lvert \frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{f(x)} \rvert^2$, which evaluates to $1$ if $f(x)$ is constant and $0$ if $f(x)$ is balanced.
1.4 Why Does This Work?
- Constant Oracle
When the oracle is constant, it has no effect (up to a global phase) on the input qubits, and the quantum states before and after querying the oracle are the same. Since the H-gate is its own inverse, in Step 4 we reverse Step 2 to obtain the initial quantum state of $|00\dots 0\rangle$ in the first register.$$ H^{\otimes n}\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \quad \xrightarrow{\text{after } U_f} \quad H^{\otimes n}\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} $$
- Balanced Oracle
After step 2, our input register is an equal superposition of all the states in the computational basis. When the oracle is balanced, phase kickback adds a negative phase to exactly half these states:$$ U_f \tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} = \tfrac{1}{\sqrt{2^n}}\begin{bmatrix} -1 \\ 1 \\ -1 \\ \vdots \\ 1 \end{bmatrix} $$
The quantum state after querying the oracle is orthogonal to the quantum state before querying the oracle. Thus, in Step 4, when applying the H-gates, we must end up with a quantum state that is orthogonal to $|00\dots 0\rangle$. This means we should never measure the all-zero state.
2. Worked Example
Let's go through a specific example for a two bit balanced function:
- The first register of two qubits is initialized to $|00\rangle$ and the second register qubit to $|1\rangle$ (Note that we are using subscripts 1, 2, and 3 to index the qubits. A subscript of "12" indicates the state of the register containing qubits 1 and 2) $$\lvert \psi_0 \rangle = \lvert 0 0 \rangle_{12} \otimes \lvert 1 \rangle_{3} $$
- Apply Hadamard on all qubits $$\lvert \psi_1 \rangle = } $$
- The oracle function can be implemented as $\text{Q}_f = CX_{13}CX_{23}$, $$ \begin{align*} \lvert \psi_2 \rangle = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle_{12} \otimes \left( \lvert 0 \oplus 0 \oplus 0 \rangle - \lvert 1 \oplus 0 \oplus 0 \rangle \right)_{3} \\ + \lvert 0 1 \rangle_{12} \otimes \left( \lvert 0 \oplus 0 \oplus 1 \rangle - \lvert 1 \oplus 0 \oplus 1 \rangle \right)_{3} \\ + \lvert 1 0 \rangle_{12} \otimes \left( \lvert 0 \oplus 1 \oplus 0 \rangle - \lvert 1 \oplus 1 \oplus 0 \rangle \right)_{3} \\ + \lvert 1 1 \rangle_{12} \otimes \left( \lvert 0 \oplus 1 \oplus 1 \rangle - \lvert 1 \oplus 1 \oplus 1 \rangle \right)_{3} \right] \end{align*} $$
- Simplifying this, we get the following: $$ \begin{aligned} \lvert \psi_2 \rangle & = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} - \lvert 0 1 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} - \lvert 1 0 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} + \lvert 1 1 \rangle_{12} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} \right] \\ & = } \\ & = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{1} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{2} \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} \end{aligned} $$
- Apply Hadamard on the first register $$ \lvert \psi_3\rangle = \lvert 1 \rangle_{1} \otimes \lvert 1 \rangle_{2} \otimes \left( \lvert 0 \rangle - \lvert 1 \rangle \right)_{3} $$
- Measuring the first two qubits will give the non-zero $11$, indicating a balanced function.
You can try out similar examples using the widget below. Press the buttons to add H-gates and oracles, re-run the cell and/or set
case="constant" to try out different oracles.
from qiskit_textbook.widgets import dj_widget dj_widget(size="small", case="balanced")
3. Creating Quantum Oracles
Let's see some different ways we can create a quantum oracle.
For a constant function, it is simple:
$\qquad$ 1. if f(x) = 0, then apply the $I$ gate to the qubit in register 2.
$\qquad$ 2. if f(x) = 1, then apply the $X$ gate to the qubit in register 2.
For a balanced function, there are many different circuits we can create. One of the ways we can guarantee our circuit is balanced is by performing a CNOT for each qubit in register 1, with the qubit in register 2 as the target. For example:
In the image above, the top three qubits form the input register, and the bottom qubit is the output register. We can see which input states give which output in the table below:
We can change the results while keeping them balanced by wrapping selected controls in X-gates. For example, see the circuit and its results table below:
# initialization import numpy as np # importing Qiskit from qiskit import IBMQ, BasicAer from qiskit.providers.ibmq import least_busy from qiskit import QuantumCircuit, execute # import basic plot tools from qiskit.visualization import plot_histogram
Next, we set the size of the input register for our oracle:
# set the length of the n-bit input string. n = 3
# set the length of the n-bit input string. n = 3 const_oracle = QuantumCircuit(n+1) output = np.random.randint(2) if output == 1: const_oracle.x(n) const_oracle.draw()
balanced_oracle = QuantumCircuit(n+1)
Next, we create a balanced oracle. As we saw in section 1b, we can create a balanced oracle by performing CNOTs with each input qubit as a control and the output bit as the target. We can vary the input states that give 0 or 1 by wrapping some of the controls in X-gates. Let's first choose a binary string of length
n that dictates which controls to wrap:
b_str = "101"
Now we have this string, we can use it as a key to place our X-gates. For each qubit in our circuit, we place an X-gate if the corresponding digit in
b_str is
1, or do nothing if the digit is
0.
balanced_oracle = QuantumCircuit(n+1) b_str = "101" # Place X-gates for qubit in range(len(b_str)): if b_str[qubit] == '1': balanced_oracle.x(qubit) balanced_oracle.draw()
Next, we do our controlled-NOT gates, using each input qubit as a control, and the output qubit as a target:() balanced_oracle.draw()
Finally, we repeat the code from two cells up to finish wrapping the controls in X-gates:() # Place X-gates for qubit in range(len(b_str)): if b_str[qubit] == '1': balanced_oracle.x(qubit) # Show oracle balanced_oracle.draw()
We have just created a balanced oracle! All that's left to do is see if the Deutsch-Joza algorithm can solve it.
4.3 The Full Algorithm
Let's now put everything together. This first step in the algorithm is to initialise the input qubits in the state $|{+}\rangle$ and the output qubit in the state $|{-}\rangle$:
dj_circuit = QuantumCircuit(n+1, n) # Apply H-gates for qubit in range(n): dj_circuit.h(qubit) # Put qubit in state |-> dj_circuit.x(n) dj_circuit.h(n) dj_circuit.draw()
Next, let's apply the oracle. Here we apply the
balanced_oracle we created above:
dj_circuit = QuantumCircuit(n+1, n) # Apply H-gates for qubit in range(n): dj_circuit.h(qubit) # Put qubit in state |-> dj_circuit.x(n) dj_circuit.h(n) # Add oracle dj_circuit += balanced_oracle dj_circuit.draw()
Finally, we perform H-gates on the $n$-input qubits, and measure our input register:
dj_circuit = QuantumCircuit(n+1, n) # Apply H-gates for qubit in range(n): dj_circuit.h(qubit) # Put qubit in state |-> dj_circuit.x(n) dj_circuit.h(n) # Add oracle dj_circuit += balanced_oracle # Repeat H-gates for qubit in range(n): dj_circuit.h(qubit) dj_circuit.barrier() # Measure for i in range(n): dj_circuit.measure(i, i) # Display circuit dj_circuit.draw()
Let's see the output:
# use local simulator backend = BasicAer.get_backend('qasm_simulator') shots = 1024 results = execute(dj_circuit, backend=backend, shots=shots).result() answer = results.get_counts() plot_histogram(answer)
We can see from the results above that we have a 0% chance of measuring
000. This correctly predicts the function is balanced.
4.4 Generalised Circuits
Below, we provide a generalised function that creates Deutsch-Joza oracles and turns them into quantum gates. It takes the
case, (either
'balanced' or '
constant', and
n, the size of the input register:
def dj_oracle(case, n): # We need to make a QuantumCircuit object to return # This circuit has n+1 qubits: the size of the input, # plus one output qubit oracle_qc = QuantumCircuit(n+1) # First, let's deal with the case in which oracle is balanced if case == "balanced": # First generate a random number that tells us which CNOTs to # wrap in X-gates: b = np.random.randint(1,2**n) # Next, format 'b' as a binary string of length 'n', padded with zeros: b_str = format(b, '0'+str(n)+'b') # Next, we place the first X-gates. Each digit in our binary string # corresponds to a qubit, if the digit is 0, we do nothing, if it's 1 # we apply an X-gate to that qubit: for qubit in range(len(b_str)): if b_str[qubit] == '1': oracle_qc.x(qubit) # Do the controlled-NOT gates for each qubit, using the output qubit # as the target: for qubit in range(n): oracle_qc.cx(qubit, n) # Next, place the final X-gates for qubit in range(len(b_str)): if b_str[qubit] == '1': oracle_qc.x(qubit) # Case in which oracle is constant if case == "constant": # First decide what the fixed output of the oracle will be # (either always 0 or always 1) output = np.random.randint(2) if output == 1: oracle_qc.x(n) oracle_gate = oracle_qc.to_gate() oracle_gate.name = "Oracle" # To show when we display the circuit return oracle_gate
Let's also create a function that takes this oracle gate and performs the Deutsch-Joza algorithm on it:
def dj_algorithm(oracle, n): dj_circuit = QuantumCircuit(n+1, n) # Set up the output qubit: dj_circuit.x(n) dj_circuit.h(n) # And set up the input register: for qubit in range(n): dj_circuit.h(qubit) # Let's append the oracle gate to our circuit: dj_circuit.append(oracle, range(n+1)) # Finally, perform the H-gates again and measure: for qubit in range(n): dj_circuit.h(qubit) for i in range(n): dj_circuit.measure(i, i) return dj_circuit
Finally, let's use these functions to play around with the algorithm:
n = 4 oracle_gate = dj_oracle('balanced', n) dj_circuit = dj_algorithm(oracle_gate, n) dj_circuit.draw()
And see the results of running this circuit:
results = execute(dj_circuit, backend=backend, shots=1024).result() answer = results.get_counts() plot_histogram(answer)
# Load our saved IBMQ accounts and get the least busy backend device with greater than or equal to (n+1) qubits IBMQ.load_account() provider = IBMQ.get_provider(hub='ibm-q') backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= (n+1) and not x.configuration().simulator and x.status().operational==True)) print("least busy backend: ", backend)
least busy backend: ibmq_burlington
# Run our circuit on the least busy backend. Monitor the execution of the job in the queue from qiskit.tools.monitor import job_monitor shots = 1024 job = execute(dj_circuit, backend=backend, shots=shots, optimization_level=3) job_monitor(job, interval = 2)
Job Status: job has successfully run
# Get the results of the computation results = job.result() answer = results.get_counts() plot_histogram(answer)
As we can see, the most likely result is
1111. The other results are due to errors in the quantum computation.
from qiskit_textbook.problems import dj_problem_oracle oracle = dj_problem_oracle(1)
- The function
dj_problem_oracle(shown above) returns a Deutsch-Joza oracle for
n = 4in the form of a gate. The gate takes 5 qubits as input where the final qubit (
q_4) is the output qubit (as with the example oracles above). You can get different oracles by giving
dj_problem_oracledifferent integers between 1 and 5. Use the Deutsch-Joza algorithm to decide whether each oracle is balanced or constant (Note: It is highly recommended you try this example using the
qasm_simulatorinstead of a real device).
7. References
- David Deutsch and Richard Jozsa (1992). "Rapid solutions of problems by quantum computation". Proceedings of the Royal Society of London A. 439: 553–558. doi:10.1098/rspa.1992.0167.
- R. Cleve; A. Ekert; C. Macchiavello; M. Mosca (1998). "Quantum algorithms revisited". Proceedings of the Royal Society of London A. 454: 339–354. doi:10.1098/rspa.1998.0164.'} | https://qiskit.org/textbook/ch-algorithms/deutsch-jozsa.html | CC-MAIN-2020-45 | refinedweb | 2,983 | 52.09 |
Last time I had a similar situation
I paid taxes in the state in which the work was performed. As you live in NY, you are required to pay taxes to the State of NY and not any other.
I have also moved from one state to another in the middle of a tax year. In that case, I paid taxes on only money earned in each individual state.
As this is W2, the company cutting your check should be taking care of this for you.
As far as I know, Maniac
Your income may be earned in NC for work you do in VA but you LIVE in NY and that is the driving force.
I worked for a company that had a client presence in MN, was headquartered in TX and was incorporated in DE. I worked across all 50 states and Canada. But my residency was MN so I paid taxes to MN.
The only variable here is where you pay State tax. Your Fed tax is evaluated the same way no matter where the employer is or where the work is performed.
I'll take a look around and see if I can find you some pertinent links.
It doesn't make sense to pay taxes
to any other state but the one you live in. The supposed purpose of state tax is to pay for that state's services that you use, right? So what service is Virginia's government providing to you?
It's no different than physically traveling to the work location...
Telecommuting is no different from sending you physically to the location to perform the work on an expense account, except that it saves THEM money. You're still taxed based on where your work HQ is located. Your employer will be required to withhold VA income taxes, since that's where THEY are headquartered and where your office would be located if they were sending you out on an expense account (travel). However, you'll be able to file a non-resident return at the end of the year. When you do file your taxes at the end of the year, you'll have to file a NY state tax return (if they require one) and claim income earned in another state (VA). So, file your VA return first to get your money back from them so that you can, in turn, roll it over to NY on their state tax return.
To compare, people live in the State of Washington and commute to Oregon to work. Oregon taxes are withheld from their paychecks, but at the end of the year, they file a non-resident Oregon tax return to get it all back because their state of residence is Washington. Washington doesn't have an income tax. They work off of sales taxes. Oregon residents who shop in Washington provide proof of residence so they don't have to pay Washington sales tax.
Your best bet is to talk to a tax accountant locally (NY). They can tell you how to fill out your W4 form and what returns will be required at the end of the year.
Contact a Tax Account
What you need to do, is work with an accountant that knows tax law. If you go to a tax prep firm, make sure to work with a year round and not a seasonal employee.
Only a tax professional is going to be able to point you to the correct forms and information that you will need for your situation.
Depending on the state to state reciprocal agreements, you may end up paying partial taxes to all 3 states.
Don't forget to have the tax man check into city tax issues to.
Even if you think you have things covered, a tax accountant will be able to help you should the HR department balk at the way you need to fill out state W4 forms.
Chas
Question about telecommuting and taxation
According to the New York General Income Tax Information you would pay taxes based on where you live.
Definitions The following definitions are applicable in determining whether or not you
are a New York State resident for income tax purposes.
Domicile New York domicile does not
change until you can demonstrate that you have abandoned your New York
domicile and established a new domicile outside New York State.
(pg 5)
Question about telecommuting and taxation
of my home?
I found the following information regarding a proposed bill to offer telecommuter tax relief to those living outside of NY but performing telecommuter work in NY, but it doesn't address the reverse situation.
I urge you all to support this bill because it's not fair to be double taxed if you're not physically present in the state that work is being performed in. The problem with our tax laws is that they never took technology into account that would allow tech workers to work remotely.
This conversation is currently closed to new comments. | https://www.techrepublic.com/forums/discussions/question-about-telecommuting-and-taxation/ | CC-MAIN-2017-39 | refinedweb | 839 | 69.01 |
This tutorial will show you how to build a plug-in for Windows Live Writer. Windows Live Writer is a blogging tool that Microsoft provides for free. It includes an open API for .NET developers to create custom plug-ins. In this tutorial, I will show you how easy it is to build one.
Full source and binaries are now hosted on CodePlex.
Open VS2008 or VS2010 and create a new project. Set the target framework to 2.0, Application Type to Class Library and give it a name. In this tutorial, we are going to create a plug-in that generates a twitter message with your blog post name and a TinyUrl link to the blog post. It will do all of this automatically after you publish your post.
Once, we have a new projected created. We need to setup the references.
Add a reference to the WindowsLive.Writer.Api.dll located in the C:\Program Files (x86)\Windows Live\Writer\ folder, if you are using X64 version of Windows.
You will also need to add a reference to
from the .NET tab as well.
Once that is complete, add your “using” statements so that it looks like whats shown below:
Now, we are going to setup some build events to make it easier to test our custom class. Go into the Properties of your project and select Build Events, click edit the Post-build and copy/paste the following line:
XCOPY /D /Y /R "$(TargetPath)" "C:\Program Files (x86)\Windows Live\Writer\Plugins\"
Your screen should look like the one pictured below:
Next, we are going to launch an external program on debug. Click the debug tab and enter
C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriter.exe
Now we have a blank project and we need to add some code. We start with adding the attributes for the Live Writer Plugin. Before we get started creating the Attributes, we need to create a GUID. This GUID will uniquely identity our plug-in.
So, to create a GUID follow the steps in VS2008/2010.
Click Tools from the VS Menu ->Create GUID
It will generate a GUID like the one listed below:
We only want what’s inside the quotes, so your final product should be: "56ED8A2C-F216-420D-91A1-F7541495DBDA". Go ahead and paste this snipped into your class just above the public class.
So far, it should look like the following:
Next, we need to implement the PublishNotifcationHook class and override the OnPostPublish. I’m not going to dive into what the code is doing as you should be able to follow pretty easily. The code below is the entire code used in the project.
We are going to go ahead and create a method to create the short url (tinyurl).
Go ahead and build your project, it should have copied the .DLL into the Windows Live Writer Plugin Directory. If it did not, then you will want to check your configuration.
Once that is complete, open Windows Live Writer and select Tools-> Options-> Plug-ins and enable your plug-in that you just created.
Go ahead and click OK and publish your blog post. You should get a pop-up with the following:
Hit OK and It should open a Twitter and either ask for a login or fill in your status as shown below:
That should do it, you can do so many other things with the API. I suggest that if you want to build something really useful consult the MSDN pages. This plug-in that I created was perfect for what I needed and I hope someone finds it useful. | http://geekswithblogs.net/mbcrump/archive/2010/05/30/building-a-plug-in-for-windows-live-writer.aspx | CC-MAIN-2013-20 | refinedweb | 608 | 73.47 |
In the future please give complete concise details of how your apps
are deployed. I'm guessing that you would say that you have two
geronimo instances running, either on the same machine or separate
machines, and that your ejb jar is deployed on one of the geronimo
instances and the war on the other geronimo instance.
In these circumstances I believe you would either need to use corba
to use the javaee java:comp/env namespace or use the openejb
proprietary jndi lookup, supplying the appropriate jndi properties
when you construct the context.
thanks
david jencks
On Jan 6, 2008, at 4:06 PM, SergZ wrote:
>
> Hello.
> My war is not sensitive to geronimo-web.xml
>
> in servlet:
>
> my_bean = (beanRemote)ctx.lookup("bean123Remote");
>
> in geronimo-web.xml:
> | http://mail-archives.apache.org/mod_mbox/geronimo-user/200801.mbox/raw/%3CB6C8A0DE-EF77-4D3E-A259-324B173E4D64@yahoo.com%3E/ | CC-MAIN-2015-11 | refinedweb | 128 | 62.98 |
Yeah - with the T4 redefine to linker attribute the old - make it NULL on ARM's will show up. I must have missed it before so not a recent change.
Yeah - with the T4 redefine to linker attribute the old - make it NULL on ARM's will show up. I must have missed it before so not a recent change.
Kurt/Tim
I just downloaded the latest version of ILI9341/GFX as a quick test and put it into the core libraries. I don't remember getting that message when I did the original testing on the adafruit ili9341 library. Anyway, I ran two of the graphics test sketches, one generic and the other identified graphicstest_featherwing:
1. using the featherwing version I didn't get the "progmem" and no error but they do have this identified for the teensyduino:
2. For then normal graphicstest example. If I use the constructor2. For then normal graphicstest example. If I use the constructorCode:#ifdef TEENSYDUINO #define TFT_DC 10 #define TFT_CS 4 #define STMPE_CS 3 #define SD_CS 8 #endifI will get the PROGMEM error. However, if I use,I will get the PROGMEM error. However, if I use,Code://Adafruit_ILI9341 tft = Adafruit_ILI9341(TFT_CS, TFT_DC);you don't get that error.you don't get that error.Code:Adafruit_ILI9341 tft = Adafruit_ILI9341(TFT_CS, TFT_DC, 11, 13, 1, 12);
3. Ran SSD1306 example as well and got the same error in glcdfont.c.
EDIT: This also resolves other issues as well
The fix is rather simple in that file. Put this as a replacement for the pgmspace define:
EDIT1: Looks like our messages crossed. This looks like it would be more appropriate:EDIT1: Looks like our messages crossed. This looks like it would be more appropriate:Code:#ifdef __AVR__ #include <avr/io.h> #include <avr/pgmspace.h> #elif defined(ESP8266) #include <pgmspace.h> #elif defined(TEENSYDUINO) #include <avr/pgmspace.h> #else #define PROGMEM #endif
Code:#if defined(__AVR__) || defined(TEENSYDUINO) #include <avr/io.h> #include <avr/pgmspace.h> #elif defined(ESP8266) #include <pgmspace.h> #else #define PROGMEM #endif
EDIT2: The interesting thing is for the T3.x test compiles shows its happy with the original defines. Exception is I get this warning:
Code:F:\arduino-1.8.8-t4\hardware\teensy\avr\cores\teensy3\Stream.cpp: In member function 'bool Stream::findUntil(const char*, const char*)':
Last edited by mjs513; 02-05-2019 at 07:38 AM.
@Paul.
Got it. Would it possible for you to maintain a copy of GFX in your repositories? Reason I am asking is I just went over the adafruit issue page but there doesn't appear to much support going on right now - a ton of open issues and PRs?
Doing this before T4 is released obviously wouldn't be feasible. Just to be realistic, right after T4, I'm planning to put some time into long-planned Arduino IDE improvements - with a goal of being able to demo at the San Mateo Maker Faire in May (the next time I'll meet in person with Massimo, Fabio, Luca, Sandeep and the other Arduino folks).
My next priority after IDE features is a long list of audio library improvements. I've kind-of neglected the audio lib for these last couple years while focusing on Teensy 3.6 and USB host. So much work is needed there...
I'd also really like to talk, ideally in person, with Limor, Phil T, Scott, Phil B, Kevin, John and others at Adafruit before doing something like this. Not sure if I'm going to make a visit to the east coast though (though Scott & Phil B are here on the west coast).
I'll update the Adafruit libs sooner. GFX should be simple. ST7735 is a mess and really needs to be broken off to a "_t3" version like we have for ILI9341. Long-term, I want to migrate to not having Adafruit's libs included with the Teensyduino installer. The ones we have in there now are sort of a legacy from the days before Arduino had their library manager for easy install of libs, and back when Adafruit's approach to libraries was very 8 bit AVR focused. So much has changed in the last ~4 years.
- I just ordered a pair from your AliExp link to be delivered in hopefully under 42 days … maybe longer …- I just ordered a pair from your AliExp link to be delivered in hopefully under 42 days … maybe longer …
Really promising seeing the FlexIO do 2nd SPI (and running the small display test about as quickly at 100 MHz as 600 MHz leaves lots of headroom) - and also hearing the Audio board - so much good.
Using three windows of TyComm for sermon (and uploads) working well - even swapped USB cable between T4 and a T_3.1 and it came up running with the other T_3.1 active.
And I saw my pull request to fix blink rate on fault went in - it was half rate with the Speed drop in beta8
I don't know much about OpenGL, but it is kind of a standard, and maybe supporting at least a small subset of its api might be an idea.. There seems to be a smaller version OpenGL ES which is aimed at embedded platforms.
Tim, at least some SPI Raspberry Displays are with ILI9488 (or 9486 - what is the difference?) - They just have a different connector. ( I also have one of these and I hope I can make it work, too) Waveshare has one on their site and they claim it supports 125MHz SPI - Not sure if it is ILI9486 or 9488.
D cache tests
FWIW, I did some tests with D cache on and off (i think), reading data from stack (DTCM), or OCRAM (malloc), or PROGMEM.
1000 reps of inner product of 2 1000-element float vectors (should be data.Code:SCB_CCR 0x70200 SCB_ID_CCSIDR 0xF01FE019 SCB_ID_CSSELR 0x0 SCB_ID_CLIDR 0x9000003 N 1000 REPS 1000 memory addr us mflops sum stack/DTCM 0x2001E088 11676 171.3 38217888 malloc/OCRAM 0x60002204 11733 170.5 38217888 PROGMEM/flash 0x20200008 11676 171.3 38217888 D cache off stack/DTCM 0x2001E088 11676 171.3 38217888 malloc/OCRAM 0x60002204 1132899 1.8 38217888 PROGMEM/flash 0x20200008 80016 25.0 38217888 SCB_CCR 0x60200 D cache on 10000-element vectors, 100 reps, vectors bigger than 32KB Dcache N 10000 REPS 100 memory addr us mflops sum stack/DTCM 0x2000C748 11668 171.4 0 malloc/OCRAM 0x6000AEA4 213298 9.4 0 PROGMEM/flash 0x20200008 18950 105.5 0
It doesn't appear that I can write to PROGMEM, is that correct?
EDIT: fixed typo and now DCCISW cache clear works. github updated.
Last edited by manitou; 02-06-2019 at 03:24 PM.
Confirmed, no write access. Any attempt to write should memfault, since that region is configured as READONLY in startup.c.Confirmed, no write access. Any attempt to write should memfault, since that region is configured as READONLY in startup.c.It doesn't appear that I can write to PROGMEM, is that correct?
If you change this, or use NXP's SDK examples, I have no idea what it will try to do if you write. But I'm very confident it won't actually work.If you change this, or use NXP's SDK examples, I have no idea what it will try to do if you write. But I'm very confident it won't actually work.Code:SCB_MPU_RBAR = 0x60000000 | REGION(5); // QSPI Flash SCB_MPU_RASR = MEM_CACHE_WBWA | READONLY | SIZE_16M;
At some point I'm going to dive into implementing the EEPROM library... and I'm really not looking forward to it. One of the longest and most painful experiences I've had with this chip, back before the SDK existed, was experimenting with FlexSPI by writing little programs in RAM over SWD. FlexSPI is incredibly powerful and configurable, but incredibly complex. Some parts (especially on the AHB side) I still don't fully understand.
To make EEPROM emulation work, code running from RAM is going to have to take control of the FlexSPI, backup some of its state and some of the LUTs to RAM, then reconfigure some LUTs to query the state of the flash chip, so that too can be restored. Then more LUTs used to get it into a state for writing, then do the operation and wait for the chip to not be busy. Then the tricky part will be getting the flash chip back to the exact same mode, and restore all the LUTs and other flexspi state, delete caches, and cross finger and hope the next bus cycle accessing the 0x600xxxxx memory range doesn't crash or lock up.
There's also a thorny issue of what to do if the chip reboots, like by a watchdog timer. I have found several ways the bootloader can leave the flash chip configured which aren't accessible by NXP's ROM.
Also a possible issue, but one I'm less worried about, is if the user presses the program button. I put a *lot* of work into the bootloader to be able to initialize the flash chip regardless of what state it's currently configured. The bootloader will correctly cause an in-progress write or erase to abort (but leaving the targeted memory unknown), and it knows how to navigate (hopefully) all of the flash chip's modes. NXP's ROM definitely does not.
Edit: maybe I should also mention all the beta boards have unlocked flash chips. They all have a "restore image" in the top 4096 bytes, which is writable. The non-volatile bits in the 3 status registers are all writable. In the final release, all these will be permanently locked, so you can't destroy the restore image or permanently reconfigure your flash. But in the beta boards you can... if you can figure out the complexities of the FlexSPI and its LUTs (which took me a couple rather frustrating months.... before the SDK was published)
Edit again: Regarding this question:".why is OCRAM slower than PROGMEM?
Anyway, my best guess is the AHB RX buffer is acting as a small 1K cache, even if the ARM data cache is turned off.
Still, I'm surprised it would be faster than the OCRAM....
Last edited by PaulStoffregen; 02-05-2019 at 01:18 PM.
Maybe it just uses wrong reset/default values, like with the PFDs.
NXP mentioned a BSP , Board support Package which configures some things(I think) but I found nothing..
I've mentioned this earlier In this thread.
my transfer(txbuf,rxbuf,cnt) isn't hanging any more. I still get lots of errors asking for 40mhz (with 160mhz flexio clock) and there is interframe gap. No gaps asking for 30mhz and ALMOST NO ERRORS. I'm actually getting one error on ALL transfers! txbuf[0] is being over-written ?????? I reset at the start of each iteration.
rxbuf sits in front of txbuf, so i'm assuming rxbuf is overflowing getting one more byte than it should.rxbuf sits in front of txbuf, so i'm assuming rxbuf is overflowing getting one more byte than it should.Code://();
Last edited by manitou; 02-05-2019 at 01:45 PM.
Good question Frank. I really don't know on this, especially about the NIC-301 chapter. I did try looking through the SDK but didn't see anything about it. Maybe Mantiou has? He's probably spent the most time studying the SDK. I've tried to do as little as possible with the SDK...
On the OCRAM, the FLEXRAM_TCM_CTRL register is where the wait states are configured. As nearly as I can tell, that's only for ITCM & DTCM access, and the default is supposed to be single cycle. I haven't tried fiddling with it.
So far, I've also not put any significant working into reverse engineering NXP's ROM. Maybe someday....
Just got USB host working! Or at least 1 USB MIDI drumpad is working. Much optimization work & testing still needed. Still using all DTCM.
Turns out we've had a subtle bug in attachInterruptVector() all this time. Committed a fix.
Going to package up a beta9 installer with all the latest audio from Frank and USB host.
there is debug led code in input i2s - please delete that before making a new installer
AttachInterrupt: Is :: memory sufficiant? I'd think a dmb or dsb may be needed, too.
We can move the vectortable to ITCM -manitou tested this and it was the same speed. This would free someRAM for variables/stack.
To make reliable, may almost have to do something like if the speed > X and rxbuf
while (cnt--) *rxbuf++ = transfer(*txbuf++);
Which will put gaps in... Or maybe I need to update the transfer gap time... (i.e. slow it down) ...
Do you have a version of the test you would like me to use. Or I can obviously hack one up...
Just spent the last couple hours looking at generated assembly. It was purely a compiler memory access reordering problem.Just spent the last couple hours looking at generated assembly. It was purely a compiler memory access reordering problem.AttachInterrupt: Is :: memory sufficiant? I'd think a dmb or dsb may be needed, too.
Since the vector table is in DTCM, shouldn't need those.
Let's talk of this for beta10. I'm on my 3rd rebuild of beta9. Only going to do a 4th build if something is very wrong.Let's talk of this for beta10. I'm on my 3rd rebuild of beta9. Only going to do a 4th build if something is very wrong.We can move the vectortable to ITCM -manitou tested this and it was the same speed. This would free someRAM for variables/stack.
Ok, I've uploaded beta9 to msg #2.
Beta9 has all of Frank's audio stuff, and USB host.
A 4th rebuild was needed to fix a silly mistake (affecting only Teensy 3.6). Hopefully I didn't make more of those...
AudioShield to T4 Connections
Ok - getting my self a little confused on pins to the audio
SPI(MOSI, MISO, SCLK, SDCS) => 11, 12, 13, 10. This I got.
VOL => 15
MCLK => 23 (SAI1)
LRCLK => 21
BCLK => 20
RX => 7
TX => 6
Just need a sanity check here. Reason I am asking is I am putting my own breakout board together that will mate with some of the shields that i have (need a break from coding for a bit). I have a 5v reg on boad going to 5v pin, but I didn't notice a trace to cut between usb/5v pin?
SPI, FlexSPI and SPISettings
Currently I can not use the SPISettings that are part of the SPIClass code in the SPIFlex code as the code internal to it is very specific to the SPI object itself.
However I could use it, if I added members to the SPISettings, that allows me to query the information like speed, MSB/LSB, ...
The version I have for SPI Flex only holds the data and allows me to retrieve it. Wondering about adding that to SPI as well so at minimum could pass in SPISettings and get the info. But SPISettings would probably still generate data not needed...
I have a hacked up version of the SPISettings that does this... I have not tried using it yet in the Flex code, will later.
Parts added shown in RED.Parts added shown in RED.Code:class SPISettings { public: SPISettings(uint32_t clock, uint8_t bitOrder, uint8_t dataMode) : _clock(clock), _bitOrder(bitOrder), _dataMode(dataMode) { if (__builtin_constant_p(clock)) { init_AlwaysInline(clock, bitOrder, dataMode); } else { init_MightInline(clock, bitOrder, dataMode); } } SPISettings() { init_AlwaysInline(4000000, MSBFIRST, SPI_MODE0); } uint32_t inline clock() {return _clock;} uint8_t inline bitOrder() {return _bitOrder;} uint8_t inline dataMode() {return _dataMode;}__)) { // TODO: Need to check timings as related to chip selects? const uint32_t clk_sel[4] = {664615384, // PLL3 PFD1 720000000, // PLL3 PFD0 528000000, // PLL2 396000000}; // PLL2 PFD2 uint32_t cbcmr = CCM_CBCMR; uint32_t clkhz = clk_sel[(cbcmr >> 4) & 0x03] / (((cbcmr >> 26 ) & 0x07 ) + 1); // LPSPI peripheral clock uint32_t d, div; if (clock == 0) clock =1; d= clkhz/clock; if (d && clkhz/d > clock) d++; if (d > 257) d= 257; // max div if (d > 2) { div = d-2; } else { div =0; } ccr = LPSPI_CCR_SCKDIV(div) | LPSPI_CCR_DBT(div/2); tcr = LPSPI_TCR_FRAMESZ(7); // TCR has polarity and bit order too // handle LSB setup if (bitOrder == LSBFIRST) tcr |= LPSPI_TCR_LSBF; // Handle Data Mode if (dataMode & 0x08) tcr |= LPSPI_TCR_CPOL; // Note: On T3.2 when we set CPHA it also updated the timing. It moved the // PCS to SCK Delay Prescaler into the After SCK Delay Prescaler if (dataMode & 0x04) tcr |= LPSPI_TCR_CPHA; } uint32_t ccr; // clock config, pg 2660 (RT1050 ref, rev 2) uint32_t tcr; // transmit command, pg 2664 (RT1050 ref, rev 2) uint32_t _clock; uint8_t _bitOrder; uint8_t _dataMode; friend class SPIClass; };
But in addition to this, wondering if we could just go ahead and move the smarts of the init_AlwaysInline... into the SPIClass::beginTransaction?
Reason I ask this, is I believe with this current code, with the looking at contents of CCM_CBCMR it will either a) always have to compute all of this data or maybe be wrong...
That is suppose your code has a static one that is used... like:
SPISettings mySettings(8000000, MSBFIRST, SPI_MODE0);
What values will it get for CCM_CBCMR.
And I suspect that if I do something like:
SPI.beginTransaction(SPISettings(8000000, MSBFIRST, SPI_MODE0));
Even though all of the inputs are constants, it will not e able to compile this down to ccr and tcr being precomputed values. That is it will always run this code...
So again wondering if in T4, we simply move all of this compute code into SPI.beginTransaction and use real simple version of SPISettings (more or less just the RED stuff?
Thoughts?
If you remove the T4 and audio shield, and power it up with external 3.3V, you can check the signal paths with an ohm-meter. The switch controlls five 74LVC1G3157 analog switches, which should measure approx 6 ohms.
For I2S2, MCLK is pin 33 on the bottom side.
EDIT: for anyone who doesn't have the breakout board and wants try audio or USB host, email me directly. I have more parts coming Friday, so we should have 8 more of those available next week.
Code:#include <FlexIO_t4.h> #include <FlexSPI.h> #define SPIHZ 30000000 //#define HARDWARE_CS #ifdef HARDWARE_CS FlexSPI SPIFLEX(2, 3, 4, 5); // Setup on (int mosiPin, int misoPin, int sckPin, int csPin=-1) : #define assert_cs() #define release_cs() #else FlexSPI SPIFLEX(2, 3, 4, -1); // Setup on (int mosiPin, int misoPin, int sckPin, int csPin=-1) : #define assert_cs() digitalWriteFast(5, LOW) #define release_cs() digitalWriteFast(5, HIGH) #endif void setup() { pinMode(13, OUTPUT); while (!Serial && millis() < 4000); Serial.begin(115200); delay(500); #ifndef HARDWARE_CS pinMode(5, OUTPUT); release_cs(); #endif SPIFLEX.begin(); // See if we can update the speed... SPIFLEX.flexIOHandler()->setClockSettings(3, 2, 0); // clksel(0-3PLL4, Pll3 PFD2 PLL5, *PLL3_sw) Serial.printf("Updated Flex IO speed: %u\n", SPIFLEX.flexIOHandler()->computeClockRate()); Serial.printf("SPIHZ %d\n", SPIHZ); } uint8_t txbuf[1024], rxbuf[1024]; void loop() { SPIFLEX.beginTransaction(FlexSPISettings(SPIHZ, MSBFIRST, SPI_MODE0)); #if 1 assert_cs(); uint32_t t = micros(); SPIFLEX.transfer(txbuf, NULL, sizeof(txbuf)); t = micros() - t; Serial.printf("%d us %.1f mbs\n", t, 8.*sizeof(txbuf) / t); release_cs(); #endif //(); SPIFLEX.endTransaction(); int errs = 0; for (int i = 0; i < sizeof(txbuf); i++) if (txbuf[i] != rxbuf[i]) errs++; Serial.printf("errs %d [3] %d\n", errs, rxbuf[3]); if (errs) { for (int i = 0; i <= 4; i++) Serial.printf("%d %d %d\n", i, txbuf[i], rxbuf[i]); for (int i = 500; i <= 504; i++) Serial.printf("%d %d %d\n", i, txbuf[i], rxbuf[i]); for (int i = 1020; i <= 1023; i++) Serial.printf("%d %d %d\n", i, txbuf[i], rxbuf[i]); } delay(500); }
Kurt let me know when you have updated FlexSPI for TeensyView/ssd1306 and your 64pix tall as you tested that works with Beta9 release to run both SPI's as you did.
Does this look like your SPI 1306 unit?
If so can you confirm wiring { SDA==MOSI and SCL==CLK ? }If so can you confirm wiring { SDA==MOSI and SCL==CLK ? }
I wondered if the bus design might result in PROGMEM speed oddity:surprised it would be faster than the OCRAM....… OCRAM slower than PROGMEM …
Dcache on
11676 us stack
11732 us malloc
11676 us progmem
D cache off
11676 us stack
1132899 us malloc
80016 us progmem
USBHOST_T36
I just downloaded the latest core and the usbhost_t36 libraries. I hooked up my PS4 joystick and it correctly identifies the joystick, the manufacturer etc. A couple things since I never used it before:
The buttons on the right with the symbols do trip and I can see the values, the bumble/rumble works for both left and right as well as the buttons above the bumble levers.
What I can't really tell is working or not but probably is are the arrow buttons and joysticks, only because I see a whole lot of data flying across the screen and even if set show only changed values it still keeps streaming. Together with me not know the functions or what printed makes it confusing. Going to go through it later and figure it out - is there a reference page somewhere? | https://forum.pjrc.com/threads/54711-Teensy-4-0-First-Beta-Test/page61?s=76466497ac95f296276deed556e16f49 | CC-MAIN-2020-05 | refinedweb | 3,554 | 73.37 |
Appendix A: Using Other Tools with Jython
The primary focus of this appendix is to provide information on using some external Python packages with Jython, as well as providing information regarding the Jython registry. In some circumstances, the tools must be used or installed a bit differently on Jython than on CPython, and those differences will be noted. Because there is a good deal of documentation on the usage of these tools available on the web, this appendix will focus on using the tool specifically with Jython. However, relevant URLs will be cited for finding more documentation on each of the topics.
The Jython Registry
Because there is no good platform-independent equivalent of the Windows Registry or Unix environment variables, Java has its own environment variable namespace. Jython acquires its namespace from the following sources (later sources override defaults found in earlier places):
- The Java system properties, typically passed in on the command line as options to the java interpreter.
- The Jython “registry” file, containing prop=value pairs. Read on for the algorithm Jython uses to find the registry file.
- The user’s personal registry file, containing similarly formatted prop=value pairs. The personal registry file is found in the user’s home directory, named .jython.

Registry Properties

The following properties are recognized by Jython:

python.verbose

Sets the verbosity level for informative messages. Valid values in order of increasing verbosity are “error,” “warning,” “message,” “comment,” and “debug.”
python.security.respectJavaAccessibility
Normally, Jython can only provide access to public members of classes. However, if this property is set to false and you are using Java 1.2 or later, then Jython can access non-public fields, methods, and constructors.

Finding the Registry File

To find the registry file, Jython first determines a root directory. If the python.home property exists, it is used as the root directory; otherwise the install.root property is used, and failing both, Jython deduces the root from the location of jython.jar on the Java classpath. Once the root directory is found, sys.prefix and sys.exec_prefix are set to it, and sys.path has rootdir/Lib appended to it. The registry file used is then rootdir/registry.
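The registry file itself is plain text: one prop=value pair per line, with # marking comments. The property names below are real registry properties; the parser is only an illustrative sketch of the format, not Jython's actual implementation.

```python
# Illustrative parser for a Jython-registry-style prop=value file.
sample = """\
# Personal registry, e.g. ~/.jython
python.verbose = message
python.security.respectJavaAccessibility = false
"""

def parse_registry(text):
    props = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, sep, value = line.partition("=")
        if sep:
            props[key.strip()] = value.strip()
    return props

props = parse_registry(sample)
print(props["python.verbose"])  # message
```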
Setuptools
Setuptools is a library that builds upon distutils, the standard Python distribution facility. It offers some advanced tools like easy_install, a command to automatically download and install a given Python package and its dependencies.
To get setuptools, download the ez_setup.py bootstrap script. Then, go to the directory where you left the downloaded file and execute:
$ jython ez_setup.py
The output will be similar to the following:
Downloading
Processing setuptools-0.6c9-py2.5.egg
Copying setuptools-0.6c9-py2.5.egg to /home/lsoto/jython2.5.0/Lib/site-packages
Adding setuptools 0.6c9 to easy-install.pth file
Installing easy_install script to /home/lsoto/jython2.5.0/bin
Installing easy_install-2.5 script to /home/lsoto/jython2.5.0/bin
Installed /home/lsoto/jython2.5.0/Lib/site-packages/setuptools-0.6c9-py2.5.egg
Processing dependencies for setuptools==0.6c9
Finished processing dependencies for setuptools==0.6c9
As you can read in the output, the easy_install script has been installed into the bin directory of the Jython installation (/home/lsoto/jython2.5.0/bin in the example above). If you work frequently with Jython, it’s a good idea to prepend this directory to the PATH environment variable, so you don’t have to type the whole path each time you want to use easy_install or other scripts installed there. From now on, we’ll assume that this is the case. If you don’t want to prepend Jython’s bin directory to your PATH for any reason, remember to type the complete path in each example (i.e., type /path/to/jython/bin/easy_install when I say easy_install).
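On a Unix-like shell, the prepend looks like this (the JYTHON_HOME path is illustrative; adjust it to wherever you unpacked Jython):

```shell
# Put Jython's bin directory first on the PATH (path is illustrative)
JYTHON_HOME="$HOME/jython2.5.0"
export PATH="$JYTHON_HOME/bin:$PATH"

# The shell now resolves easy_install to Jython's copy first
echo "$PATH" | cut -d: -f1
```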
Okay, so now you have easy_install. What’s next? Let’s grab a Python library with it! For example, let’s say that we need to access Twitter from a program written in Jython, and we want to use the python-twitter library.
Without easy_install, you would go to the project page, read the building instructions and, after downloading the latest version and executing a few commands, you should be ready to go. Except that libraries often depend on other libraries (as is the case with python-twitter, which depends on simplejson), so you would have to repeat this boring process a few times.
With easy_install you simply run:
$ easy_install python-twitter
And you get the following output:
Searching for python-twitter
Reading
Reading
Best match: python-twitter 0.6
Downloading
Processing python-twitter-0.6.tar.gz
Processing dependencies for python-twitter
Searching for simplejson
Reading
Reading
Best match: simplejson 2.0.9
Downloading
Finished processing dependencies for python-twitter
The output is a bit verbose, but it gives you a detailed idea of the steps automated by easy_install. Let’s review it piece by piece:
Searching for python-twitter
Reading
Reading
Best match: python-twitter 0.6
Downloading
We asked for “python-twitter,” which is looked up on PyPI, the Python Package Index, which lists all the Python packages produced by the community (as long as they have been registered by the author, which is usually the case). The version 0.6 was selected since it was the most recent version at the time we ran the command.
Let’s see what's next on the easy_install output:
Nothing special here: it ran the needed commands to install the library. The next bits are more interesting:
Processing dependencies for python-twitter Searching for simplejson Reading Reading Best match: simplejson 2.0.9 Downloading
As you can see, the dependency on simplejson was discovered and, since it is not already installed it is being downloaded. Next we see:
The warnings are produced because the simplejson installation tries to compile a C extension which for obvious reasons only works with CPython and not with Jython.
Finally, we see:
Finished processing dependencies for python-twitter
Which signals the end of the automated installation process for python-twitter. You can test that it was successfully installed by running Jython and doing an import twitter on the interactive interpreter.
As noted above, easy_install will try to get the latest version for the library you specify. If you want a particular version, for example the 0.5 release of python-twitter then you can specify it in this way:
$ easy_install python-twitter==0.5
If new versions of python-twitter are released later, you can tell easy_install to upgrade it to the latest available version, by using the flag:
$ easy_install -U python-twitter
For debugging purposes, it is always useful to know where the bits installed using easy_install go. As you can stop of the install output, they are installed into <path-to-jython>/Lib/site-packages/<name_of_library>-<version>.egg which may be a directory or a compressed zip file. Also, easy_install adds an entry to the file <path-to-jython>/Lib/site-packages/easy-install.pth, which ends up adding the directory or zip file to sys.path by default.
Unfortunately, setuptools don’t provide any automated way to uninstall packages. You will have to manually delete the package egg directory or zip file and remove the associated line on easy-install.pth.
Virtualenv
Often, for the possibility of using such tools as virtualenv.
To use virtualenv with Jython, we first need to obtain it. The easiest way to do so is via the Python Package Index. As you had learned in the previous section, easy_install is the way to install packages from the PyPI. The following example shows how to install virtualenv using easy_install with Jython.
jython easy_install.py virtualenv
Once installed, it is quite easy to use the tool for creation of a virtual environment. The virtual environment will include a Jython executable along with an installation of setuptools and its own site-packages directory. This was done so that you have the ability to install different packages from the PyPI to your virtual environment. Let’s create an environment named JY2.5.1Env using the virtualenv.py module that exists within our Jython environment.
jython <<path to Jython>>/jython2.5.1/Lib/site-packages/virtualenv-1.3.3-py2.5.egg/virtualenv.py JY2.5.1Env New jython executable in JY2.5.1Env/bin/jython Installing setuptools............done. designate it to be used for our virtual Jython environment exclusively via the use of the None command. To do so, open up a terminal and type the following:
source <<path-to-virtual-environment>>/JY2.5.1Env/bin/activate
Once this is done, you should notice that the command line is preceded by the name of the virtual environment that you have activated. Any Jython shell or tool used in this terminal will now be using the virtual environment. This is an excellent way to run a tool using two different versions of a particular library or for running a production and development environment side-by-side. If you run the easy_install.py tool within the activated virtual environment terminal then the tool(s) will be installed into the virtual environment. There can be an unlimited number of virtual environments installed on a particular machine. To stop using the virtual environment within the terminal, simply type:
deactivate libraries into virtual environments.
It is useful to have the ability to list installations that are in use within a particular environment. One way to do this is to install the None utility and make use of its None command. Such information may also be useful for purposes such as documentation of the dependencies contained in your setup.py.
In order to install None, you must grab a copy of the latest version of Jython beyond 2.5.1 as there has been a patch submitted that corrects some functionality which is used by None. You must also be running with JDK 1.6 or above as the patched version of Jython makes use of the None module. The None module makes use of some java.awt.Desktop features that are only available in JDK 1.6 and beyond. To install None, use the ez_install.py script as we’ve shown previously.
jython ez_install.py yolk
Once installed, you can list the package installations for your Jython installations by issuing the None command as follows:
yolk -l Django - 1.0.2-final - non-active development (/jython2.5.1/Lib/site-packages) Django - 1.0.3 - active development (/jython2.5.1/Lib/site-packages/Django-1.0.3-py2.5.egg) Django - 1.1 - non-active development (/jython2.5.1/Lib/site-packages) SQLAlchemy - 0.5.4p2 - active development (/jython2.5.1/Lib/site-packages) SQLAlchemy - 0.6beta1 - non-active development (/jython2.5.1/Lib/site-packages) django-jython - 0.9 - active development (/jython2.5.1/Lib/site-packages/django_jython-0.9-py2.5.egg) django-jython - 1.0b1 - non-active development (/jython2.5.1/Lib/site-packages) nose - 0.11.1 - active development (/jython2.5.1/Lib/site-packages/nose-0.11.1-py2.5.egg) setuptools - 0.6c9 - active setuptools - 0.6c9 - active snakefight - 0.4 - active development (/jython2.5.1/Lib/site-packages/snakefight-0.4-py2.5.egg) virtualenv - 1.3.3 - active development (/jython2.5.1/Lib/site-packages/virtualenv-1.3.3-py2.5.egg) wsgiref - 0.1.2 - active development (/jython2.5.1/Lib) yolk - 0.4.1 - active
As you can see, all installed packages will be listed. If you are using yolk from within a virtual environment then you will see all packages installed in that virtual environment as well as those installed into the global environment.
Similarly to setuptools, there is no way to automatically uninstall virtualenv. You must also manually delete the package egg directory or zip file as well as remove references within easy-install.pth. | https://bitbucket.org/javajuneau/jythonbook/src/28b0486ae6c10c366f4da396f96f2ca91b228ba2/appendixA.rst?at=default | CC-MAIN-2015-18 | refinedweb | 1,878 | 50.84 |
Using Appropriate Status Codes With Each API Response
For a long time, I have thought about API request failures as falling into just two distinct categories: failure to communicate (ie. the server was down) or bad data (ie. invalid parameters). Failures to communicate with the server were out of my hands; as such, there was nothing I could do with those from a server standpoint. Requests with bad data, on the other hand, were certainly something within my domain of control and, happened to be something that I had strong feelings about.
Much of what I believe about API responses comes from my experience with SOAP-based web services. If you look at the SOAP request / response life cycle, you'll notice that SOAP responses always return a 200 status code, even when the request is invalid. Granted, the response might contain a SOAP fault (error) structure; but, from an HTTP standpoint, the request was successful.
I took this SOAP approach and extended it to my non-SOAP APIs. Typically, in my API architecture, the responses returned from the server always contain a 200 status code. Even if the request happens to be invalid, any error information would be contained in the body of a 200 response. This has been working well for me; but a few months ago, in a post about handling AJAX errors with jQuery, Simon Gaeremynck suggested that I use more appropriate status codes to describe the API response.
When using a variety of status codes in jQuery, only 200 responses will be handled by the success callback function; all other responses - 400, 404, 500, etc. - will be handled by the error callback function. I have to say, there is definitely something very delicious about having the success callback function handle only successful requests; I think it would keep the success work flow much cleaner. Definitely, this is something worth exploring; and, while it may have taken me a few months to get around to it, I think I like what I am seeing.
To test this, I set up a simple ColdFusion API page. For demo purposes, the API will do nothing but return a Girl object with an ID of 4. If the ID is not passed in, I will return a Bad Request response (status code 400). If the ID is passed in, but is not 4, I'll return a Not Found response (status code 404). And, if there is an unexpected error, I'll return an Internal Server Error (status code 500).
api.cfm
- <!---
- Set up a default response object. This is not the object that
- we are going to send back to the client - it is just an
- internal value object that we will use to prepare the response.
- --->
- <cfset apiResponse = {
- statusCode = "200",
- statusText = "OK",
- data = ""
- } />
- <!---
- We're going to wrap the entire processing algorithm in a try /
- catch block so that we can catch any request and processing
- errors and return the appropriate response.
- --->
- <cftry>
- <!--- Try to param the URL varaibles. --->
- <cftry>
- <!--- Param the URL paramters for the request. --->
- <cfparam name="url.id" type="numeric" />
- <!--- Catch any malformed request errors. --->
- <cfcatch>
- <!--- Throw a local error for malformed request. --->
- <cfthrow
- type="BadRequest"
- message="The required parameter [ID] was not provided, or was not a valid numeric value."
- />
- </cfcatch>
- </cftry>
- <!--- ------------------------------------------------- --->
- <!--- ------------------------------------------------- --->
- <!---
- Check to see if the given ID is a valid ID. For demo
- purposes, we are going to only allow the ID - 4.
- --->
- <cfif (url.id neq 4)>
- <!---
- The given ID does not correspond to a valid value in
- our "database". Throw an item not found error.
- --->
- <cfthrow
- type="NotFound"
- message="The requested record with ID [#url.id#] could not be found."
- />
- </cfif>
- <!--- ------------------------------------------------- --->
- <!--- ------------------------------------------------- --->
- <!---
- If we made it this far, the request is in a valid format
- and the parameters are accurate. Let's set up the response
- values (for our demo, we are going to pretend that we are
- pulling out of a database).
- --->
- <cfset apiResponse.data = {
- id = url.id,
- name = "Joanna"
- } />
- <!--- ------------------------------------------------- --->
- <!--- ------------------------------------------------- --->
- <!---
- If there was a problem here, the request itself was
- malformed (meaning, either the require parameters were
- not sent, or they were not valid). This should result in
- a status code of "400 Bad Request".
- --->
- <cfcatch type="BadRequest">
- <!---
- Since this was a malformed request, let's set the
- status code to be a 400.
- --->
- <cfset apiResponse.
- <cfset apiResponse.
- <!--- Set the data to be an array of error message. --->
- <cfset apiResponse.data = [ cfcatch.message ] />
- </cfcatch>
- <!---
- If the record could not be found, the parameters were
- correctly formatted, but were not accurate. This should
- result in a status code of "404 Not Found".
- --->
- <cfcatch type="NotFound">
- <!---
- Since this request did not point to a valid record,
- let's set the status code to be a 404.
- --->
- <cfset apiResponse.
- <cfset apiResponse.
- <!--- Set the data to be an array of error message. --->
- <cfset apiResponse.data = [ cfcatch.message ] />
- </cfcatch>
- <!---
- If we are catching an error here, it means that an
- unexpected exception has been raised. This should result
- in a status code of "500 Internal Server Error".
- --->
- <cfcatch>
- <!---
- Since something unexpected went wrong, let's set the
- status code to be a 500.
- --->
- <cfset apiResponse.
- <cfset apiResponse.
- <!--- Set the data to be an array of error message. --->
- <cfset apiResponse.data = [ cfcatch.message ] />
- </cfcatch>
- </cftry>
- <!---
- At this point, we have processed the request (either
- successfully or unsuccessfuly); we now have to return a
- value to the client. First, let's serialize the response.
- --->
- <cfset responseString = serializeJSON( apiResponse.data ) />
- <!--- Convert the response to binary for streaming. --->
- <cfset responseBinary = toBinary( toBase64( responseString ) ) />
- <!--- Set the status code and text based on the processing. --->
- <cfheader
- statuscode="#apiResponse.statusCode#"
- statustext="#apiResponse.statusText#"
- />
- <!---
- Set the content length so the client knows how much data
- to expect back.
- --->
- <cfheader
- name="content-length"
- value="#arrayLen( responseBinary )#"
- />
- <!--- Stream the content back to the client. --->
- <cfcontent
- type="application/x-json"
- variable="#responseBinary#"
- />
As you can see, I create an initial response data object, apiResponse. This is not the object that I end up streaming back to the client; rather, it is just an object that I use to help define my API response. Ultimately, I am only returning "data" with my response - I use the status code and the status text to define the response headers. To determine those response headers, I am simply using a local error handling work flow to throw and catch errors as needed.
With that API in place, I then set up a simple jQuery test page that would make various requests to the API with a variety of data values. Each data value should result in a different type of response (ie. 200, 400, 404).
- <!DOCTYPE HTML>
- <html>
- <head>
- <title>Using Appropriate API Status Codes</title>
- <style type="text/css">
- #output {
- border: 1px solid #999999 ;
- padding: 10px 10px 10px 10px ;
- }
- #output p {
- margin: 3px 0px 3px 0px ;
- }
- </style>
- <script type="text/javascript" src="jquery-1.4.2.js"></script>
- <script type="text/javascript">
- // When the DOM is ready, initialize the scripts.
- jQuery(function( $ ){
- // Get the link references.
- var badRequestLink = $( "a[ rel = '400' ]" );
- var inaccurateRequestLink = $( "a[ rel = '404' ]" );
- var goodRequestLink = $( "a[ rel = '200' ]" );
- // Get our output reference.
- var output = $( "#output" );
- // This is the function that will handle all of the
- // AJAX requests.
- var makeAPIRequest = function( data ){
- // Make the API call with the given data.
- $.ajax({
- type: "get",
- url: "./api.cfm",
- data: data,
- dataType: "json",
- // This method will handle 200 responses only!
- success: function( response ){
- // Show the successful response.
- showSuccess( response );
- },
- // This method will handle all non-200
- // reponses. This will include 400, 404, and
- // 500 status codes.
- error: function( xhr, errorType ){
- // Check to see if the type of error is
- // "error". If so, then it's an error
- // thrown by our server (if it is a
- // "timeout", then the error is in the
- // commuication itself).
- //
- // NOTE: Because this is an error, jQuery
- // did NOT parse the JSON response; as
- // such, we have to do that manually.
- if (errorType == "error"){
- // Show the error.
- showError(
- xhr.status,
- xhr.statusText,
- $.parseJSON( xhr.responseText )
- );
- }
- }
- });
- };
- // I show error responses.
- var showError = function( statusCode, statusText, errors ){
- output.html(
- ("<p>StatusCode: " + statusCode + "</p>") +
- ("<p>StatusText: " + statusText + "</p>") +
- ("<p>Errors: " + errors.join( ", " ) + "</p>")
- );
- };
- // I show success responses.
- var showSuccess = function( girl ){
- output.html(
- ("<p>ID: " + girl.ID + "</p>") +
- ("<p>Name: " + girl.NAME + "</p>")
- );
- };
- // Bind the bad request
- badRequestLink.click(
- function( event ){
- // Prevent the default action (location).
- event.preventDefault();
- // Make the API call without any data.
- makeAPIRequest( {} );
- }
- );
- // Bind the inaccurate request
- inaccurateRequestLink.click(
- function( event ){
- // Prevent the default action (location).
- event.preventDefault();
- // Make the API call with inaccurate data.
- makeAPIRequest( { id: 1 } );
- }
- );
- // Bind the good request
- goodRequestLink.click(
- function( event ){
- // Prevent the default action (location).
- event.preventDefault();
- // Make the API call with good data.
- makeAPIRequest( { id: 4 } );
- }
- );
- });
- </script>
- </head>
- <body>
- <h1>
- Using Appropriate API Status Codes
- </h1>
- <p>
- Make a <a href="#" rel="400">Bad Request</a>.
- </p>
- <p>
- Make an <a href="#" rel="404">Inaccurate Request</a>.
- </p>
- <p>
- Make a <a href="#" rel="200">Good Request</a>.
- </p>
- <div id="output">
- <em>No response yet.</em>
- </div>
- </body>
- </html>
As you can see, each of the three links - Bad Request, Inaccurate Request, Good Request - triggers an AJAX request to the API. These AJAX requests define both a success and an error callback handler. The success callback handles the 200 responses only. The error callback, on the other hand, will handle our 400 and 404 responses (and 500, which I am not demoing). Unfortunately, jQuery does not pass any response data to the error callback handler. As such, we need to manually gather the responseText from the XMLHTTPRequest object and use jQuery to parse it into valid Javascript objects for us.
As someone who was adamantly against this type of approach in the past, I have to admit that there is something about this that I find very appealing. It uses the callback handlers in a way that feels much more intentful; in fact, the whole architecture, both server-side and client-side, feels much more intentful. I think Simon was right - this is a cleaner approach to handling errors in AJAX. Thanks Sim
Good to see you came to the light side ;)
As far as jQuery is concerned, I agree that it isn't very consistent, but if you keep it in mind it's not a big concern (IMHO).
I also find that an API gets much cleaner this way.
Glad to help,
Simon
Very nice demo! The only thing is the returned message by the server is never .. user-friendly. It's why I never use the server message and I change it for a custom message.
Nice job!
@Simon,
Thanks for the help :) I agree - the API does feel cleaner this way; and, when something has more conscious intent behind it, I typically feel it to be the more appropriate approach.
Also, I just linked your full name - your original comments didn't have your URL.
Use this HTTP map to articulate what you want to say.
Very comprehensive :)
ps. ... and start documenting corMVC it looks awesome.
@Stephan,
Agreed! That's why I didn't rely on the status text; rather, I passed back JSON data that I had to manually parse into a Javascript object. This way, even with something like a 400 (Bad Request), I could still theoretically return an array of errors for something like form validation..
Do these status codes get written into the web server logs?
@dotnetCarpenter,
I'll definitely check that out... and I'll start working on doing something more with corMVC :)
@Ryan,
Thanks, I'll check that out as well.
@Marc,
Any logging, you'll have to do manually. Since no error actually bubbles up to the application - all catching is done locally - there is no exception from the application view point. If you need to be logging things like invalid API calls, that just needs to be part of your business logic (I assume).
Probably, you'll want to create a more CFC-based API that can have core methods for logging and error problems...
Marc
@Marc,
I'm not too learned in server logs; but, as far as I know, only uncaught errors are logged. Since these are being caught as part of the API work flow within the application, I don't believe the server will log them. But that is my hypothesis - not fact.
Ben, first nice post.
I have done the same thing for years, but I actually never thought about returning status codes. I would a struct method that would define ok or failed, then it would be upto the client to then decide from there what to do.
I might have a rethink, as this does seem a bit easier than trying to pick a number and have that define what the problem is.
@Marc,
You should never *generate* 500's in your application on purpose. Your application should always generate 200's, 300's, or 400's. If your application fails in a spectacular way, and cannot recover from failure, then *your appserver (CF) or webserver (Apache)* will respond with a 500.
200's: the request was clean and the desired output is contained within the current HTTP response.
300's: the request was clean and the desired output is *not* contained within the current HTTP response, but is at some other location.
400's: the request was erroneous and could not be completed. Reasons include: bad input data, bad input format, not authenticated, not authorized, etc.
500's: the server screwed up royally. There is no reason for your application to generate this. If you are writing a framework for hosting applications, and an application written for the framework screws up royally, then the framework might return this status. Otherwise, CF or Apache might return this status if there was an unhandled exception, if CF was itself buggy, etc. If your application is generating 500's on purpose, what that means to me is that you need to back to your application and rewrite it so that it starts handling errors in a sane fashion - e.g., respond instead with a 400 if the input data was bad, and log the error behind the scenes.
Cheers,
Justice
@Justice,
That's a really good point. In my code, I basically assumed that the default catch would return a 500 since I had no idea what might be throwing that particular error. But, the biggest problem is that if my app does return a 500 error explicitly AND the server might return a 500 error on its own (critical error), then the client needs to understand potentially two different flavors of 500 error... which is not good.
Point well taken, thanks.
course!!) to use one layer's error handling (the web service) in another layer's application (the web application server, ColdFusion.
My question was basically the same -- if your app is throwing web-server-like errors (especially 500!!), and they show up in the web server logs, how will you know the difference between a problem with your web server and a (handled) error by your app?
I don't have a complete grip on the whole concept here and my opinion could be (very easily) swayed, but the 500 caught my eye first.
@Ben, @Marc,
Generally speaking, if you catch an exception and recover from it and handle it in some fashion, that typically means the server did not experience a catastrophic error. It means the user did something stupid, tricky, or otherwise not allowed for some reason or other. The HTTP spec dictates that when the user is at fault, the response status should be in the 400's.
But, to take your scenarios, if you have a global catch which logs an error and rethrows or which logs an error and displays a nifty server-error page, you actually should respond with a 500. That falls under the "framework" case that I outlined above. In a global catch, it's not entirely clear that the web application, or at least this particular request/response, "recovered" from the error well enough to continue processing. Here, the server is actually at fault, not the user. And when the server is at fault, the HTTP spec says to respond with a 500's status.
You can certainly make a distinction between exceptions caught in a global catch and uncaught exceptions by setting an X-Header in your response. An X-Header is any header that begins with "X-". The HTTP spec says that the application is permitted to set any such header in any way it feels like. If you catch an exception in a global catch, feel free to set a header like <cfheader name="X-Exception-Interception" value="app-global-catch">. To HTTP, it is meaningless - but you can still see the X-Header in Firebug and you can still access it with jQuery.
Cheers,
Justice
@Justice,
That's interesting about the "X-" headers; I had not heard that before (my understanding of headers is limited).
@Marc, @Justice,
Just to get on the same page about some of the terms we are using, when we set a Header value, we're not actually throwing an exception. Granted, in my example, I am throwing an error to catch an error; but, the simple act of setting a header is *not* the same as raising an exception.
I can agree with you guys that using a 500 is probably not appropriate as @Justice says, since it's not a user-initiated problem. But, I can't see any issues with returning 400's errors. Also, keep in mind that I am never throwing a 400's error - I'm throwing custom errors that are being trapped and then later translated into header values.
@Andrew,
Yeah, exactly, I used to do it that way as well - all of my API responses had:
- Success (true / false)
- Errors (arary)
- Data (anything)
... and then the client had to figure out what to do based on the Sucecss / Errors properties.
But, there's something I really like about this status-code based approach since it seems more inline with what is actually going on from the client's point of view.
, change status codes of the response.
In our particular case, we add an additional layer to web facing APIs which are "API Managers". This is what third parties interact with and what changes the headers/sends errors (dumbs things down for third party developers and implements one or more interfaces). Based upon the error caught, the manager changes the response status/headers and also send information about the issue via XML/JSON.
If you use jQuery's ajax function rather than post/get you can really do some great error catching.
@Rocky,
So, 500 error aside (as I think we are all agreeing that this should not be programmatically generated based on user input), are we basically agreeing? It sounds like you are using your API managers to send back a variety of 400 errors based on the user input, which is what I was exploring?
really slick once it's done.
Our server side application stack looks something like:
API Library
System level APIs: Application agnostic. Much like the java package in Java or system namespace in .Net. We moved away from UDFs and custom tags, instead we use these system API libraries we have developed that are shared across many applications. For instance, we have a text datatype that, once initialized, can perform many UDF type functions via methods. Ex: system.security.Cryptography or system.datatype.simple.Text
Application level APIs: Application specific APIs which utilize the system level APIs. This is where the heavy lifting and core business rules are housed. Ex: us.co.k12.thompson.sis.user.User or us.co.k12.thompson.sis.user.UserFactory
Remote API Managers: When we want to expose functionality within the application level APIs, we add another layer to simplify things for third-parties, convert errors that would normally be handled in our Web Framework to what I was speak to above.
Web Applications
Core Web Framework: Custom developed MVC framework that provides basic security, variables, caching, and other core functionality/structure. The framework is based on serving "files", not just web pages. It can just as easily deliver jpg, pdf, xls, as it would xhtml. None of the code is exposed directly to the web, just a single cfm file that reroutes the request to the handlers (for extra security). Again, uses 404 handling to mimic any file request that was made that wasn't handled by the web server. The framework utilizes the system level APIs and does some caching of the APIs for performance (most of them can be cached in the application scope).
Request handlers (i.e. webpages in most cases): What most people code when they develop CF. These utilize the system and application level APIs.. Since the core logic is tied up in the APIs, I can deliver an application functionality to mobile users, standard web users, even third parties mush easier. Just change the presentation layer stuff and you're all set! :)
Boy, I am way off topic. Sorry about that!
@Rocky,
No worries re: going off topic. Sounds like a very interesting architecture you've got going on over there. From everything that people tell me, the more API-based your architecture gets, the more scalable and maintainable your application becomes. Hopefully, as I start to get more and more into OOP, that will happen naturally.
We have a similar setup as Rocky.
We have a backend which only exposes REST services.
It doesn't output any html, js, jsp, .. . (We are capable of running jsp, groovy, jsf, ruby, jython, etc.. but we really choose not to).
We also have a frontend which exists solely out of html/css and js.
Whenever something needs to happen that requires a call to the "database" it goes trough a REST api.
For example:
Take a blog post with some comments, and a user want's to comment on it.
All the frontend related code and markup (ie HTML, CSS, images and JavaScript) are served straight from Apache HTTPD (this is extremely fast).
When the HTML is loaded we execute some JS that retrieves all the "data" that needs to be filled in on the page from a REST service. This will be the blog post + comments.
The user types in his name, email, website and comment and clicks 'Post Comment'. This does an AJAX call to another REST service which adds the comment in the db. If the call was succesfull, we add the comment to the DOM of the page via JS.
A nice benefit of this, is that all these actions only require a single page load.
If we would want to run this on a mobile device we don't have to change anything on our backend since it is exposed via REST. EVERYTHING idea is great, especially for a web "app" that doesn't care about SEO... but for a public website, it seems that could be a limiting factor in SEO and indexing.
@Marc,
For our project it isn't really a requirement, since most (if not all) of our resources are private and related to the logged in user.
Anyway, IIRC google does execute javascript before indexing the page.
A good example (for us at least ;)) might be paging.
Imagine you have a blogpost which has ~500 comments. We show the first/last 50 and then load the following 50 via AJAX. No page reload.
It is up to the implementation to leave a URL that can later be found again.
Something along the lines of
example.com/blog/Using-Appropriate-Status-Codes-With-Each-API-Response.htm#50
Where the #50 resembles the starting point to display comments.
It's not really a clean way, but it is a way to place a URL in the navigation bar of the browser that the client can reliably copy and share.
@Simon,
Sounds awesome; that's the kind of architecture that I'm trying to learn more about. My recent series of "FLEX on jQuery" is meant to do just that. I am trying to learn more about FLEX so I can create richer, thicker clients that rely more on APIs rather than old-school request/response life cycles.
hello ben,
i have implemented above code in project but i m getting error "There was a problem with the API" can you help me
I think the API does feel cleaner this way; and, when something has more conscious intent behind it,
Thanks for this great post.
Note that this comment is not technically correct:
"When using a variety of status codes in jQuery, only 200 responses will be handled by the success callback function;"
jQuery considers a 2XX status or 304 status to be successful.
If you search for this string:
"if ( status >= 200 && status < 300 || status === 304 )"
in jQuery's source code:
you'll see how jQuery determines what's successful. This is important, as many APIs return 201 for newly created resources, and this should be considered a success. | http://www.bennadel.com/blog/1860-using-appropriate-status-codes-with-each-api-response.htm?_rewrite | CC-MAIN-2015-27 | refinedweb | 4,181 | 63.9 |
Hiding instructions on CISC CPUs
By nike on Jun 08, 2007
Today I'm somewhat playful, so for fun wrote a little proggy which fiddles with dynamic code generation and instruction hiding. Instruction stream for CISC CPU (like x86 amd amd64) isn't so well defined as it is for RISC CPUs. Thus games could be played with hiding of instructions in the instruction stream. Actually idea for this hack first came to my mind when I realized that we can jump in the middle of instruction on x86 CPU. In my example I hide instruction
xor edi, ediinside of machine code for
mov eax, imm. Moreover, this code really sets
eaxregsiter to the immediate value expected. To see the point, start the program, stop just before jump to the generated code (b jumper.c:53) and examine code in buffer (x /3i buf). You'll see:
0xb7f9e000: jmp 0xb7f9e003 0xb7f9e002: mov $0xfbebff31,%eax 0xb7f9e007: retNow stat executing code step by step with
sicommand (and ask gdb to show what happens with
display /i $pc). You'll see the magic in action. Also you may check that EDI value changed just by examining output.
#include <stdio.h> #include <sys/mman.h> #include <unistd.h> #include <errno.h> unsigned char\* make_exec_buf() { int ps = getpagesize(); unsigned char\* rv = (unsigned char\*)mmap(NULL, ps, PROT_EXEC|PROT_READ|PROT_WRITE, MAP_PRIVATE | MAP_ANON, -1, 0); if (rv == (unsigned char\*)-1) { perror("mmap"); return NULL; } return rv; } void write_magic(unsigned char\* buf) { #define PUT_BYTE(b) buf[idx++] = (unsigned char)b; #define PUT_INT(b) \*(int\*)(buf+idx) = b; idx += 4; int idx = 0; PUT_BYTE(0xeb); // jmp hidden PUT_BYTE(1); visible: PUT_BYTE(0xb8); // mov eax, imm32 hidden: PUT_BYTE(0x31); // xor edi, edi PUT_BYTE(0xff); PUT_BYTE(0xeb); // jmp visible, to make sure what should be don is done PUT_BYTE((-5) & 0xff); PUT_BYTE(0xc3); // ret } inline void set_edi(int v) { __asm__ __volatile__ ("movl %0, %%edi\\n" : : "r"(v) ); } inline int get_edi() { int rv; __asm__ __volatile__("movl %%edi, %0\\n" :"=r"(rv)); return rv; } int main() { int rv; unsigned char\* buf = make_exec_buf(); write_magic(buf); set_edi(0xcafe); printf("before edi=%x\\n", get_edi()); rv = ((int (\*)())buf)(); printf("after edi=%x got %x\\n", get_edi(), rv); return 0; } | https://blogs.oracle.com/nike/entry/hiding_instruction_on_cisc_cpus | CC-MAIN-2016-22 | refinedweb | 363 | 66.98 |
mmap_device_io()
Gain access to a device's registers
Synopsis:
#include <stdint.h> #include <sys/mman.h> uintptr_t mmap_device_io( size_t len, uint64_t io );
Since:
BlackBerry 10.0.0
Arguments:
- len
- The number of bytes of device I/O memory that you want to access. It can't be 0.
- io
- The address of the area that you want to access.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
The mmap_device_io() function maps len bytes of device I/O memory at io and makes it accessible via the in*() and out*() functions in <hw/inout.h>.
In order to map physical memory, your process must have the PROCMGR_AID_MEM_PHYS ability enabled. For more information, see procmgr_ability().
Returns:
A handle to the device's I/O memory, or MAP_DEVICE io for len bytes is invalid.
- EPERM
- The calling process doesn't have the required permission; see procmgr_ability().
Classification:
Caveats:
You need I/O privileges to use the result of the mmap_device_io() function. The calling thread must:
- have the PROCMGR_AID_IO ability enabled. For more information, see procmgr_ability().
- call ThreadCtl() with the _NTO_TCTL_IO command to establish these privileges.
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/m/mmap_device_io.html | CC-MAIN-2015-27 | refinedweb | 216 | 61.43 |
Member
2 Points
Sep 02, 2007 10:23 AM|alvin_hongkong|LINK
Hi All,
I have some questions about the concept of MPF, I am getting confused to the different namespaces in SDK for examples I don't know what are the different between "Hosted Email 2007", "Exchange 2007" and "Managed Email 2007" coz I found those namspaces seem almost provide the sample functions.
And also, I am trying to write some functions for simple provisioning tasks via MPS Webservice but immediatly I encountered some performance issuse in which if i submit a webservice request to hostedEmail2007.getMailbox() it takes a minute get the result. It's obviously inacceptable and comparing to provtest it seems the bottle neck is on the MPS Webservice, any advice to improve the overall performance? (ie using MPSClientWrapper or Keeping a copy of provisioning data in SQL Server database or Directly talk to relevant native service API)
Alvin.
Sep 04, 2007 09:19 AM|sdupas|LINK
Hello Alvin,
In MPF (Microsoft Provisioning Framework), you have the namespaces which contain procedures that define logic, like any programming language. You have noticed that there are different "types" of namespaces (Managed, Hosted, Provider). It is actually a naming convention which is used in the Hosted Solutions to provide different layer of logic (from low-level interface to high business logic):
The "Hosted" Namespaces are the preferred ones if you want to submit procedures to the MPF engine.
You can have a look at the MPS SDK, it will give you some information on the MPS development:
HTH,
Samuel
Member
2 Points
2 replies
Last post Sep 05, 2007 05:33 AM by alvin_hongkong | https://forums.asp.net/t/1153731.aspx?MPF+Concept | CC-MAIN-2021-25 | refinedweb | 274 | 50.3 |
The Difference Between Public and Private in Java
Jeremy Grifski
Updated on
・7 min read
Coding Tangents (10 Part Series) Science code more secure from hackers. While this is largely untrue, there is some merit in the argument. Sure, nothing is stopping someone from using a feature like reflection to access private fields and methods. That said, access modifiers can help protect the average user from corrupting an object’s state.
Think about the windshield wiper example. When we turn on our wipers, we expect both of them to move at the same speed. Without restricted access, we could change the default speed of one of the wipers. Then, the next time we’d go to turn on the wipers… BAM! To avoid that problem, we encapsulate (or hide) the fact that we have two individual wipers in a single exposed (public) method.-private (default), and
protected. Each keyword offers a level of code access given by the following table:
In other words, we can rank the keywords in order of least accessibility:
- private
- package-private (default)
- protected
- public
For the duration of this tutorial, I will not be exploring the
package-private.
User-Defined Classes
Up until this point, we've been talking mostly about the philosophy of access modifiers, but what are the real world consequences and how do we actually use them? To help clarify those questions, let's take a moment to write some of our own classes which try to demonstrate the practical differences between
public and
private..
Windshield Wipers
Now, let's take this concept a step further by actually implementing the windshield wipers in Java (at least at a high level). To start, we'll make a car class that has a
private method for one wiper and a
public method for both wipers:
public class Car { private boolean[] wipers; public Car() { this.wipers = new boolean[2]; } private void turnOnWiper(int index) { this.wipers[index] = true; } public void turnOnWipers() { for (int i = 0; i < this.wipers.length; i++) { this.turnOnWiper(i); } } }
Here, we've created a Car class that stores a
private array of wiper states. For each wiper, their state is either on (
true) or off (
false). To turn a wiper on, we've written a
private method that lets you turn on a wiper by its index. Then, we bring all that together with a
public method that iterates through all the wipers and turns them all on.
Now, ignoring the realistic problem here which is that the wipers are being turned on in series, not parallel, we have a pretty solid solution. If someone were to instantiate a Car, they would only be able to turn on all the wipers at once.
public class CarBuilder { public static void main(String[] args) { Car car = new Car(); car.turnOnWipers(); // Turns on wipers! car.turnOnWiper(1); // Compilation ERROR car.wipers[0] = false; // Compilation ERROR } }
Fun fact: the user doesn't even know how the wipers are implemented, so we have full control to change the underlying architecture at any time. Of course, we still have to provide the same functionality, but how we get there is up to us. In other words, we could potentially change the wiper array to store integers. Then, for each wiper, the integer would correlate to speed.
Now, why don't you try expanding the class yourself. For example, I recommend adding a method to turn off the wipers. You may want to then write a new private method for turning off individual wipers, or you might find it makes more sense to refactor the
turnOnWiper method to take a boolean as well. Since the user never sees those methods, you have full control over the underlying implementation. Happy coding!!
Coding Tangents (10 Part Series)
Hi there...
Thanks for sharing this. I always like to find material on Basic subject that May help even ppl who dont program for work like me. I think that I grasped what is the use. Stille have 2 question: first is not really related to this subject but i Am not plain satisfied If i dont understand every little piece of code i read... how the pubblic method is supposed to know the lenght of the wipers array if we didnt set any? Am i losing something? Second question, why would i As a developer wanted this to be hidden in the code if only other developer are going to look at the code ? I mean final user Will Never see the code he will just click on a button or perform An action that will trigger my function/class... sorry If these are dumb questions but they really come from A layman.
Ooh these are great questions. It's my mistake on the Wiper example. In an effort to provide a couple method examples, I neglected to provide a constructor. You can assume the array of wipers is defined (length could be anything), but I'll update the example to include a constructor.
To answer your second question, there's a bit more nuance. You're totally right in your example. If your final application is a graphical user interface (GUI), you might not care about public vs. private. After all, the only thing you're exposing is a button.
However, if you're going to release the underlying class structure to the public, you may want to limit which methods you expose. There are a few reasons for that. For one, hiding certain functionalities allows you to limit how much control the user has over the underlying structures. In other words, you protect them from themselves. The other reason is that you're free to rework the underlying structure without messing up your public interface. For example, maybe you create a new data structure which has an exposed sorting method. You're free to change how sorting is accomplished as long as it provides the same functionality to the user.
Also, I just think it's good practice regardless. Having a clean and clearly defined interface between classes makes for a lifetime of painless maintainability.
The HellowWorld example is not the best fit to show how public and private work. It's much more clear with typical data structures like Stack, List etc. or with domain entities like Order, Invoice etc.
I appreciate the feedback, but I’m not sure I agree (at least with the first part). If we’re talking beginners, they aren’t going to be able to handle data structures beyond arrays (maybe ArrayLists, LinkedLists)—especially data structures in java which leverage generic types. If they knew those things already, then they probably wouldn’t need this article.
As for domain entities, to be quite honest, I’ve never heard that term. Granted, if we’re just talking about user-defined classes, then that’s fair. I just felt like it was important to come full circle on the example at the beginning of the article. When I get the chance, I’ll extend the last section to include a better example. Thanks! | https://dev.to/renegadecoder94/the-difference-between-public-and-private-in-java-3g2e | CC-MAIN-2019-43 | refinedweb | 1,177 | 64 |
table of contents
- buster 4.16-2
- buster-backports 5.04-1~bpo10+1
- testing 5.10-1
- unstable 5.10-1
NAME¶sched_setscheduler, sched_getscheduler - set and get scheduling policy/parameters
SYNOPSIS¶
#include <sched.h>
int sched_setscheduler(pid_t pid, int policy, const struct sched_param *param);
int sched_getscheduler(pid_t pid);
DESCRIPTION¶The sched_setscheduler() system call sets both the scheduling policy and parameters for the thread whose ID is specified in pid. If pid equals zero, the scheduling policy and parameters of the calling thread will be set.:
- SCHED_OTHER
- the standard round-robin time-sharing policy;
- SCHED_BATCH
- for "batch" style execution of processes; and
- SCHED_IDLE
- for running very low priority background jobs..
For each of the above policies, param->sched_priority specifies a scheduling priority for the thread. This is a number in the range returned by calling sched_get_priority_min(2) and sched_get_priority_max.
RETURN VALUE¶On success, sched_setscheduler() returns zero. On success, sched_getscheduler() returns the policy for the thread (a nonnegative integer). On error, both calls return -1, and errno is set appropriately.
ERRORS¶
-¶POSIX.1-2001, POSIX.1-2008 (but see BUGS below). The SCHED_BATCH and SCHED_IDLE policies are Linux-specific.
NOTES¶Further details of the semantics of all of the above "normal" and "real-time" scheduling policies can be found in the sched(7) manual page. That page also describes an additional policy, SCHED_DEADLINE, which is settable only via sched_setattr(2)..) | https://manpages.debian.org/buster/manpages-dev/sched_setscheduler.2 | CC-MAIN-2021-39 | refinedweb | 229 | 51.24 |
> Your filesystem handling code is completely superflous (and buggy). Please> remove all the code dealing with chroot-lookalikes. In your userland script> you simpl have to clone(.., CLONE_NEWNS) to detach your namespace from your> parent, then you can lazly unmount all filesystems and setup your new namespace> before starting the jail. The added advantage is that you don't need any> cludges to keep the user from exiting the chroot.I definately would prefer to use namespaces. I had originally wanted todo a copy_namespace() in the module. That function is not exported,though. Is doing that in user-space really the right way to do it?thanks,-serge-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2004/10/10/33 | CC-MAIN-2016-50 | refinedweb | 136 | 60.01 |
This is part of the Ext JS to React blog series. You can review the code from this article on the Ext JS to React Git repo.
In this post we’ll talk about how to get started with React. Coming from the Sencha ecosystem, you may be used to launching a project using Sencha Cmd to create a project folder, configuration file, and some initial file and folder scaffolding. The React universe is a bit less prescriptive in nature than with Ext JS. However, we do have access to a helpful utility that will offer us the same type of starting point: create-react-app.
Facebook Incubator’s
create-react-app module installs globally on your Windows, Linux, or macOS environment.
create-react-app generates a project folder, self-contained web server instance, unit testing framework, and an initial files and folder structure. The starter project hides the build config files resulting in less project overhead. The following steps will create an initial project. The following steps will create a starter app. Then we’ll incrementally add small React files to demonstrate the benefit of using
create-react-app to learn the React ecosystem.
Installing create-react-app
The
create-react-app README does a great job of detailing how to install and run
create-react-app. For a full walkthrough, refer to the source documentation. For our purposes, we’ll hit on some of the high level steps sufficient to get us going. Install
create-react-app using
npm install.
Note: If you do not already have a Node + NPM environment, please refer to Mozilla’s guide on “Setting up a Node development environment”.
Use the following terminal commands in the folder of your choosing to install
create-react-app and start its server instance:
npm install -g create-react-app create-react-app my-app cd my-app/ npm start
These steps would be similar to installing Sencha Cmd and running:
sencha app init --ext@6.5.1 --modern MyApp my-app sencha app watch
The
npm start command starts the server instance which you can view within your browser by navigating to. You should see something like the following in the browser:
Generated Files
The generated application’s
index.html page is located in
{app-root}/public/index.html. You shouldn’t need to edit this file in most situations. Its primary use, for our purposes, is to create the target
div that our React app will render to:
<div id="root"></div>
Taking a look at
{app-root}/src/index.js we see the following line that renders our output from
{app-root}/src/App.js to the “root” element defined in the
index.html:
ReactDOM.render(<App />, document.getElementById('root'));
What we’re seeing in the browser is a result of the React component class found in
{app-root}/src/App.js. The
render() method from
App.js returns the header, logo, and subsequent body text:> </div> ); } } export default App;
The underlying build process links the final
index.html page to the two CSS files responsible for application styling. Therefore, the overarching page styling is provided by the CSS rules defined in
{app-root}/src/index.css:
body { margin: 0; padding: 0; font-family: sans-serif; }
The elements output by the React component in
App.js are then styled by CSS rules described in
{app-root}/src/App.css.
); } }
The build’s CSS assets include
App.css by using an
include statement in the
App.js file defining the React component meant to use the style rules. The import line in the
App.js file incorporates the style rules for each build:
import './App.css';
Modifying Project Files
With “npm start” running, viewing changes you make to the starter app is easy. Simply add or modify content in your project and save it. The browser automatically refreshes on save to show your changes using Webpack’s Hot Module Replacement. Let’s update some text and the logo and save our project. Notice how
create-react-app publishes our changes to the browser real-time.
In
{app-root}/src/App.js let’s change:
<h2>Welcome to React</h2>
to
<h2>Welcome to Modus Create</h2>
Create a file named
{app-root}/src/logo2.svg with the following:
<svg viewBox="0 0 50 46" xmlns="" fill- <path fill="none" d="M0 0h49.77v45.37H0z"/> <g fill- <path d="M24.9 23.97c-.32 0-.63 0-.93-.03-.14 2.32-.6 4.54-1.35 6.64.75.07 1.5.1 2.27.1 12.35 0 22.42-9.86 22.76-22.16-2.6.34-5.02 1.32-7.07 2.78-1.58 7.23-8 12.67-15.7 12.67" fill="#ef3c23"/> <path d="M26.96 22.6v.05c2.28-.32 4.38-1.15 6.2-2.38 1.66-6.84 7.4-12.07 14.52-12.94-.06-2.56-.54-5.02-1.38-7.3C35.35 1.7 26.96 11.17 26.96 22.6" fill="#981936"/> <path d="M49.77 41.37c-7.35 0-13.62-4.65-16.03-11.17-1.57.63-3.2 1.1-4.92 1.37 3.5 8.12 11.56 13.8 20.95 13.8v-4z" fill="#f8981c"/> <path d="M0 45.37c12.6 0 22.8-10.22 22.8-22.8 0-8.63-4.77-16.13-11.8-20-.65 1.65-1 3.44-1 5.32 0 .9.08 1.8.24 2.7 4.16 3.1 6.85 8.06 6.85 13.66 0 9.44-7.68 17.1-17.1 17.1v4z" fill="#0a679b"/> <path d="M15.3 28.58c.38-1.37.6-2.82.6-4.3 0-1.14-.13-2.25-.36-3.3-4.08-2.93-6.74-7.7-6.74-13.1 0-2.07.4-4.06 1.12-5.88-2-.97-4.17-1.65-6.44-2-.9 2.46-1.4 5.1-1.4 7.88 0 9.17 5.4 17.07 13.22 20.7" fill="#179b48"/> </g> </svg>
Next, edit
{app-root}/src/App.js and change the import logo statement to include
logo2.svg instead of
logo.svg to see the image asset update real-time as you save.
Then, let’s halt the spinning behavior of the logo by commenting out the following line in
{app-root}/src/App.css:
/*animation: App-logo-spin infinite 20s linear;*/
Save all files and switch back over to your browser to see the changes:
Build Errors
Build errors display in both the browser viewport and the browser dev tools console. Non-critical warnings will display in the dev tools console only. For example, if we modify our
{app-root}/src/App.js file so that the closing
</h1> tag as
</h2> we’ll break the build by introducing a bug where the opening tag doesn’t match our closing tag. Saving the tag mismatch will throw the following error in our browser:
*Don’t forget to undo that intentional bug now!
Production Build
The steps we’ve taken so far have been in the development workflow. So, what about when you’re ready to package up all of the various CSS, JS, and other asset files for production? Fortunately,
create-react-app makes building your files for production easy as well! The workflow will be similar to stopping Sencha Cmd’s
app watch process and building using
sencha app build production. First, press
ctrl-c in your terminal window to stop the development server instance. Then, to build your project use the following statement in the terminal window (from the project’s root folder):
npm run build
Note: There are two package managers: npm and yarn. Nodejs installs npm and we’ll use that as our primary package installer. But, if you happen to be more familiar / fond of yarn, feel free to use it in place of npm throughout the blog series.
The resulting output is placed in the
{app-root}/build folder including the page’s single
{app-root}/build/index.html file that links to the optimized .js and .css assets used to run the single page application.
Onward to React
The
create-react-app setup is by no means a requirement. You’re welcome to use it as-is, start with it and eject to supply your own build configuration, or even start completely from scratch. Additionally, we welcome you to clone our own starter app as a way to launch your React application using our custom scaffold designed to deliver the most speed and best user experience. For simplicity, we’ll assume through the course of this blog series that you’ve started with
create-react-app as the examples will often define an
<App> component to be rendered to a “root” element in your index page. With our scaffolding process outlined, we’re now ready to forge ahead into the React ecosystem starting with how to define and instantiate React component classes.
>… | https://moduscreate.com/blog/ext-js-to-react-scaffolding/ | CC-MAIN-2020-40 | refinedweb | 1,528 | 65.01 |
Shopify Schema publication?
Is there a publication of Shopify's schema somewhere?
I have read the API Reference () but need something deeper that describes in addition to field types, things like field max-lengths and any content formats or restrictions.
Some of this can be deduced from the API reference but not everything..
- for strings, it is not clear where they are using MySql Text field (64K), a string (255) or some other limitation (like metafield namespace which I believe is 30).
- Are email fields checked for valid addressing?
- Is the a prescribed format for phone numbers or is this just a short text field?
- Which fields are required?
- Which fields can be null or empty?
I am trying to design my API app to fit within their actual limits rather than find them through debug.
Thanks Mark.
Answer
Solution:
You can go through shopify api document with resource properties, like if you want to work with shopify product api then go through order properties, where they explain every single attribute properties: () - fields limit - fields type - fields format - fields default values(null/empty/specific value) - fields required
for phone number and email shopify accept value as a string. There is no specific format validation.. | https://e1commerce.com/items/shopify-schema-publication | CC-MAIN-2022-40 | refinedweb | 204 | 71.55 |
Data Manipulation with Pandas
In the previous chapter, we dove into detail on NumPy and its
ndarray object, which provides efficient storage and manipulation of dense typed arrays in Python.
Here we'll build on this knowledge by looking in detail at the data structures provided by the Pandas library.
Pandas is a newer package built on top of NumPy, and provides an efficient implementation of a
DataFrame.
DataFrames are essentially multidimensional arrays with attached row and column labels, and often with heterogeneous types and/or missing data.
As well as offering a convenient storage interface for labeled data, Pandas implements a number of powerful data operations familiar to users of both database frameworks and spreadsheet programs.
As we saw, NumPy's
ndarray data structure provides essential features for the type of clean, well-organized data typically seen in numerical computing tasks.
While it serves this purpose very well, its limitations become clear when we need more flexibility (e.g., attaching labels to data, working with missing data, etc.) and when attempting operations that do not map well to element-wise broadcasting (e.g., groupings, pivots, etc.), each of which is an important piece of analyzing the less structured data available in many forms in the world around us.
Pandas, and in particular its
Series and
DataFrame objects, builds on the NumPy array structure and provides efficient access to these sorts of "data munging" tasks that occupy much of a data scientist's time.
In this chapter, we will focus on the mechanics of using
Series,
DataFrame, and related structures effectively.
We will use examples drawn from real datasets where appropriate, but these examples are not necessarily the focus.
Installing and Using Pandas¶
Installation of Pandas on your system requires NumPy to be installed, and if building the library from source, requires the appropriate tools to compile the C and Cython sources on which Pandas is built. Details on this installation can be found in the Pandas documentation. If you followed the advice outlined in the Preface and used the Anaconda stack, you already have Pandas installed.
Once Pandas is installed, you can import it and check the version:
import pandas pandas.__version__
'0.18.1'
Just as we generally import NumPy under the alias
np, we will import Pandas under the alias
pd:
import pandas as pd
This import convention will be used throughout the remainder of this book. if you need a refresher on this.)
For example, to display all the contents of the pandas namespace, you can type
In [3]: pd.<TAB>
And to display Pandas's built-in documentation, you can use this:
In [4]: pd?
More detailed documentation, along with tutorials and other resources, can be found at. | https://jakevdp.github.io/PythonDataScienceHandbook/03.00-introduction-to-pandas.html | CC-MAIN-2019-18 | refinedweb | 453 | 52.09 |
Girish Vasmatkar wrote:
There are other application servers that allow to access the data source (JNDI) using java:comp/env. But when I try to use JBOSS without making change in the code, I get errors related to name not found and so.
I read somewhere that JBOSS does not use java:comp/env, instead you must access data source using java: namespace. Is this notion correct?
I have one more doubt, is this Global JNDI namespace concept specific to JBOSS? The reason I am asking this is because JBOSS allows only those data sources to be accessed from remote client which are in the global JNDI namespace, and not the ones which are under java: namespace. The other app. servers like Glassfish do allow data sources to be looked up with their name(remotely, from different JVM)
From ferwhat i know, all application servers have namespaces within the JNDI tree. They might term the namespace differently. Not allowing the datasource to be accessed by a remote JVM (by default) is specific to JBoss. | http://www.coderanch.com/t/485644/JBoss/java-comp-env-JBOSS | CC-MAIN-2014-41 | refinedweb | 174 | 71.24 |
()
Starting with version 2.6, the module also defines two convenience functions:
Create a Timer instance with the given statement, setup code and timer function and run its repeat() method with the given repeat count and number executions.
New in version 2.6.
Create a Timer instance with the given statement, setup code and timer function and run its timeit() method with number executions.
New in version 2.6.
When called as a program from the command line, the following form is used:
python -m timeit [-n N] [-r N] [-s S] [-t] [-c] [-h] [statement ...]
where the following options are understood: default timer function is platform dependent. On Windows, time.clock() has microsecond granularity but time.time()‘s granularity is 1/60th of a second; on Unix, time.clock() has 1/100th of a second granularity and time’s -O option for the older versions to avoid timing SET_LINENO instructions.
Here are two example sessions (one using the command line, one using the module interface) that compare the cost of using hasattr() vs. try/except to test for missing and present object attributes.
% timeit.py 'try:' ' str.__nonzero__' 'except AttributeError:' ' pass' 100000 loops, best of 3: 15.7 usec per loop % timeit.py 'if hasattr(str, "__nonzero__"): pass' 100000 loops, best of 3: 4.26 usec per loop % timeit.py 'try:' ' int.__nonzero__' 'except AttributeError:' ' pass' 1000000 loops, best of 3: 1.43 usec per loop % timeit.py 'if >>>>> t = timeit.Timer(stmt=s) >>> print "%.2f usec/pass" % (1000000 * t.timeit(number=100000)/100000) 3.15 usec/pass
To give the timeit module access to functions you define, you can pass a setup parameter which contains an import statement:
def test(): "Stupid test function" L = [] for i in range(100): L.append(i) if __name__=='__main__': from timeit import Timer t = Timer("test()", "from __main__ import test") print t.timeit() | http://docs.python.org/release/2.6.6/library/timeit.html | CC-MAIN-2013-20 | refinedweb | 311 | 68.77 |
UK Law
Get UK Law Questions Answered by Verified Experts
Hi
Thanks for your patience.
If there is no mortgage on the property and you own it in your sole name then there is nothing preventing you legally from doing this. However, if you decide to proceeds then it would be wise to instruct a solicitor.
You would have to download and draft a Land Registry TR1 form from the land registry website:
You would then have to execute this with your mother.
If it is the case that you are transferring the property to your mother is in settlement of the debt that you owe to her then you will have to decide what amount to enter in the box of the transfer for the “consideration” of the transaction. It may be correct that you should enter the sum you are repaying in this box. If so, then your mother would have to submit a stamp duty return (and settle any stamp duty) so that the SDLT land transaction return can be submitted when you attempt to register it (it won’t be registered otherwise).
Also, you would want something in writing from your mother confimring that your debt to her is settled as a result of the transfer.
Once you have executed the transfer you would then have to submit form AP1 (from the above website) to the relevant land registry office with the transfer, SDLT5 (if applicable (ie if there was a consideration for the transfer) and the requisite land registry fee. You can work out the LR fee here:
My goal is to provide you with a good service. If you feel you have received anything less, please reply back as I am happy to address follow-up issues specifically relating to your question.
Kind regards,
Tom
Is there any further information you require?
I just want to ensure that you are satisfied, so please let me know if you have any further queries on the information I have provided. If you have no further questions then please do leave feedback. | http://www.justanswer.com/uk-law/9ea0v-ben-transfer-ownership-house-mother-no-mortgage-it.html | CC-MAIN-2017-17 | refinedweb | 345 | 63.53 |
XSLTForms/Known Restrictions
This page lists gaps in XSLTForms' coverage of the XForms 1.0 and 1.1 specifications, and other restrictions that may not be obvious to new users.
(In its current state the page is not at all complete or systematic, but a list of things that have caught the eye of some user or other, who has listed them here. Please help make the list more complete and more useful to new users of XSLTForms, by recording here any limitations you encounter and ways to work around them.)
Gaps and limits in coverage
These are cases where XSLTForms falls short of perfect conformance to the spec, so that forms which are strictly speaking legal and which work in other XForms processors may not work in XSLTForms.
- XML comments in instances are supported, but they should not contain any greater-than signs (>).
- The built-in datatypes gYear, gYearMonth, date, and dateTime support only four-digit years beginning with the digit 1 or 2, so dates like (for example) 0800-12-25 (Christmas Day, AD 800, Charlemagne crowned emperor) are not accepted.
- gYear, date, etc. do not accept time-zone information, so a date like 2010-08-01 (1 August 2010) is accepted as valid, but not 2010-08-01Z (the 24-hour period beginning at 00:00 on 1 August 2010 in UTC) or 2010-08-01-04:00 (the 24-hour period beginning at 00:00 on 1 August 2010 in Eastern Daylight Time).
- The XForms spec allows the xf:instance element to be omitted entirely from the model; a very simple flat XML structure is then assumed. XSLTForms requires an explicit instance.
- The XForms spec says that "It is an error to attempt to set a model item property twice on the same node"; this can be read as allowing different model item properties to be set on a node from different bind elements. XSLTForms raises an error if more than one bind element affects the same node, even if they are setting different model item properties. (Workaround: set all model item properties for any node in a single bind element.)
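As an illustrative sketch of that single-bind workaround (the nodeset path and the property values here are invented, not taken from the wikibook), several model item properties can be set together on one bind element:

```xml
<xf:bind nodeset="order/amount"
         type="xsd:decimal"
         required="true()"
         constraint=". >= 0"/>
```

Because the type, required, and constraint properties all live on the same bind, XSLTForms sees only one bind element affecting the node.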
A fuller sense of current gaps in conformance can be obtained by looking at the XForms 1.1 test suite results for XSLTForms. (Warning: that page is updated only infrequently and is not necessarily current.)
Quirks, idiosyncrasies, and things you might not expect
These are things that are not strictly speaking XForms conformance issues, but which may cause some head-scratching or which may work differently in different XForms processors.
- Forms served from an HTTP server should have a MIME type of application/xml, text/xml, or application/xhtml. Forms served as text/html won't work (browsers don't apply XSLT stylesheets to HTML documents).
- Forms loaded from a local file system will or won't work, depending on what MIME type the browser associates with the file extension. If .html doesn't work, try .xhtml or .xml.
- As a workaround for Firefox not supporting the namespace axis in its XSLT engine, there are some situations where a dummy element or attribute must be added somewhere in the XForms document to make it possible to extract the prefix/namespace binding. One example: if an external instance uses a namespace not used within the XForms document itself, it will be necessary to declare the instance's namespace and use it in the form, e.g. by adding xmlns:myns="" myns:dummy="it doesn't matter what goes in this attribute" on the root HTML element. (This allows XSLTForms to locate the binding of prefix myns to the namespace in question.) For longer examples see the wikibook XSLTForms and eXist and the discussion of user-defined functions in this wikibook.
- Controls sometimes malfunction if placed within XHTML p elements; use XHTML div elements to wrap them, instead. (Reason: XSLTForms generates XHTML div elements for alert messages and the like; these interact badly with containing p elements.)
Byte Ordered Partitioner (BOP) is a scheme to organize how to place the keys in the Cassandra cluster node ring. Unlike the RandomPartitioner (RP), the raw byte array value of the row key is used to decide which nodes store the row. Depending on the distribution of the row keys, you may need to actively manage the tokens assigned to each node to maintain balance.
As an example, if all row keys are random (type 4) UUIDs, they are already evenly distributed. However they are 128 bits, unlike the 127 bit tokens used by RP, and the initial tokens must be specified as hex byte strings instead of decimal integers. Here is python code to generate the initial tokens, in a format suitable for cassandra.yaml and nodetool:
def get_cassandra_tokens_uuid4_keys_bop(node_count):
    # BOP expects tokens to be byte arrays, specified in hex
    return ["%032x" % (i*(2**128)/node_count)
            for i in xrange(0, node_count)]
Note that even if your application currently uses random UUID row keys for all data, you may run into balancing issues later on if you add new data with non-uniform keys, or keys of a different size. This is why RP is recommended for most applications. | http://wiki.apache.org/cassandra/ByteOrderedPartitioner | crawl-003 | refinedweb | 200 | 55.47 |
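As a quick sanity check of the token math, here is the same computation rewritten for Python 3 (range instead of xrange, and floor division so the arithmetic stays in integers); the four-node example is mine:

```python
def get_cassandra_tokens_uuid4_keys_bop(node_count):
    # BOP expects tokens as 32-character hex byte strings;
    # // keeps the division exact on arbitrary-precision ints
    return ["%032x" % (i * (2**128) // node_count)
            for i in range(node_count)]

for token in get_cassandra_tokens_uuid4_keys_bop(4):
    print(token)
```

For four nodes this prints 00000000000000000000000000000000, 40000000000000000000000000000000, 80000000000000000000000000000000, and c0000000000000000000000000000000: evenly spaced initial tokens in the format cassandra.yaml and nodetool expect.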
1. Drag any plugin on grid.
2. On 'createClass' config type a class that includes the namespace (e.g. MyApp.CustomGridPlugin)
3. It throws and error saying 'createClass: property must begin with...'
We are having the same issue with this error when using a dot (".") in the createClass config.
Is this a bug?
Hello..
That was our first option but if we build the modules individually we would get redundant classes. I'm not sure how it affects our application but that would definitely add bytes....
Currently, we are creating project that has 50 modules.
We are loading each module in the background when the login form shows, using Ext.require, to make our application responsive, or in order to...
Is there a possible workaround for this?
REQUIRED INFORMATION
Ext version tested:
Ext 4.2.1
Browser versions tested against:
Any
Description:
Yes, I did.
for more information about my problem please see this post
Is there an event that i could use to identify that all elements have been rendered on the dom? I want to render my viewport after the other elements to make sure that it would layout correctly and...
I've inspected the DOM and found out that you're right. :(
Even your documentation has an issue related to that.
Why was it behaving correctly on Firefox?
Can we have a workaround? :)
Hello I've created a custom plugin which extends from AbstractPlugin but it seems it is not initializing.
Here's some info about the plugin.
- It was created in architect.
- The plugin was based...
Hi Aaron Conran
I would like to know if there is an update on how we can add a custom column to the grid. My suggestion is to allow the xtype to be changed by putting the xtype property in the object...
Hi
the code below didn't work. I wonder why it didn't trigger the metachange? Can someone help me find my mistake?

readRecords: function(datastr) {
    var data = {};
    ...
Hi
When I use this kind of code, the code will work correctly:

Ext.define('Ext.ux.data.reader.DynamicReader', {
    extend: 'Ext.data.reader.Json',
    alias: 'reader.dynamicReader',
    ...
What do E4X expressions return? CMcM00 Jan 5, 2010 4:44 PM
I'm trying to figure out XMLListCollections and E4X and since I can't find any programming reference
I have no idea what E4X expressions do under the covers.
Do they return an Object class? An XML class? An XMLList class? void?
If this is my xml:
<months> <month id="jan"> .... </month> <month id ="feb"> .... </month> </months>
and dataList is the XMLListCollection containing the entire XML record, then why doesn't this work?
var janSubtree:XMLListCollection = new XMLListCollection(dataList.getChildAt(0).month.(@id == "jan"));
This does work as expected:
trace(dataList.getChildAt(0).month.(@id == "jan"));
While I sure appreciate any correct answers, I would *really* appreciate someone
explaining exactly what E4X is really doing. I'd love to gain some insight into the gory details.
Thanks for your help!!
Cory
1. Re: What do E4X expressions return? Flex harUI
Jan 5, 2010 6:05 PM (in response to CMcM00)
Use toXMLString to dump the nodes at each level of the expression. Most E4x expressions return XMLLists. It is documented in the ASLR and in the Ecmascript spec.
Alex Harui
Flex SDK Developer
Adobe Systems Inc.
2. Re: What do E4X expressions return? CMcM00 Jan 6, 2010 1:36 AM (in response to Flex harUI)
Hey Alex,
Flex harUI wrote:
Use toXMLString to dump the nodes at each level of the expression.
Thanks for your reply, but I'm not following.
This doesn't work:
var janSubtree:XMLListCollection = new XMLListCollection(dataList.getChildAt(0).month.(@id == "jan").toXMLString());
Nor does this:
var janSubtree:XML = new XML(dataList.getChildAt(0).month.(@id == "jan").toXMLString());
Nor does this:
var janSubtree:XMLList = new XMLList(dataList.getChildAt(0).month.(@id == "jan").toXMLString() as XML);
What am I not understanding?
Flex harUI wrote:Most E4x expressions return XMLLists. It is documented in the ASLR and in the Ecmascript spec.
If the e4x expression is returning a XMLList, then certainly my original construction should have worked nicely, no?
var janSubtree:XMLListCollection = new XMLListCollection(dataList.getChildAt(0).month.(@id == "jan"));
Also, when I google ASLR, I get results of "Address Space Layout Randomization", a technique for randomly laying out objects in system memory to guard against hacker exploits? Not sure how that's related to XMLListCollections or e4x...
And when I take a peek at the Ecmascript for XML spec:
I see this section on attribute identifiers:
11.1.1 Attribute Identifiers

Syntax

E4X extends ECMAScript by adding attribute identifiers. The syntax of an attribute identifier is specified by the following production:

AttributeIdentifier :
    @ PropertySelector
    @ QualifiedIdentifier
    @ [ Expression ]

PropertySelector :
    Identifier
    WildcardIdentifier

Overview

An AttributeIdentifier is used to identify the name of an XML attribute. It evaluates to a value of type AttributeName. The preceding "@" character distinguishes a XML attribute from a XML element with the same name. This AttributeIdentifier syntax was chosen for consistency with the familiar XPath syntax.

Semantics

The production AttributeIdentifier : @ PropertySelector is evaluated as follows:
1. Let name be a string value containing the same sequence of characters as in the PropertySelector
2. Return ToAttributeName(name)

The production AttributeIdentifier : @ QualifiedIdentifier is evaluated as follows:
1. Let q be the result of evaluating QualifiedIdentifier
2. Return ToAttributeName(q)

The production AttributeIdentifier : @ [ Expression ] is evaluated as follows:
1. Let e be the result of evaluating Expression
2. Return ToAttributeName(GetValue(e))
So I suppose that "Return ToAttributeName (GetValue(e))" means that my above e4x expressions return a String? I have to say, I wouldn't exactly call this "documented".
I have to say - on the one hand, I wonder if I'm just being an idiot for not understanding this, and on the other hand, I'm really frustrated that something that should take someone about 10-15 minutes to look up the answer to in an index is taking me at least a half a day if not more to dig through Asdoc pages, google searches, etc, with still no answer. I do VERY much appreciate your your help, but I am more than a bit frustrated trying to figure out something that should be trivially simple - and I feel would be if the flex documentation were more complete. Perhaps it's just late and I'm not thinking as clearly as I should be.
I suppose I should stop being stubborn, and just write an actionscript function to manually do this and stop trying to filter out xml subtrees with e4x, but it seemed like it should be a very simple thing to do.
Cheers,
Cory
3. Re: What do E4X expressions return? CMcM00 Jan 6, 2010 2:14 AM (in response to CMcM00)
Ok, just thought of something...
I tried doing this:
var janSubtree:Object = dataList.getChildAt(0).month.(@id == "jan");
and inspecting janSubtree in the debugger. It's an XMLList all right. Except that inspecting the variable shows it to be empty. Which is odd, since this does print the jan subtree data properly:

trace(dataList.getChildAt(0).month.(@id == "jan"));
Oh, and this doesn't work either:
var janSubtree:XMLList = new XMLList(dataList.getChildAt(0).month.(@id == "jan"));
I'm confused.
4. Re: What do E4X expressions return? pauland Jan 6, 2010 9:03 AM (in response to CMcM00)
Maybe this will help. The E4X expression returns an XMLList.
[edit: I was working on an AIR project, but you get the idea.. ]
<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:
<fx:Declarations>
<!-- Place non-visual elements (e.g., services, value objects) here -->
</fx:Declarations>
<fx:Script>
<![CDATA[
import mx.collections.XMLListCollection;
public var dataXML:XML=<months>
<month id="jan">
jan
</month>
<month id ="feb">
feb
</month>
</months>;
public function init():void
{
var janSubtree:XMLListCollection = new XMLListCollection(dataXML.month.(@id == "jan"));
trace(dataXML.month.(@id == "jan"));
}
]]>
</fx:Script>
</s:WindowedApplication>
5. Re: What do E4X expressions return? Flex harUI
Jan 6, 2010 9:30 AM (in response to CMcM00)
I repeat my recommendation to use toXMLString on the expressions to see what is there. If getChildAt(0) is a month, then there is no .month in it.
Alex Harui
Flex SDK Developer
Adobe Systems Inc.
6. Re: What do E4X expressions return? CMcM00 Jan 6, 2010 9:47 AM (in response to Flex harUI)
Hey Paul/Alex,
Thanks so much for your help. I figured out what was wrong. Turns out that my xml document was valid xml at the root level, but not valid xml at the month level. Therefore casting to any XML class type failed.
Sorry for my little whine-fest, I was a little exhausted and frustrated last night.
In summary, using valid xml, this worked:
var janSubtree:XMLListCollection = new XMLListCollection(dataXML.month.(@id == "jan"));
Cheers,
Cory | https://forums.adobe.com/thread/550471 | CC-MAIN-2017-39 | refinedweb | 1,120 | 51.04 |
On Mon, Nov 16, 2009 at 8:54 AM, Steve Ferg <steve.ferg.bitbucket at gmail.com> wrote: > This is a question for the language mavens that I know hang out here. > It is not Python related, except that recent comparisons of Python to > Google's new Go language brought it to mind. > > NOTE that this is *not* a suggestion to change Python. I like Python > just the way it is. I'm just curious about language design. > > For a long time I've wondered why languages still use blocks > (delimited by do/end, begin/end, { } , etc.) in ifThenElse statements. > > I've often thought that a language with this kind of block-free syntax > would be nice and intuitive: > > if <condition> then > do stuff > elif <condition> then > do stuff > else > do stuff > endif > > Note that you do not need block delimiters. <snip> > Does anybody know a language with this kind of syntax for > ifThenElseEndif? Ruby: if count > 10 puts "Try again" elsif tries == 3 puts "You lose" else puts "Enter a number" end Cheers, Chris -- | https://mail.python.org/pipermail/python-list/2009-November/558408.html | CC-MAIN-2014-15 | refinedweb | 175 | 82.95 |
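For comparison, Python sidesteps the question a third way: the indented suite itself is the block, so neither delimiters nor an end keyword are needed. A small sketch mirroring the Ruby example above:

```python
def respond(count, tries):
    # Indentation alone delimits each branch; there is no end/endif.
    if count > 10:
        return "Try again"
    elif tries == 3:
        return "You lose"
    else:
        return "Enter a number"

print(respond(11, 1))  # -> Try again
```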
11 July 2012 17:58 [Source: ICIS news]
TORONTO (ICIS)--Energy regulators in
The Energy Resources Conservation Board (ERCB) said it found that it is in the public interest to proceed with the project, which will be built near Shell’s oil sands upgrader site north of
The ERCB said most of the 23 conditions it imposed relate to additional data collection, analysis, and reporting.
Oil major Shell announced the project in 2009.
Shell plans to capture carbon dioxide (CO2) from its Scotford oil sands upgrader and store it permanently 2km (1.2 miles) deep under thick layers of impermeable geological formations.
Canada’s federal and the
Shell has said that it views CCS as one key pathway to reduce CO2 emissions.($1 = C$1 | http://www.icis.com/Articles/2012/07/11/9577517/alberta-energy-regulator-approves-shell-ccs-project.html | CC-MAIN-2015-18 | refinedweb | 127 | 59.84 |
printf Example
Part of TutorialFundamentals
It's better to use writef these days, but here's a printf example just for the heck of it.
int main(char[][] args)
{
    printf("Hello World\n");
    return 0;
}
Here's the same thing with writef:
import std.stdio;

int main(char[][] args)
{
    writefln("Hello World");
    return 0;
}
However, when using the %s formatting token with printf, you must be careful to use it with the embedded length qualifier. This is because strings differ between C++ and D: C++ strings are zero-terminated character arrays referenced by an address, but in D they are a dynamic array object, which is really an eight-byte structure containing a length and a pointer.
int main(char[][] args)
{
    printf("%.*s\n", args[0]);
    // printf("%s\n", args[0]); // <<-- This will fail.
    return 0;
}
On Thu, 24 Aug 2006 14:30:48 -0400, "Chaz." <eprparadocs at gmail.com> wrote:

>So my question is why is subclassing internet.TCPServer not a good idea?

In the general case, the short version is: "Because we wrote that code, and we say so."

Usually I am not a big fan of the "argument from authority", but in this case, it has a special significance. When a developer on a library you're using says "this is the correct, supported method to use interface XYZ", that means that is the method they are going to be supporting in the future, and unsupported usages may break. It is in your best interest to stick to the supported usage if you ever intend to upgrade that library.

In a future version of twisted.internet.application, for example, it may be deemed a good idea to make TCPServer and friends a set of functions rather than classes for some reason. Since you're supposed to be calling them and not subclassing them, that usage will continue to work, but subclassing won't.

Calling is generally better than subclassing anyway. When you subclass, a number of undesirable things happen: in any language with multiple inheritance this is a problem, but in Python especially, you inherit all sorts of things from your superclass other than simple functionality. For one thing, object semantics. Maybe you used to be a classic class, but subclassing turns you into a new-style class before you're ready to make that switch. Maybe your superclass has a bizarre metaclass which performs some mutation you didn't expect. Maybe it has a __new__ which does something weird.

Then you inherit state. Your 'self.' namespace becomes polluted with additional variable names that may conflict with your own. These names may change in future releases, so even if they don't conflict now, they may in future releases.
While inheritance can be a useful tool, it is a lot more complex than composition, so you should generally avoid it unless all these ugly side-effects are actually desirable properties in your particular application. In the case of twisted.application.internet, they are not. That's not to say they *never* are: default implementation classes paired with interfaces can insulate your applications from certain varieties of change in libraries, and sometimes all those object-model features described as annoyances above are incredibly useful (most usually in persistence libraries like Axiom or ZODB). | http://twistedmatrix.com/pipermail/twisted-python/2006-August/013881.html | CC-MAIN-2017-30 | refinedweb | 409 | 63.39 |
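A generic sketch of the state-pollution hazard described above (the class and attribute names are invented for illustration; this is not Twisted code):

```python
class Base:
    def __init__(self):
        self._cache = {}   # internal state the library keeps for itself

class Sub(Base):
    # Subclassing: Base's attribute names land in our own self namespace.
    def __init__(self):
        super().__init__()
        self._cache = []   # accidental collision silently clobbers Base's state

class Composed:
    # Composition: the library object keeps its own separate namespace.
    def __init__(self):
        self.base = Base()
        self._cache = []   # same name, no conflict

print(type(Sub()._cache).__name__)            # -> list (Base's dict is gone)
print(type(Composed().base._cache).__name__)  # -> dict (still intact)
```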
This is my story about searching for Japanese pop music under a free culture license. It's a little tricky, because the best sites for this are of course in Japan, and not well advertised on the English web. I discovered how to use Python's XMLRPC library to run searches using the web API for a Japanese music sharing site called "Wacca". The results were very interesting -- I found some of what I was looking for, though not all.

It started when I was looking for character theme music on Jamendo for Lunatics. One of the characters is Japanese, and while she probably doesn't listen to J-Pop or Enka much herself (she's more of a classical piano type), we felt there might be some times when the cultural reference would be appropriate.
Well, as it turns out, Jamendo's selection of Japanese popular music is poor. I found three actual J-Pop albums under By-NC-SA licenses (which I can't use as is, and, considering the state of my Japanese, negotiating for a re-licensed track might be beyond my capabilities). In fact there is only one "single" track ("Kotoba Wa Sonomama") of J-Pop under a By-SA license, by "Hanadix" (though it's catchy, I admit). I suppose this is not so surprising, Jamendo is a French website. Their collection of American Country and Western music isn't that impressive either (perhaps I'll tackle in another column).
Finding a Japanese Music Site
Of course, I also did some general websearches on Enka and Creative Commons licenses (By and By-SA, specifically). I found nothing this way, and of course, lots of useless links -- it's not a narrow enough way to search, because even when these terms occur together on a page, it doesn't mean you found a music download under that license. Of course, I had to do these searches in English and Japanese. For the record, "Enka" is spelled in Japanese, "演歌", Creative Commons is "クリエイティブコモンズ" (that's just a transliteration of the English into katakana characters) and Attribution-ShareAlike is "表示 - 継承".
I think the best description of Enka that I can provide is that it is the Japanese equivalent of Country and Western music ("The music of pain" as Joss Whedon once described it). Here's a downloadable example (though it's By-NC licensed). It serves much the same emotional purpose in Japan: nostalgia for a past way of life, loves lost, and so on. If you sing a "somebody-did-somebody-wrong" song in Japanese, it'll probably be Enka. It's frequently found its way into Anime as comic exaggeration of a characters' miserable circumstances. Occasionally, it even gets used without irony.
Clearly what I needed was to find a Japanese music sharing site that would be much more likely to have a wider variety of less-westernized music. After doing a number of searches for meta-sources (by which I mean sources that list sources), I did finally find one: Wacca. It was a challenge to navigate, since all of the buttons are labeled in Japanese (there is no "English" button). There are a number of tricks for navigating such sites. I should probably mention a few:
- Pay attention to your browser's status bar as you mouse-over buttons, URLs often use English words, even when the site language is different
- You can always give the URL to Yahoo's "Babelfish" translation service. This will machine-translate the text on the page (though not the graphical buttons)
- Put yourself in the shoes of the designer -- where would you put the button you need to find?
- Be sure to check the original page, especially if you can read the foreign language at all. Sometimes Babelfish makes unreadable hash out of the text. It's only a machine.
This part was really fun. For one thing, I finally did find Enka and a variety of other styles of Japanese music available for download (which is progress -- I'd never found it in my previous attempts). But indeed, the great majority of the music is under non-commercial or no-derivatives licenses. It's a very inefficient strategy to search for the music you want and only then read the license. So I needed to be able to search by license (I described how to do this at Jamendo in a previous column). However, Wacca is not so convenient -- it's possible to limit a search to "Creative Commons only", but there's no way to limit it to the much-less-used free licenses (it's clear to me that Wacca, and probably the Japanese web in general, is a few years behind the West in adopting free culture licenses; which is not terribly surprising, given that the Creative Commons is an American/English-language organization).
It is possible to search the site using Google, using the Creative Commons license name in Japanese, which does occur on the song's page (unlike Jamendo, Wacca uses a per-song licensing model rather than per-album; Wacca's approach actually makes more sense to me here).
Using the Web API
However, I did notice a little button on the help page that says APIについて ("about the API"). Ooh. API. I can work with that. Using Babelfish to translate the API documentation pages into English (and referencing the originals), I was able (after a few false starts) to work out how to use Wacca's XMLRPC API to do the exact searches I wanted. Now we're cookin'!
Now of course, I always find it a little daunting when I realize I'm going to have to write a program to do what I want. But in this case, it's not so bad. All I need is a little ad hoc script. In fact, I really could (and in fact, did) do all of this stuff in the Python interactive interpreter. I'm going to keep this dead simple, because if I start trying to trick everything up to make it generally usable, I'll just get bogged down and never finish it. So here's what I did, and if you want to make a general tool from it, be my guest.
As is common around the world, the API itself uses English words. I'm not sure about the politics of this, or how other people feel about it, but I have to say that as a native speaker of English, I am truly grateful for this fact. So, indeeed, the Wacca API is in English, though the data we'll receive will be in UTF-8 encoded Japanese.
First of all, of course, I have to go find the Python documentation for the xmlrpclib module and learn how to use it. This is not as well-written as I would like, which is responsible for my false starts. But let me demonstrate how to use it to retrieve Wacca's "genre list". Wacca's API uses "genre" and "sub_genre" to identify songs in its database, and these are just integers. The genre list will tell me what the numbers actually mean.
>>> import xmlrpclib
>>> wacca = xmlrpclib.ServerProxy('')
>>>
>>> genrelist = wacca.song.getGenreList()
>>>
>>> genrelist_text = "LIST OF GENRES DEFINED at wacca.fm (genre, sub_genre, genre_name)\n"
>>> for genre in genrelist:
...     genrelist_text += "%d.%d=%s\n" % (genre['genre_id'], genre['sub_genre_id'], genre['genre_name'])
...
>>> open("wacca_genrelist.txt", "wb").write(genrelist_text.encode("utf-8"))
>>>
The first two lines import the xmlrpclib module and establish a proxy object for the remote server. This does not actually make any outside contact. The next line does that: by calling the "song.getGenreList()" method on "wacca", I'm really making a database query to the server. So this line takes a few seconds to run.
The result, genrelist, is a list of Python dictionary objects with the appropriate mappings.
After that, I simply collate the data to create a readable text list and dump that out to a file. Of course, I could just print out the list to the terminal, but I want to save this list to a file so I can refer to it later.
I won't reproduce the whole file that results here, but here's the part I was interested in:
LIST OF GENRES DEFINED at wacca.fm (genre, sub_genre, genre_name)

1.0=ポップス
1.1=ポップス
1.2=フォーク
1.3=歌謡曲
1.4=演歌
1.5=童謡
2.0=ロック
2.1=ロック
2.2=ハードロック
...
Now, of course, the names are in Japanese, but I can read some of these. In English, these are "pop" (or "poppsu", since it really means something a little different in Japanese), "folk", "kayoukyoku" (another older style of Japanese pop music distinct from enka mainly in singing style), "enka", one I don't recognize, and of course, "rock" and "hard rock". It's primarily the "1" genres that I'm interested in searching (many of the other genres are Western genres that are better represented on Jamendo anyway -- not that I didn't look, but I won't go into that here).
The search interface is more interesting, and requires me to pass XMLRPC parameters. I found the documentation very confusing on this point, but in fact, what you are supposed to do is to pack the parameters into a Python dictionary and use that as the argument to the remote method. The Wacca API for the song.Search method provides the definitions for the parameters I'm using here.
creativecommons" (many Wacca tracks do not use Creative Commons licenses at all). Here's the breakdown:
Sorting out the Creative Commons licenses in the Wacca search interface
Now it's possible to answer my question about whether By-SA Enka exists:
>>> bysa_enka = wacca.song.Search({'genre':1, 'sub_genre':4,
...     'copyright':'creativecommons', 'copyright_commercial':'yes',
...     'copyright_modifications':'share'})
Unfortunately, what you get is a traceback:

  ..., line 40, in _parse_response
    return u.close()
  File "/usr/lib/python2.5/xmlrpclib.py", line 787, in close
    raise Fault(**self._stack[0])
xmlrpclib.Fault: <Fault -2: 'NO_RESULT'>
So it seems that the answer is "no". A similar search for By-licensed Enka...
>>> by_enka = wacca.song.Search({'genre':1, 'sub_genre':4,
...     'copyright':'creativecommons', 'copyright_commercial':'yes',
...     'copyright_modifications':'yes'})
gives the same results. So it seems that the wonderful world of Japanese enka has not yet been touched by the concept of true free licensing. There are quite a few under the By-NC license, though:
>>> bync_enka = wacca.song.Search({'genre':1, 'sub_genre':4,
...     'copyright':'creativecommons', 'copyright_commercial':'no',
...     'copyright_modifications':'yes', 'limit':1000})
>>> len(bync_enka)
103
So should I decide it's worth the trouble to try to negotiate for a relicense of a track in a foreign language, I can at least find them.
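Because an empty result set surfaces as a fault rather than as an empty list, a small wrapper makes these exploratory probes tidier. This helper is my own sketch (the name is invented); the fault string is the one shown in the traceback above, and under Python 3 the module is xmlrpc.client rather than xmlrpclib:

```python
from xmlrpc.client import Fault  # xmlrpclib in the article's Python 2

def search_or_empty(server, params):
    """Run server.song.Search, returning [] when the site reports NO_RESULT."""
    try:
        return server.song.Search(params)
    except Fault as fault:
        if fault.faultString == 'NO_RESULT':
            return []
        raise  # any other fault is a real error
```

With this, a search with no matches simply yields an empty list instead of a traceback.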
I can look at some details on one of these tracks, like so:
>>> print "(%s, %s) -> %s" % (bync_enka[17]['artist_name'], bync_enka[17]['song_title'], bync_enka[17]['url'])
(京鈴, 瀬戸内無情) ->
And I can simply paste the URL for the track into my browser location bar if I want to take a closer look.
There are some other types of music that are available under free licenses, so I might like to give those a listen:
>>> by_kayoukyoku = wacca.song.Search({'genre':1, 'sub_genre':3,
...     'copyright':'creativecommons', 'copyright_commercial':'yes',
...     'copyright_modifications':'yes'})
>>> len(by_kayoukyoku)
1
>>> print "(%s, %s) -> %s" % (by_kayoukyoku[0]['artist_name'], by_kayoukyoku[0]['song_title'], by_kayoukyoku[0]['url'])
(奥様レコード, なめこ汁) ->
Ha. There's one kayoukyoku track. It's here.
Ah Well, Just Exploring
Well, it appears my quest for free-licensed Enka is unfulfilled, but there is plenty of other Japanese pop on the site. Searching under "poppsu", for example turns up 47 By-licensed tracks and 29 By-SA-licensed tracks. This is much better than my results with Jamendo. In total there are 168 "By-SA" tracks of all genres and 205 "By" tracks of all genres on Wacca (on the day I did this search, of course -- the results will naturally change over time).
It is of course, a very tiny drop in the bucket amongst all the other tracks on Wacca, which is the reason this API search was necessary to find out what I wanted to know. But it's still interesting stuff. I'm currently working on a script to collect this information in a more accessible way. Fun stuff!
Classifying Licenses from Wacca Results
By way of postscript, I do want to share one more handy function I wrote for working with this data. The data returned by the search calls is a list of dictionaries containing data about each song. But it preserves the same odd way of expressing Creative Commons licenses. So I wrote this function to classify the license in the way I'm used to:
def classify_cc(track):
    """
    Convenience function to identify CC licenses from the
    metadata that wacca.fm provides.
    """
    if not track['copyright'] == 'creativecommons':
        s = "ARR"
        return s
    else:
        s = "CC "
        if track['copyright_commercial'] == 'no':
            s += "By-NC"
        else:
            s += "By"
        if track['copyright_modifications'] == 'no':
            s += "-ND"
        if track['copyright_modifications'] == 'share':
            s += "-SA"
        return s
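For instance, given track dictionaries shaped like the search results above (the field names match the Wacca metadata; the actual values here are made up for illustration), the function classifies like so:

```python
def classify_cc(track):
    """Identify CC licenses from wacca.fm metadata (same logic as above)."""
    if track['copyright'] != 'creativecommons':
        return "ARR"
    s = "CC "
    if track['copyright_commercial'] == 'no':
        s += "By-NC"
    else:
        s += "By"
    if track['copyright_modifications'] == 'no':
        s += "-ND"
    if track['copyright_modifications'] == 'share':
        s += "-SA"
    return s

# Made-up tracks carrying the metadata fields the search returns
tracks = [
    {'copyright': 'reserved',
     'copyright_commercial': 'no',  'copyright_modifications': 'no'},
    {'copyright': 'creativecommons',
     'copyright_commercial': 'no',  'copyright_modifications': 'yes'},
    {'copyright': 'creativecommons',
     'copyright_commercial': 'yes', 'copyright_modifications': 'share'},
]

labels = [classify_cc(t) for t in tracks]
# labels -> ['ARR', 'CC By-NC', 'CC By-SA']
```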
The expected argument is the dictionary contained in each element of the return list. So you can call this function to sort a set of results, or to identify the licenses using common abbreviations. (Most of the images in this article are fair-use screen captures. I waive all rights to the code examples, so they may be treated as Public Domain; see CC0.)
As most UNIX or Linux users probably know, Samba is the Open Source Software package that allows a UNIX or Linux server to participate in a Windows network, and even become the Windows primary domain controller (PDC). In fact, it is possible for a properly configured Samba server to completely replace its Windows equivalent in the overwhelming majority of cases, affording a level of performance, stability and economy that is unmatched by anything Microsoft has produced to date. Samba also offers to the administrator a lot more control over how his/her machine responds to requests from Windows clients. Unavoidably, more control can mean more complexity.
Virtually everything that Samba does in a running environment is controlled by an ASCII configuration file called smb.conf. The exact location of this file may vary from one system to another, and in fact, can be specified by the command line used to start Samba. On most of the systems we build, smb.conf is stored in /etc/samba.d, along with some other files that may be required for a particular installation. Regardless of its location, smb.conf will contain numerous statements that will dictate exactly how Samba will behave, and how it will present the underlying filesystem and operating environment to Windows clients.
For those who are not comfortable with a shell prompt and/or editing
My intention here is not to teach Windows or Samba networking basics (several excellent books are available on that subject) but to instead illustrate a real-world example of Samba configuration running on a live system. So, what I have done here is present the (slightly edited) contents of smb.conf from our office file and print server (which happens to be a SCO OpenServer 5 unit), annotated with comments describing what the various statements accomplish. In writing this article, I have taken the liberty of assuming that you are familiar with how your server and network are configured, and know what terms such as subnet and broadcast mean. If tasks such as mounting filesystems and configuring IP addresses, netmasks and broadcast addresses make your brain swell and start to hurt, you're going to run into some difficulty in understanding the balance of this diatribe. If, on the other hand, all that technical tedious minutiae titillates your thought processes, please continue reading!
In the following text, smb.conf statements will be displayed in monospace blue text and my comments and assorted blathering will appear in whatever the default font is for your browser. References to elements of smb.conf will also be in blue monospace font. The terms 95/98/ME and 2000/XP will refer to the two distinct versions of 32 bit Windows and the baggage each drags along for the ride. Little consideration has been given to Windows for Workgroups 3.11, as this software is way too old and lame to even be considered on a modern system. Ditto for NT 4.0 and earlier. Also, it should be understood that where "UNIX" is mentioned "Linux" may be substituted. Lastly, you should consult with the official Samba Team smb.conf definition as required. So here we go...
; =======================
; Samba Runtime Variables
; =======================
;
; The following variables may be used as substitutions in this file. They
; are organized by function and within each function, by alphabetical
; precedence.
Comments in smb.conf are typically preceded by a semicolon. I highly recommend that you thoroughly comment your smb.conf so as to record your intentions for posterity (and for the benefit of the jabroney who will take over when you leave for greener pastures). The presence of comments has little effect on Samba's performance, so be as wordy as necessary to convey your thinking. You are thinking, right? :-)
; client-specific...
;
; %a client architecture, e.g., WinNT
; %I client IP address
; %m client NetBIOS name
; %M client DNS name
Within its running environment, Samba understands a number of variables
%a, %I, %m and %M are unique for each client machine that has connected. This makes sense once you understand that each client connection is serviced by a separate Samba smbd daemon, which when started, itself reads smb.conf. Hence it is possible for Samba's behavior to vary depending on the client that has connected. For example, an external shell script can be executed when a client initially connects to perform various tasks (e.g., logging the date and time of the connection). The client's architecture (WinNT, Win98, etc.) can be passed into the script by using the %a variable as a command line argument, making it possible for the script to operate in ways that are specific to the Windows version in use.
; user-specific...
;
; %g primary group associated with the UNIX username (%u)
; %G primary group associated with the NetBIOS username (%U)
; %H home directory associated with the UNIX username
; %u UNIX username
; %U NetBIOS username
Samba is all about effacing fundamental operating system differences between UNIX and Windows. For example, the owner, group and permissions concepts of UNIX have no exact equivalents in any version of Windows. Ditto for hard links, symbolic links, device files, etc. However, Samba has to work within the framework of the security features imposed by the UNIX kernel, which means that every Windows username (aka NetBIOS username) that is to be used with Samba must map to a valid UNIX username, who must, in turn, have a home directory somewhere in the file space accessible to Samba.
Further complicating matters is the concept of a NetBIOS workgroup (the workgroup parameter you define in Windows networking), which has no real analog in UNIX. In that regard, it is customary for one NetBIOS group to be defined for all machines. If your installation requires the use of multiple NetBIOS groups, you will need more than one server, since Samba can only be a member of one such group (which is also true of a real Windows machine).
;disk share specific...
;
; %p NFS automounter path to share's root directory
; %P share's root directory, not necessarily identical to the
;    parameter passed in %p
; %S share name as seen in NetHood
References to the above three variables are useful within smb.conf statements that deal with specific disk shares. In Windows-speak, filesystem resources on one machine that can be accessed by another are called shares (a kind of New Age, touchy-feely term for what we UNIX dinosaurs might call an NFS export). A Samba disk share exposes part of the underlying UNIX filesystem to Windows clients, to which they can gain access via normal Windows methods (e.g., Network Neighborhood -- NetHood -- or by mapping drive letters). As on a real Windows server, you would normally expose only a limited part of the filesystem in this way. You definitely would not want to expose areas like /bin or /etc to PC clients -- unless, of course, you like fixing busted systems.
The recommended practice is to set up a directory tree (possibly on a completely separate disk from the root disk, as I have done on our office server) and make everything under it the Windows filesystem. I usually refer to the root of this tree with a symbolic link named vfs ("virtual filesystem") so I can relocate the entire mess without having to change all references within smb.conf.
One of the interesting features of Samba is that it can map NFS exported filesystems into the virtual Windows filesystem that Samba creates for its clients. The %p variable (that's a lower case 'p') tells Samba the local path that has been mapped to the particular NFS filesystem being shared (if a term such as automounter doesn't mean anything to you, you need to read up on NFS in general). The %P variable, on the other hand, points to the actual location of the exposed portion of the filesystem, that is, the UNIX pathname, such as /vfs/profiles. Note that a Windows client would not actually see /vfs/profiles in NetHood, but would instead see a share name such as PROFILES (if that's the name that was assigned).
; printer share-specific...
As with disk resources, you can expose your UNIX printers to Windows clients. The mechanism by which this is done will be described later when we get to the printer shares part of smb.conf. References to the above four variables are useful within smb.conf statements that deal with specific printer shares.
; %d current server process ID -- different for each connected user
; %h server's DNS host name
; %L server's NetBIOS name
; %N NIS home directory server, if defined
; %v Samba version, e.g., 3.0.2
With the lone exception of the %d variable, the above variables define information that is specific to the host machine on which Samba is running. %d defines the UNIX process ID (PID) on a per daemon basis.
; miscellaneous...
;
; %R negotiated SMB protocol level
; %T current date & time
; %$(var) shell environment variable, e.g., %$(PATH)
The above are self-explanatory. %R is seldom used in the normal course of events but could be useful for debugging certain types of network problems that you hopefully will never encounter. Incidentally, if you refer to shell variables in smb.conf make sure that they are properly defined by whatever starts Samba. Otherwise, Samba's error log will quickly fill up with possibly cryptic complaints.
; GLOBAL DEFINITIONS
;
[global]
The structure of smb.conf strongly resembles a typical Windows .ini file. Sections within the file are demarcated by bracketed words, such as [global]. The following statements are a series of assignments, with a parameter name on the left hand side of the expression and one or more parameters on the right hand side. As a general rule, case doesn't matter unless referring to specific UNIX resources (such as directory names), where, as always, case does matter.
The [global] section refers to parameters that affect all aspects of Samba's operation. Some global parameters can be overridden by reassignments within share definitions. This feature allows you to define defaults for most cases and then create exceptions for specific shares.
; basic server definitions...
;
browsable= yes
deadtime= 30
domain master= yes
local master= yes
netbios aliases= alarms-bcs
netbios name= unifismb
os level= 65
preferred master= yes
server string= BCS Technology -- Samba V%v
workgroup= bcs
The above (in the order given) tell Samba that by default all shares on this server should be visible in NetHood; idle Samba sessions should be killed after 30 minutes if no files are opened; this server is the Windows domain master browser; it is also the local master browser (which would matter if other Samba and/or Windows machines are interested in assuming that role); this server responds to the NetBIOS names alarms-bcs (an alias I use with an alarm clock feature on our system) and unifismb (its primary NetBIOS name); that of all the machines on the network, this one has the highest probability of winning browse master elections -- we've "rigged" the elections by setting the os level real high; this machine is the preferred master browser if more than one such machine exists on the subnet; the phrase BCS Technology -- Samba V%v will appear as a comment associated with this server when viewed through NetHood (where the Samba release version will be inserted by the %v variable); and that the NetBIOS workgroup is bcs, which is also the name of the Windows domain supported by this system.
Speaking of browsing, note that only NT/2000/2003 and XP Professional Windows versions are capable of being domain members and master browsers. 95/98/ME can execute a domain log-in but cannot act as a domain member or master browser. XP Home Edition is totally unsuitable for client-server environments, something that you should keep in mind when you get ready to buy a new PC. Actually, my opinion of XP Home is that it isn't suitable for much of anything, except for dumbing down the machine to the intellectual level of a trained gorilla and executing the worm du jour. :-)
; network configuration...
;
bind interfaces only= yes
hosts allow= 127.0.0.1 192.168.100.
interfaces= net1
keepalive= 180
name resolve order= wins host lmhosts bcast
socket options= TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384
Here's where things get a little more complicated. The particular machine from which this smb.conf was copied is multi-homed, with one of the two network interfaces directly routed onto the Internet with a static IP address. I don't want Samba exposing anything to the outside world, so I added the first, second and third statements to limit the scope of Samba's connectivity. The first statement tells Samba to not automatically scan the system for network interfaces on which to advertise services, which it would do by default. The hosts allow statement defines the subnets and machines from which I am willing to allow connections to Samba. In this case, anything on the (internal) 192.168.100 subnet is acceptable. The local host loopback address (127.0.0.1) is required because I told Samba not to go looking for interfaces to link up with. As is the case with the majority of UNIX applications that utilize networking, Samba will not correctly function if it cannot communicate with the loopback device.
interfaces = net1 tells Samba which of the machine's network interfaces it is to communicate through -- implying that any other interfaces present in the server should be ignored. Without this specification, Samba would listen for connections on both network interfaces, which would be contrary to the hosts allow statement. Such an ambiguous condition would undoubtedly populate Samba's running log with complaints about unauthorized attempts to connect as various and sundry external machines probed the server for a way in. Absent these parameters, Samba will accept connections from any interface, and from any client, subject to further authentication rules that you might implement. Incidentally, you could specify an IP address in place of the net1 parameter and further qualify that address with a CIDR (Classless Inter-Domain Routing) suffix. For example, interfaces = 10.1.2.10/29 would tell Samba, in effect, to accept traffic from the 10.1.2 subnet, with masking to 29 bits (i.e., 255.255.255.248 would be the subnet mask). Naturally, such a subnet would have to be accessible to the machine -- you'll quickly find out if it isn't when you go to start Samba!
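The /29 arithmetic above is easy to check with Python's standard ipaddress module (nothing to do with Samba itself; just a convenient way to verify the mask before committing it to smb.conf):

```python
import ipaddress

# strict=False because 10.1.2.10/29 names a host address, not the
# network address of the block it sits in
net = ipaddress.ip_network('10.1.2.10/29', strict=False)

str(net.netmask)          # -> '255.255.255.248'  (twenty-nine one-bits)
str(net.network_address)  # -> '10.1.2.8'         (the block is .8 through .15)
net.num_addresses         # -> 8 addresses in a /29
```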
keepalive = 180 tells Samba to send keepalive packets to each client at 3 minute intervals. This is a mechanism that allows Samba to detect when a client goes down and is a required parameter with SCO OSR5. With other UNIX flavors the SO_KEEPALIVE socket option can be used to achieve the same result. Without periodic keepalive checks, Samba smbd daemons might continue to uselessly run for a while (see deadtime above) if the clients they were serving crashed or otherwise unceremoniously went offline, thus wasting server resources.
The name resolve order = wins host lmhosts bcast parameter tells Samba how to resolve NetBIOS machine names to IP addresses. In this example, Samba would first consult its WINS (Windows Internet Naming Service) database, which it builds from information sent by clients as they go on-line. If that failed, Samba would next do an nslookup operation (i.e., consult with DNS if configured), followed by a lookup in the lmhosts file if present (seldom used anymore on most systems). If all else fails, Samba will try to achieve resolution by resorting to subnet broadcasts.
socket options = sets network socket options that Samba may negotiate with the kernel when establishing network connectivity. The available options and parameters are somewhat system-dependent. The above socket options were ones that I arrived at by experimentation in an effort to optimize performance. If you go too far astray with the SO_RCVBUF and SO_SNDBUF values, you may either hurt performance due to excessive packet fragmentation or exceed a system-defined limit on the maximum permissible packet size, the latter of which may prevent Samba from starting. The 16384 values are a good balance between avoiding fragmentation and making best use of available network resources. Your system's mileage may vary.
; default share settings...
;
hide dot files= yes
kernel oplocks= no
level2 oplocks= no
map archive= no
map hidden= yes
map system= no
oplocks= no
wide links= no
The above parameters are defaults that apply to all disk shares that are defined later in smb.conf. In UNIX, a file whose name begins with a . (e.g., .profile) is normally invisible to regular users in ls listings. The hide dot files parameter tells Samba to honor that feature, causing such files to behave as though the Windows hidden attribute has been applied.
Windows has a feature called opportunistic locks (oplocks) which is touted as a performance enhancement in situations where one workstation would lock all or part of a file as a prelude to updating the file's content. The theory is that the station that locked the file could relinquish the lock in specific situations where another workstation needs short-term access to the same file (usually a passive read operation). Unfortunately, oplocks are not entirely trustworthy and some Windows applications will choke on them. Therefore, I never enable oplocks because I can never be sure that all installed software on all clients will work with them. The kernel oplocks parameter is for use only with those UNIX versions that support them (OSR5 doesn't). Do not enable them unless you are sure they are supported, and only if you enable oplocks and level2 oplocks. My advice is to steer clear of oplocks.
The three map... parameters allow you to map Windows filesystem attributes into the UNIX environment. UNIX directories and files do not have equivalents to any of these attributes, so what Samba does is map them onto various permission bits associated with the file. When viewed on the UNIX side, files with such mappings will appear to have odd combinations of permissions. You should enable these features if you wish to copy Windows files to the server and retain these specific attributes.
The wide links parameter tells Samba whether Windows clients can follow symbolic links that lead outside of the scope of defined disk shares. For example, with wide links off, a symlink in a disk share that points to, say, /etc/ntp.conf would not be honored by Samba. With wide links on, the symlink would be valid for Windows access, which could be hazardous to the health of the file in some cases. I do not recommend the enabling of wide links on any production Samba server.
; user & machine access configuration...
;
domain logons= yes
encrypt passwords= yes
guest account= vfs
guest ok= no
invalid users= daemon bin sys adm uucp nuucp auth \
asg cron sysinfo dos mmdf network \
backup nouser listen lp audit
valid users= dave kenna stan steggy vfs
The above parameters define some of the essential elements of user connectivity. The domain logons and encrypt passwords parameters say that this machine will authenticate Windows users logging into the bcs Windows domain (which was defined earlier), using encrypted passwords (encryption is a bit of a misnomer: Windows passwords are actually repeatable hashes and are not all that difficult to crack). Also implied by these parameters is that a Samba password database must have been created so as to authenticate users. You can allow clear text passwords if you choose by setting encrypt passwords to no, which is not recommended, but may be necessary with older versions of Windows (e.g., Windows 95 OSR 1.0) that do not support the hashing of passwords.
Speaking of user access, 2000/XP has what is known as a "guest account," which is basically an unauthenticated user who is granted limited access to the system. You can do likewise in Samba by assigning a Samba user with the guest account parameter and by setting guest ok = yes. Printing usually requires that a guest account statement be present. Note that although I defined a guest account, I explicitly disabled guest access because our office system is a Windows domain that requires appropriate user authentication. If you decide that guests are okay, be careful to limit their access to system resources. Well-intentioned but clueless users can break things for you, and you will be just as clueless trying to figure out who did what to what.
On all UNIX systems there are a number of user accounts that are associated with various administrative functions that may have greater privileges than what you are willing to extend to Windows users. You do not want to ever give such accounts access via Samba, which is the purpose of the invalid users parameter. This is especially important if the server is exposed to the Internet. If a username is present in this statement, that user cannot connect, even though their username is later specified in a valid users list. Incidentally, the above invalid users example illustrates how to continue a logical line onto several physical lines. This is a handy mechanism for use when the number of parameters associated with a statement is large.
The valid users statement specifies which Windows users are allowed to connect to Samba. This parameter can be used to limit access to a Samba server when it is on the same subnet as another Samba or Windows server. Within share definitions, this list can be overridden by a more restrictive list, which will be illustrated later on. Absent a valid users list, any user could theoretically connect to this machine unless their username appears in the invalid users list. The general rule is that if a user is not listed in the invalid users list and there is no valid users list, then that user will be granted access rights, subject to authentication.
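The interaction between the two lists boils down to a simple rule -- invalid users always wins, and a missing valid users list means everyone else may try. Here's that rule as a little Python sketch (my own paraphrase, not code from Samba):

```python
def may_connect(user, invalid_users, valid_users=None):
    """Paraphrase of Samba's invalid/valid users access rule."""
    if user in invalid_users:
        return False          # invalid users always takes precedence
    if valid_users is None:
        return True           # no valid users list: anyone else is eligible
    return user in valid_users

# Lists modeled on the smb.conf excerpt above (abbreviated)
invalid = ['daemon', 'bin', 'sys', 'lp']
valid = ['dave', 'kenna', 'stan', 'steggy', 'vfs']

may_connect('dave', invalid, valid)   # -> True: listed as valid
may_connect('lp', invalid, valid)     # -> False: invalid wins regardless
may_connect('mallory', invalid)       # -> True: no valid users list defined
```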
;; logon drive maps the given drive letter to the user's home directory on
;; WinNT/2K/XP clients -- it has no effect on Win95/98/ME...
;
logon drive= p:
;
;; logon home is for storing Win95/98/ME user profiles, which are further
;; organized on a machine-by-machine basis...
;;
logon home= \\%L\%u\%$(WINPROF)\%m
;
;; logon path is for storing WinNT/2K/XP user profiles, which are further
;; organized on a machine-by-machine basis...
;;
logon path= \\%L\%$(WINNTPROF)\%u\%m
;
;; a separate log-in script (batch file) is created for each user on this
;; system...scripts can be customized as required...
;;
logon script= %$(SMBUSERSCRIPTS)\%u.bat
The above statements define how Samba will behave at the time of a valid user log-in. During the domain log-in sequence Samba can be configured to provide various services that establish a desired degree of connectivity. The above parameters specify what those log-in services will be. logon drive = p: defines the drive letter that will be mapped to the user's home directory share when s/he logs on to a 2000/XP machine. To establish the same relationship with a 95/98/ME unit it is necessary to insert a net use p: /home statement into the user's log-on script (batch file). Incidentally, you are free to use any drive letter that has not already been assigned in Windows. I use p: because each user's home directory is "private" storage that only s/he and the system administrator (root) can get into.
Despite what it might suggest, logon home = \\%L\%u\%$(WINPROF)\%m has nothing to do with the user's home share. What it does is tell Samba where to store 95/98/ME roaming profiles (the information Windows generates for each user about the desktop layout, Internet Exploder...er...Explorer cookies and history, etc. -- logon home is ignored by 2000/XP machines). At log-on time, the client machine, if 95/98/ME, will retrieve the roaming profile from the server and use it to present the user interface. logon home is an example of using Samba and shell variables to tailor the statement to the specifics of each user's environment.
Recall from above that %L is the Samba server's NetBIOS name (unifismb in this case), %u is the user's UNIX username and %m is the client machine's NetBIOS name. On our office system, WINPROF is a shell variable that specifies the subdirectory under the user's home directory where 95/98/ME roaming profiles will be stored (WINPROF is defined in a custom configuration file called /etc/default/samba, which I use to externalize some aspects of the Samba environment for easier adaptation to different systems). When Samba parses \\%L\%u\%$(WINPROF)\%m, it will substitute the relevant values for each variable and assign the result to the logon home parameter. If, for example, the log-in user is tony, the roaming profile subdirectory is .winprofile and the client machine on which Tony is working is bcsa0001, the statement will expand to logon home = \\unifismb\tony\.winprofile\bcsa0001.
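The substitution Samba performs can be sketched in a few lines of Python (this is an illustration of the expansion, not Samba's actual implementation; the values are the hypothetical ones from the Tony example above):

```python
def expand(template, subs):
    """Expand %X-style markers the way Samba does for logon home/path."""
    for marker, value in subs.items():
        template = template.replace(marker, value)
    return template

subs = {
    '%L': 'unifismb',              # server's NetBIOS name
    '%u': 'tony',                  # UNIX username of the log-in user
    '%m': 'bcsa0001',              # client machine's NetBIOS name
    '%$(WINPROF)': '.winprofile',  # shell variable from /etc/default/samba
}

expand(r'\\%L\%u\%$(WINPROF)\%m', subs)
# -> \\unifismb\tony\.winprofile\bcsa0001
```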
It is essential that you generate a separate profile subdirectory for each 95/98/ME client so as to preserve the specifics of each machine's configuration, especially the desktop layout. If you make the (common) error of using one subdirectory to store this user's roaming profile, e.g., \\unifismb\tony\.winprofile -- no client machine specified in the path, the retrieved profile will be that of the last machine on which Tony was working. If Tony had been working on bcsa0001, logs out and then logs on to bcsa0002, he will see bcsa0001's environment, which, unless both machines are identically configured and have identical copies of the same software, would be wrong (and could possibly result in a major mess).
logon path = \\%L\%$(WINNTPROF)\%u\%m is to 2000/XP machines as logon home = \\%L\%u\%$(WINPROF)\%m is to 95/98/ME boxes: it defines where 2000/XP roaming profiles are to be stored. The principles of parameter substitution apply as they do with logon home. WINNTPROF is another shell variable that is defined in my /etc/default/samba configuration file.
Following authentication, Windows will attempt to execute a log-on script (aka batch file -- MS-DOS mumbo-jumbo stored in a text file on the server) as specified by the logon script = %$(SMBUSERSCRIPTS)\%u.bat statement. Log-on scripts may be used to set up the client machine's environment, such as drive letter and local printer mapping, sync the workstation's clock to the server's, etc. Here's an example of such a script:
rem @echo off
net time \\unifismb /set /yes
net use lpt2: \\unifismb\hp2250_00 /yes
net use lpt3: \\unifismb\oki395 /yes
net use h: \\unifismb\apps
net use p: /home
net use r: \\unifismb\public
net use s: \\unifismb\shared
rem pause
I won't go into any detail as to what these statements mean, except to explain that, in theory, most anything that can be executed in a standard MS-DOS batch file could be used in a log-in script. By the way, although log-on scripts are stored on the UNIX filesystem they must be formatted as DOS-compatible files. In other words, each line of text must be terminated with an ASCII carriage return (<CR>) and ASCII line feed (<LF>), which is not the same as the UNIX convention of terminating lines with <LF> alone. Be aware of this requirement if you decide to use a UNIX text editor such as emacs or vi to create log-on scripts. Otherwise, you will find yourself chasing your tail trying to figure out why your scripts aren't working. Within the vi editor you can insert a <CR> by typing Ctrl-V and then typing Ctrl-M or the Enter key.
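One way to sidestep the line-ending trap entirely is to run finished scripts through a unix2dos-style conversion. A minimal Python sketch (my own helper, shown operating on strings; you'd read and rewrite the .bat file in practice):

```python
def unix2dos(text):
    """Terminate every line with CR+LF, as Windows batch files require."""
    # Normalize first so lines that already end in CR+LF are not doubled
    return text.replace('\r\n', '\n').replace('\n', '\r\n')

# A fragment like the log-on script above, written with UNIX line endings
script = 'net time \\\\unifismb /set /yes\nnet use p: /home\n'
dos_script = unix2dos(script)
# every '\n' is now '\r\n', which the Windows batch interpreter expects
```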
Important to note is that the initial Windows search path for log-on scripts is hard coded into the form \\<server_NetBIOS_name>\netlogon (netlogon is a special disk share that will be discussed later on). Here again, parameter substitution may be used to tailor the statement to the user who is being authenticated. Our friend Tony, upon logging in to our unifismb Samba server, would probably see a brief "console" window appear on his desktop, in which statements read from \\unifismb\netlogon\smbscripts\tony.bat would execute and any output caused by those statements -- including errors -- would appear. The smbscripts component of the path is derived from %$(SMBUSERSCRIPTS), which is also defined in my custom configuration file, /etc/default/samba.
Hint: debugging log-on script errors can be somewhat difficult because the console window normally closes as soon as all script statements have been executed. If you need to "single step" a script consider placing a pause statement in appropriate locations in the script, or a single pause at the very end to hold open the window so you can read any errors that were generated.
min passwd length= 4
null passwords= no
smb passwd file= /etc/samba.d/smbpasswd
security= user
username map= /etc/samba.d/smbusers
The above statements define some additional security parameters that Samba will use while authenticating users. The first and second entries are self-explanatory. Unless your system has very limited access, you do not want to allow null passwords, that is, allow users to not have to use a password to get in. min passwd length and null passwords do not conflict with each other: if a minimum password length is stipulated but null passwords are acceptable, a user could get in with no password, but if a password were used, it would have to be at least min passwd length characters.
smb passwd file = /etc/samba.d/smbpasswd and username map = /etc/samba.d/smbusers tell Samba where to look for the files that identify authorized users. smbpasswd stores usernames, their corresponding UNIX UID's and the hashes of their passwords, while smbusers maps NetBIOS usernames to UNIX equivalents when there are differences between the two (smbpasswd also stores details about machine trust accounts that are associated with 2000/XP clients that are part of the Windows domain).
The security of Samba itself is no better than the security given to these two files. Whatever directory you choose to store these files, be sure to set its ownership to root:sys and permissions to rwxr-x--x. smbpasswd should have rw------- permissions and smbusers should be set to rw-r--r--. Both files must be owned by root:sys as well. I cannot overemphasize how important this is. Without proper protection of these files a malicious user could, for example, hand-enter him/herself into the smbpasswd file and thus become an authorized Windows user.
username map = /etc/samba.d/smbusers accounts for a sticky configuration issue that arises when a user's UNIX and NetBIOS usernames are different. Here's a simple one-line smbusers example:
gwash = "george washington"
This entry tells Samba that Windows user george washington (George's NetBIOS username, which is surrounded by double quotes so Samba will not think that it is really two names) is actually UNIX user gwash. Hence you may map long-winded NetBIOS usernames such as Bill Clinton to more succinct UNIX equivalents, such as hillary. If smbusers is present but a specific Windows user is not mentioned within, that user's UNIX username will be assumed to be the same as his NetBIOS username. In all cases, UNIX directory and file ownerships map to UNIX usernames, not to NetBIOS usernames. Be careful to distinguish between the two means of referring to a user.
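A toy parser makes the mapping direction concrete (this handles only the simple one-entry-per-line format shown here; Samba's real parser supports more, such as wildcards and multiple NetBIOS names per line):

```python
import shlex

def parse_smbusers(lines):
    """Build a NetBIOS-to-UNIX username map from smbusers-style lines."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith((';', '#')):
            continue                       # skip blanks and comments
        unix_name, rhs = line.split('=', 1)
        for netbios_name in shlex.split(rhs):  # shlex honors the quotes
            mapping[netbios_name] = unix_name.strip()
    return mapping

users = parse_smbusers(['gwash = "george washington"',
                        '; a comment line',
                        'hillary = "Bill Clinton"'])
users['george washington']   # -> 'gwash'
users['Bill Clinton']        # -> 'hillary'
```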
Lastly, security = user means that access to Samba-controlled resources will be granted or denied on a user-by-user basis, which is the default for all current Windows versions. Only the oldest versions of Windows are incapable of functioning at the user security level (and thus should be avoided).
; logging...
;
debug timestamp= yes
;
;; uncomment next entry to generate a log for each client machine
;;
;log file= %$(LOGDIR)/%m.log
;
log level= 0
max log size= 50
syslog= 0
Samba has extensive logging capability that may be used to keep track of how well the server is running, as well as to identify specific problems that may appear for any number of reasons. The first statement tells Samba to prefix each log entry that it generates with a date and time stamp, which is essential to debugging errors that are sporadic in nature. The following log file = %$(LOGDIR)/%m.log statement tells Samba to generate a log for each client machine that connects to the server. I place this entry in every smb.conf that I prepare but leave it commented out unless I need machine-specific details. Use this feature sparingly to prevent a bunch of huge logs from choking out your filesystem. You could also edit the %m parameter to a specific client name (e.g., bcsa0001) to log only one machine's activity. The %$(LOGDIR) parameter also comes from my custom /etc/default/samba configuration file and usually points to /vfs/logs.
The log level tells Samba how much detail should be logged. log level = 0 says only log items of significant interest, errors in most cases, or unauthorized attempts to connect. As you increase the log level the amount of detail in the log files will likewise increase. Any log level higher than 3 will increase the level of detail to the mind-numbing category -- such information is primarily of value to Samba developers.
The max log size = 50 parameter specifies how much any Samba log will be allowed to grow in kilobytes. When a log reaches max log size kilobytes it will be renamed and a new log will be started. syslog = 0 states that Samba should not make entries into the server's syslog. If you wish to have Samba write log data to syslog change syslog = 0 to anything higher than 0. For example, syslog = 1 will map Samba complaints onto syslog's LOG_WARNING status level. As with log level, increasing the syslog value will increase the amount of verbiage recorded for posterity. I generally recommend against syslogging Samba complaints, as syslog gets enough junk as it is without Samba adding to the load.
;
; resource discovery & naming...
;
wins support= yes
Essential to the correct operation of Windows networking is the presence of some mechanism that can map NetBIOS machine names on to IP addresses. The possible methods available with Samba are DNS lookups, WINS, LMHOSTS lookups and subnet broadcasts. If your server is expected to support both 95/98/ME and 2000/XP clients WINS is necessary for best performance and reliability. 95/98/ME machines cannot use DNS at all and LMHOSTS files are a royal pain to maintain (there has to be an up-to-date copy of LMHOSTS on every machine). Absent DNS, WINS and LMHOSTS, all machines will resort to subnet broadcasts, which are grossly inefficient and unreliable. Also, broadcast packets usually cannot jump through routers, which means that resource discovery will not function across subnets unless WINS or an LMHOSTS file is used.
In order for WINS to work there has to be an accessible WINS server somewhere in the domain. Adding to the complication, a Windows or Samba server acting as a local master browser has to be present on each subnet (thanks to Microsoft's greediness) to support cross-subnet browsing. One of these servers can also act as the WINS server -- be sure to pick the most reliable machine to perform this critical function.
Samba is capable of acting as a WINS server, a feature that you may enable with the wins support = yes statement. wins support = no will disable WINS on this server, thus requiring that another machine on the network handle name resolution requests. In such a case, a wins server = <ipaddr> statement must be entered into smb.conf to tell Samba where to direct its WINS discovery queries (where <ipaddr> is the IP address of the WINS server). Note that wins support and wins server statements are mutually exclusive: Samba will complain at startup if both statements are present in smb.conf.
; miscellaneous...
;
max smbd processes= 25
message command= /usr/local/bin/setalarm %f %s
printing = sysv
read raw= yes
time server= yes
The above global parameters fall into the "everything else" category. One of the problems frequently encountered in Windows networks is resource hogging. With Samba, each connected client starts a new instance of the smbd daemon, thus consuming some percentage of the available machine resources. If your Samba server is also expected to support other UNIX activities you should include a max smbd processes = n statement to limit the number of Samba daemons that can be concurrently running. Without this statement, Samba will spawn as many new daemons as required to support all connections. In some cases, a runaway condition can occur that could take down the server. Avoid setting this value any higher than necessary to support the maximum expected load (one smbd daemon per connected client plus the "master" daemon) plus two or three "spares."
The message command = statement tells Samba what to do when it is the target of a WinPopUp message. By default, Samba will not respond to a WinPopUp, which will cause the sending client to report an error (except Windows for Workgroups 3.11). You can use the message command = statement to define a specific action that your server should take when it receives a WinPopUp. A guest account must be defined (see above) and must be assigned to a UNIX user who actually exists and has some access privileges.
On our office server, it is possible for users to set an alarm by sending a specially formatted WinPopUp message to the server. In the message command = /usr/local/bin/setalarm %f %s statement, the %f parameter identifies the sending user and the %s parameter identifies the name of a file containing the message -- this file is created by Samba in response to the WinPopUp. The setalarm shell script parses the file identified by %s and uses the information within to set up the alarm as an at job. When the at job is finally executed the server will find out where the user is logged in (it uses a script that parses the output of Samba's smbstatus utility) and will send a WinPopUp to him or her announcing the alarm.
If you decide to implement a message command function on your server, be sure that whatever is executed immediately returns control -- in other words, execute the program asynchronously. Otherwise, the WinPopUp function on the client machine will stall until message command has finished.
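A hypothetical handler following that rule might look like this. It is a sketch of the pattern, not the author's actual setalarm script; Samba passes the sending user as %f and the message-file path as %s:

```shell
# Hypothetical message-command handler; Samba would invoke it with the
# sending user (%f) and the path to the spooled message file (%s).
handle_winpopup() {
    from_user=$1
    msgfile=$2
    # Do the real work in a background subshell and return at once --
    # otherwise the sender's WinPopUp dialog stalls until we finish.
    (
        # ... parse "$msgfile", schedule an at(1) job, reply to the user ...
        rm -f "$msgfile"
    ) &
    return 0
}
```

The key design point is the trailing `&`: control returns to Samba immediately while the slow work proceeds in the background.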
printing = sysv tells Samba the type of print subsystem supported by the operating system. Acceptable definitions are BSD, AIX, LPRNG, PLP, SYSV, HPUX, QNX, SOFTQ, and CUPS. printing = sets a default for use by any subsequent printer shares that do not specify another method of printing. In this case, printing = sysv says that absent more specific information, Samba should submit and control print jobs using AT&T UNIX System V style commands and options. There is no default for this statement, so something should be present somewhere in smb.conf to tell Samba how to handle printing.
read raw affects overall network performance as related to how Samba processes SMB read data requests. read raw = yes allows any given SMB packet to be up to 64 KB in size, which will generally produce the best throughput on most systems. Unless you are running a WWG 3.11 client on your system you should not change this setting (it defaults to yes).
Lastly, time server = yes tells Samba to act as a time source for any Windows client that asks for the date and time (the proper MS-DOS syntax is net time \\<server> /set /yes). If you enable Windows time services be sure to configure ntpd to keep the server's clock accurate. Naturally, ntpd is of limited value if the server has no Internet access.
; ======================
; Disk Share Definitions
; ======================
A large part of what Samba does is expose a portion of the UNIX filesystem to Windows clients. On the client side, the filesystem looks just as it would on a real Windows machine, save that some of the bizarre behaviors for which Windows is famous won't be evident. There are some minor differences that are an unavoidable part of mapping UNIX file space over to Windows, but in practice, these differences usually are not a cause for concern.
In order to allow Windows clients to read and write on the UNIX filesystem it is necessary to define one or more disk shares in Samba, a process that is analogous to setting up shares on a real Windows server. Most shares that you might define will be for general purpose use by all users of the system, while a few shares will be available only under specific conditions. In all cases, each share definition starts with a bracketed name that will appear as the share name in NetHood, such as [shared] or [public] (with three exceptions, all disk share names are arbitrary). Following the bracketed name will be a series of statements that will tell Samba what it needs to know to properly present the share to clients. The only statement that is absolutely necessary is the one that indicates what part of the UNIX filesystem is being shared. Other statements may be added as needed to control share access or to define specific actions that should occur when a user connects to or disconnects from the share.
For the purposes of discussion, I broadly classify disk shares as static and general. Static shares define resources that may be needed to support basic Windows networking functionality -- they have analogs on a real Windows server and must be present if the Samba server is to act as a primary domain controller. General shares support whatever functionality is required by the local installation and may be added or subtracted as needs change.
; static share definitions...
;
[homes]
comment= %U's private storage
browsable= no
create mask= 0600
directory mask= 0700
path= %$(VFSDIR)/private/%u
root postexec= %$(SMBSCRIPT) O %m %I %G %u %U %P
root preexec= %$(SMBSCRIPT) I %m %I %G %u %U %P
volume= %U
writeable= yes
csc policy= disable
The layout of the above definition is typical for all shares. The definition starts with its name, [homes] in this case. Samba will assume that all statements that follow the share name are part of the share definition until a new bracketed name is encountered or EOF has been reached. [homes] is one of the special Windows shares that is expected to be found on the PDC. It defines the per user private server storage that is (should be!) available to each authenticated user. Without a [homes] share, a log-on script statement such as net use p: /home will not work -- there wouldn't be anything to call home.
Following the share name is a comment, which can be any reasonable text. The comment will appear in NetHood when the server is browsed. In the above comment, the %U variable will be replaced with the user's NetBIOS name. Comments are not required but are a good idea, especially for shares that are generally accessible. Without a suitable comment, users might be forced to browse the share to figure out what it contains. Obviously, the comment should make sense to your users -- try to avoid cryptic computer mumbo-jumbo!
The balance of the statements define the share's characteristics.
The create mask = and directory mask = statements define how UNIX filesystem permissions will be applied to newly created files and directories controlled through this share. The perms are exactly as you would specify with the chmod command and in this case, would create new files with rw------- perms and directories with rwx------ perms. As this is private storage, you must restrict access only to the owning user.
The path = statement identifies the part of the UNIX filesystem that is mapped to the share. The statement path = %$(VFSDIR)/private/%u is special in that it will vary depending on the identity of the connecting user. Recall that each client connection spawns a new instance of the smbd daemon and that each daemon reads smb.conf in its entirety at start up. Therefore, the [homes] share will map to a different part of the private subdirectory, based upon the UNIX username of the connecting user. If VFSDIR is defined as /vfs and the user's UNIX name is kenna, the path will be /vfs/private/kenna and the displayed share name will be kenna (not homes, as one might think). If the log-on script for this user includes the statement net use p: /home, the p: drive on kenna's PC will be mapped on this server to \\unifismb\kenna. Naturally, the directory should be accessible only to user kenna.
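The per-connection substitution can be pictured with a toy expander (illustrative only — real smbd supports many more variables, and the expansion is case-sensitive, so %u and %U are distinct):

```python
def expand(template, ctx):
    """Expand smb.conf-style %x variables; case-sensitive, like smbd."""
    out = template
    for var, val in ctx.items():
        out = out.replace("%" + var, val)
    return out

# %u is the connecting user's UNIX name, %U the requested NetBIOS name.
print(expand("/vfs/private/%u", {"u": "kenna", "U": "kenna"}))
# -> /vfs/private/kenna
```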
The root postexec = %$(SMBSCRIPT) O %m %I %G %u %U %P and root preexec = %$(SMBSCRIPT) I %m %I %G %u %U %P statements specify actions that are to occur on the server when the user disconnects and connects, respectively, to his/her home share. SMBSCRIPT (defined in my /etc/default/samba configuration file) defines the program that will be executed as root (with all the risks that that entails). This is another custom script that I have developed to automate some new user server-side tasks, as well as to maintain a log of each user's comings and goings. It makes use of several Samba variables to adapt its operation as needed.
The volume = %U statement is used to define the volume name that will be seen when an MS-DOS style directory listing of the share's root is displayed. It affects the Volume in drive x is part of the directory listing. If the user's NetBIOS name is tony and the mapped drive letter is p:, tony will see Volume in drive P is tony.
The writeable = yes statement means what it says: the disk space assigned to this share is treated as read/write by Samba, subject to the underlying UNIX permissions. Note that if the disk space is set to read only in UNIX, Samba will treat it as read only, even in the presence of a writeable = yes statement. writeable = no would make the share read only, even though UNIX permissions have no such restriction. An alternate form of writeable = no is read only.
The csc policy = disable statement controls whether clients may cache file contents associated with this share (csc means client-side caching). Caching and oplocks have often been blamed for mangled files and for that reason, I normally disable this feature (as well as oplocks). I just can't see how a modest performance improvement can be justification for possible bizarre behavior. In addition to disable, options are manual, documents, and programs.
[netlogon]
comment= network services
browsable= no
create mask= 0640
directory mask= 0750
path= %$(NETLOGONDIR)
root preexec= %$(SMBSCRIPT) S %L "" %g %u %U %P
volume= netlogon
writeable= no
csc policy= disable
netlogon is the second of the three static shares. If this machine is to support domain log-ons, this share must be present. Following initial user log-on, the Windows client will look in this share for a log-on script as defined by the earlier logon script= %$(SMBUSERSCRIPTS)\%u.bat statement in the global section. The search is relative to the netlogon share. Most of the statements for this share are similar to homes, except that it is made read only to protect it from casual tinkering.
[profiles]
comment= 2000/XP user profiles
browsable= no
create mask= 0600
directory mask= 0700
path= %$(VFSDIR)/%$(WINNTPROF)
volume= profiles
writeable= yes
csc policy= disable
The profiles share is the third static share. This is where 2000/XP roaming profiles are stored and is referred to by the logon path = \\%L\%$(WINNTPROF)\%u\%m statement in the global section. If this machine is to support domain log-ons, this share must be present. Be sure that each authorized user has a profile subdirectory in this share before allowing them to log on. That subdirectory must have the user's name, be owned by the user and have access limited to the user (rwx------ permissions).
; general share definitions...
The following share definitions provide general storage access to all clients, subject to rules that may be imposed on a per share basis. I will not go into detail on every item, as context and reference to items already covered will be sufficient information in most cases.
[apps]
browsable= yes
create mask= 0640
directory mask= 0750
force user= vfs
path= %$(VFSDIR)/apps
volume= shared apps
write list= vfs
csc policy= disable
In the above definition there are some access control features not previously discussed. force user = vfs means that the effective UNIX user for this share will always be vfs, regardless of the identity of the Windows user seeking access. On our system, vfs is a pseudo-user who has administrative rights on all workstations and is also the Samba administrator. Only vfs has write permissions on this share, which is specified by the write list = vfs statement. All other users have read only permissions.
[install]
comment= installation software
browsable= yes
create mask= 0640
directory mask= 0750
force user= vfs
path= %$(VFSDIR)/install
valid users= vfs
volume= install
write list= vfs
csc policy= disable
In this share, the more restrictive valid users = vfs statement prevents anyone except vfs from gaining access, which makes sense in this case because vfs is the administrator. Incidentally, if you need to add more than one user to such access lists, separate the names with whitespace and quote names that have more than one word (e.g., "system administrator").
[logs]
comment= runtime logging
browsable= yes
create mask= 0640
directory mask= 0750
force user= vfs
path= %$(LOGDIR)
valid users= steggy vfs
volume= logs
writable= yes
csc policy= disable
Here two users are allowed access to the logs share. Although writable = yes would seem to give everyone write permissions on the share, the valid users = steggy vfs statement keeps everyone out except users steggy and vfs.
[public]
comment= public read-only access
browsable= yes
create mask= 0640
directory mask= 1750
force user= vfs
path= %$(VFSDIR)/public
volume= public
write list= steggy vfs
csc policy= disable
[shared]
comment= public read-write access
browsable= yes
create mask= 0640
directory mask= 0750
force user= vfs
path= %$(VFSDIR)/shared
volume= shared
writable= yes
csc policy= disable
The above two shares are typical of general access storage that you might define on your system.
[cdr00]
comment= UNIX CD-ROM 00
browsable= yes
force group= bin
force user= bin
path= /cdrom
writeable= no
csc policy= disable
This is an example of how you could share the CD-ROM in the server. Of course, this share won't be valid unless the CD-ROM filesystem has been mounted onto the /cdrom directory. The bin:bin user and group mapping should work with most systems. The writeable = no statement assures that Samba understands that a CD-ROM is a read only filesystem. Incidentally, writable and writeable are synonyms.
; ==============
; Printer Shares
; ==============
Printer shares expose UNIX printing resources to Windows clients, allowing users to route their print jobs to a server-controlled printer. This sort of arrangement reduces the number of printers that have to be purchased and maintained, which can effect a sizable cost-savings over time.
The share definition syntax is similar to that for a disk share, except some statements that are specific to Samba printing are used. On the client side, the share is seen as a Microsoft network printer and is accessed as it would be if it existed on a real Windows server. The client needs to have the appropriate drivers installed and configured. One client-side configuration setting that matters is spooling: it should be disabled in most cases, as the UNIX spooler will off-load that task from the client. The only reason you would enable client spooling would be if the UNIX print command associated with the share bypasses the lp spool service and writes directly to the device file for the target printer (a setup that should be avoided if possible).
[hp2000_00]
comment= ink jet in sales office -- HP 2000c
printer= hp2000a
path= %$(VFSDIR)/spool/hp2000a
print command= /usr/local/bin/smblp %p hp2000a %L %m %s
printable= yes
In the above definition, the printer = statement defines the Windows printer name, which can be different from the share name. The path = statement defines the subdirectory on the server where clients will submit their print jobs. This directory should be owned by the guest user defined in the global section (vfs on our system, who is also the administrator), have the same UNIX group as the system's Samba user and should have rwxrwx--T permissions. On our system, I have separate spool paths for each printer, primarily as an aid to debugging printing problems.
As the client machine generates the print job, the output will collect in the location defined by path =, with the client assigning a unique filename. When the job has been fully spooled, Samba will execute the optional print command to drive the target printer. If no print command has been defined Samba will use standard lp subsystem commands to drive the printer, the particular commands having been determined by the printing = statement in the global section. On our system, I use a custom script (smblp) that in addition to driving the printer, can also send the user a WinPopUp in case the print job encounters an error. The %s parameter to smblp is the name of the client-generated spool file.
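For illustration, the skeleton of such a wrapper might look like the following. This is a hypothetical outline, not the actual smblp script; note that when you define a custom print command, deleting the spool file is generally the script's responsibility, not Samba's:

```shell
# Hypothetical outline of a print-command wrapper like smblp. Samba expands
# the %-variables before running it, e.g.:
#   print command = /usr/local/bin/smblp %p <printer> %L %m %s
smb_print() {
    printer=$2     # the printer name, second argument in the invocation above
    spoolfile=$5   # %s -- the client-generated spool file
    if lp -d "$printer" "$spoolfile"; then
        rm -f "$spoolfile"   # the custom command must clean up the spool file
    else
        # a real script could send the user a WinPopUp (smbclient -M) here
        return 1
    fi
}
```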
Lastly, printable = yes simply means this printer share is enabled and users can print to the associated printer. Not including this statement in the share definition or saying printable = no will make the associated printer inaccessible to everyone. Speaking of accessibility, all of our office printers are accessible to all users, which works well for many situations. However, your needs may differ, especially if your environment includes printers designed for a specific purpose. For example, if you have a high-end color laser printer for printing advertising matter, you might not want the lady down the hall using it to print pictures of her family, cat, dog and the neighbor's pet monkey. You can restrict access to a select few users by adding a valid users statement to the appropriate printer share. The syntax is identical to that for a disk share.
After editing smb.conf always check it for errors with the testparm Samba utility. Doing so may save you a lot of grief, as errors caused by improper smb.conf statements can be exasperating to debug, especially if you are new to Samba. Finally, there are many more smb.conf statements available than I mentioned herein. If you have a specific requirement that wasn't covered in this article you need to visit the Samba website and peruse their documentation. Did I also mention books? Check out the Samba site for recommendations. Before you know it, you'll be on your way to becoming a Samba advocate when you see what it can do for you and your users.
Build a DialogFlow template fulfilment response in Python
Budget $20-50 AUD
Hi
I would like a template python program that builds a response to the Google DialogFlow request.
The template should contain responses for:
- Simple text responses
- card responses
- Suggestions
- Links out
- Lists
The responses should be able to have combinations of the above.
The card response should be as fully featured as possible (i.e. Headers, images, links etc.)
The template should be easy to select which elements (card, suggestions etc.) I want and to then just add the text to display, image URLs or link URLs. After I have added the text, images, URLs etc. the program should build the (valid) JSON response that can be handled by the Google DialogFlow agent.
The target for the ChatBot is a Google Hangout.
It should be based on the Flask framework.
i.e.
# default route
@app.route('/')
def index():
    return 'Hangout Chat Bot!'

# create a route for webhook
@app.route('/webhook', methods=['GET', 'POST'])
def webhook(request=None):
    # print("********** Received Message ************")
    return results()
Thanks,
Jesse.
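A sketch of the kind of builder being requested (my own illustration, not the poster's deliverable; the JSON shape follows the generic Dialogflow v2 fulfillmentMessages format, and the exact fields each integration accepts can vary):

```python
def build_response(text, card=None, suggestions=None):
    """Assemble a Dialogflow v2 fulfillment payload from optional pieces."""
    messages = [{"text": {"text": [text]}}]
    if card:
        # card is a dict such as {"title": ..., "imageUri": ..., "buttons": [...]}
        messages.append({"card": card})
    if suggestions:
        # quickReplies render as tappable suggestion chips on most platforms
        messages.append({"quickReplies": {"quickReplies": list(suggestions)}})
    return {"fulfillmentMessages": messages}

# The Flask webhook above would end with something like:
#   return jsonify(build_response("Hello!", suggestions=["Yes", "No"]))
```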
I've got to add a bunch of members to a class. I would like to add a macro such as this:
#define MEMBER(TYPE,NAME) \
private: TYPE m_##NAME; \
public: TYPE get##NAME() const { return m_##NAME; } \
public: void set##NAME(TYPE in##NAME) { m_##NAME = in##NAME; }
and then use it to add the members to the class:
class foo {
MEMBER(std::string, OutputDir);
MEMBER(int, MaxIterations);
MEMBER(double, OptimizationCutoff);
// And a couple dozen more members...
public:
// The rest of the class declarations
};
The intend is obviously to shortcut multiple instances of code like this, 3 lines for each of the multitude of class members:
private: std::string m_OutputDir;
public: std::string getOutputDir() const { return m_OutputDir; }
public: void setOutputDir(std::string inOutputDir) { m_OutputDir = inOutputDir; }
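For what it's worth, the macro as written does expand to valid C++; a minimal self-contained version can be compiled and exercised directly:

```cpp
#include <string>

// The macro from the question, reproduced verbatim.
#define MEMBER(TYPE, NAME) \
  private: TYPE m_##NAME; \
  public: TYPE get##NAME() const { return m_##NAME; } \
  public: void set##NAME(TYPE in##NAME) { m_##NAME = in##NAME; }

class foo {
  MEMBER(std::string, OutputDir)
  MEMBER(int, MaxIterations)
};
```

One practical wrinkle: a type containing a comma, such as std::map<int, int>, breaks the macro call (the preprocessor sees three arguments) unless you wrap the type in parentheses or a typedef — another mark against the approach.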
Are there any considerations against coding like that?
Firstly, I'd point out an observation. The macros you've defined here seem to be completely useless. I get what you're trying to do -- with one line of code declare a member variable and simple getter and setter methods.
I would counter: if you're just exposing these member variables with simple getters & setters, why not just make the member variable public and be done with it?
Are there any considerations against coding like that?
Yes, all the usual concerns about Evil misuse of macros. Let's go through a few as they pertain here.
Code that looks like this:
class foo {
    MEMBER(std::string, OutputDir);
    MEMBER(int, MaxIterations);
    MEMBER(double, OptimizationCutoff);
    // And a couple dozen more members...
public:
    // The rest of the class declarations
};
at a glance might be somewhat obvious in its construction, but there are many details that are obscured when it comes time to maintain or extend either foo, one of the members, or the macro itself.

For example, in just glancing at this I get the idea that you are declaring a member OutputDir, but there's really no indication if it's a data member or a member function. If it's a member function, what is the return type? What are the parameters? How would I declare a member function template? What are the template parameters?
You have built a syntax here that might save a keystroke or two when first typing the code, but can generate hours and hours of frustration and head-banging when someone needs answers to any of these questions. Since it isn't documented or supported by anybody but you, your macros end up being akin to a secret language that only you know.
When debugging code that uses a macro, you see the substituted text -- not the macro or how it was called. This can be very confusing.
Macros are a brute-force text substitution tool. They do not respect any namespace or scope, and are universally applied everywhere they are defined and invoked. This steps around the C++ type system completely and lets your write code that would otherwise evoke an error.
A common example is the definition of the min and max macros by Visual Studio, which clobber the functions by the same name in the Standard Library. Side effects can be bizarre and, as mentioned, very difficult to debug.
30 October 2008 07:56 [Source: ICIS news]
(Adds details throughout)
SINGAPORE (ICIS news)--The world's largest chemicals company BASF has posted a 37.5% plunge in its third-quarter net profits to €758m ($591.24m) due to high raw material costs and hurricane-related losses, the company said on Thursday.
“The impact of the global financial crisis on the real economy is speeding up and hitting harder,” said Jurgen Hambrecht, chairman of the board of executive directors of BASF.
Sales rose 13% for the company’s third quarter ended 30 September to €15.78bn on substantial price and volume increases, while operating income tumbled 10.7% to €1.51bn, from €1.69bn year on year, it said.
BASF said hurricanes Gustav and Ike, which ravaged US Gulf coast chemicals facilities, had shaved more than $100m (€78m) off its earnings for the period.
It had also incurred significant expenses from hedging naphtha purchases against increasing prices that came to nothing as oil prices plunged towards the end of the quarter.
Net profits for the nine months to 30 September, however, only dipped marginally to €3.23bn, from €3.27bn year on year, while company sales for the same period rose 11% to almost €48bn, from €43.25bn.
The company reported an across-the-board rise in sale volumes for all its divisions, with the oil and gas segment leading the increase with a 46% rise, while the chemicals and plastics segments recorded relatively modest rises of 19% and 4% respectively.
Looking forward, Hambrecht predicted an extremely challenging business environment for the company.
“The decline in demand in important markets, stockpiling by our customers and the fall in oil prices are all signs of a recessionary trend that is likely to sharpen in 2009,” he said.
The company now expected global economic growth of below 2.7%, chemical production growth of below 2%, average Brent crude of $105/bbl in 2008 and an average $1.45 per euro exchange rate, in addition to volatile raw material costs and risks in a further economic downswing.
Under the revised expectations, BASF had also shifted its outlook for the full year and expected to increase sales in 2008, while making every effort to “match” the previous years’ excellent earning before interest and tax before special items, it said.
($1 = €0 | http://www.icis.com/Articles/2008/10/30/9167413/basfs-q3-net-profits-fall-37.5.html | CC-MAIN-2013-48 | refinedweb | 389 | 54.52 |
We are rendering json into data-attributes in ERB. Eg:
location.html.erb
<div data-
location_json.erb
<%= @location.to_json(
only: [:id, :name, :lat, :lng],
methods: [:display_name]) %>
The rendered html looks like:
<div data-
But I would like it to be more readable, eg:
<div data-
Is there a way to tell ERB or to_json to render with single quotes instead of escaping double quotes?
Is it possible to render readable JSON into HTML data-attributes? By readable, I mean with the bare minimum of escaping.
in
location.html.erb
<div data-
Reference:
Any gsub(/'/, '&#39;') will escape quotes too.
You may even create a table and iterate over json's response, putting key in first td and value on the second. If you have one or two jsons will be nice. If you have a lot with delimited number of elements, create a line to each one.
It is not possible to use unescaped JSON in a data-attribute without breaking the html page.
However, it is possible to use escaped JSON that is more readable than the Rails/ERB default.
By default ERB escapes double quotes which makes JSON particularly hard to read. Instead
helper.rb
def my_html_escape(s)
  { '&' => '&amp;',
    '>' => '&gt;',
    '<' => '&lt;',
    "'" => '&#39;' }.each do |k, v|
    s = s.gsub(k, v)
  end
  s
end
location.html.erb
<div data-
produces
<div data- | http://www.dlxedu.com/askdetail/3/d48f723387324ab43d8a9aa4e0dfc71f.html | CC-MAIN-2018-39 | refinedweb | 219 | 67.96 |
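A self-contained check of that approach (the example JSON is mine, not from the question): because double quotes are left alone and single quotes become &#39;, the value can sit safely inside a single-quoted HTML attribute while staying readable:

```ruby
# Escape everything except double quotes, so the JSON stays readable
# inside a single-quoted data-attribute. Note the order: '&' must be
# replaced first, or the entities themselves would be re-escaped.
def my_html_escape(s)
  { '&' => '&amp;', '>' => '&gt;', '<' => '&lt;', "'" => '&#39;' }.each do |k, v|
    s = s.gsub(k, v)
  end
  s
end

json = %({"id":1,"name":"O'Neill & Sons"})
puts %(<div data-location='#{my_html_escape(json)}'>)
# => <div data-location='{"id":1,"name":"O&#39;Neill &amp; Sons"}'>
```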
The Royal Academy Summer Exhibition: chronicle250.com
Extended Version
Matt McGrattan, Head of Digital Library Solutions, Digirati.
A shorter version of this article can be found here.
- Fully searchable full text.
- Exhibitors should be identified in the catalogue text and linked back, via hotlinks on the images, to a searchable Index on the main Chronicle250 site.
- Index entries for a given Exhibitor should link to all occurences of that artist in the corpus of Exhibition catalogues.
- Pages for each year with rich scholarly articles, with tagging by topic.
- Normalised exhibitor names so that artist names that might appear in multiple different forms throughout the corpus of catalogues are identified as the same person.
- Provided a usable search experience both within an individual catalogue and across catalogues.
- Created a usable index of Exhibitors.
- Brought the content — catalogues, indexes, scholarly articles — together following Strick and Williams’ design brief to create the Chronicle250 site.
This article outlines how we successfully solved some of these problems.
N.B. Code samples throughout are intended as simple illustrations of an approach, and are not taken from actual production code.
Catalogues and Illustrations as IIIF
If we had been starting.
For ingest of digitised content, the DLCS provides users with a Portal interface where they can manually upload images, and APIs for bulk upload and creation of IIIF manifests from source images. The DLCS automatically generates, stores, and delivers jpeg2000 images, and derivative thumbnail images for fast delivery of static thumbnails using the IIIF thumbnail service.
Using the DLCS APIs Digirati created basic manifests on the DLCS for each of the 250 catalogue years.
For an example manifest, shown in the Universal Viewer click here.
For the same manifest, shown in the Mirador viewer, illustrating the interoperability of IIIF Content click here.
Note, this is a bare manifest, with no metadata added, no OCR text, annotations, or search service.
Starting with basic IIIF Presentation and Image API services, Digirati then used OCR (optical character recognition) and named entity recognition to enrich this manifest with metadata, searchable full text, and annotations of text and exhibitors.
Illustrations as IIIF
All of the images of artworks used throughout the site are also provided as IIIF Image API images. For example click here to view:
This example shows two works of art, each of which can be opened for high-resolution deep-zoom viewing using the IIIF APIs:
Using IIIF throughout made it easier to build a responsive site that worked at different resolution breakpoints while making use of the same image resources on the backend.
OCR
The DLCS can be provisioned, on a project by project basis, with a text pipeline, which uses a suite of micro-services to:
- create OCR text from a IIIF Image API source.
- normalise OCR to a standard common format (to ensure the DLCS is OCR-engine agnostic).
- provide OCR text as Open Annotation annotations (for display in IIIF Presentation API 2.x clients which do not support the W3C Web Annotation Data Model).
- do named entity recognition from controlled vocabularies, or from standard neural net models.
- store W3C and OA web annotations in an annotation server.
- index W3C and OA annotations alongside OCR text and provide IIIF Content Search API services.
For the Paul Mellon Centre we provisioned a custom version of this pipeline with specific features to improve the quality of output for the Royal Academy Exhibition catalogues.
Images in the catalogue are not always easily OCR-able:
In the image above, the text is skewed, there is bleed-through from the verso page, the text contains the long S (ſ), and the spacing/kerning of the typeface is very irregular.
Evaluating OCR options, Digirati tested:
- Tesseract 3.x and the LSTM based Tesseract 4
- Abby SDK
- Microsoft Azure Cognitive Services
- Google Vision
We tested for OCR accuracy, using a small sample of pages for which we created ground truth text, and for the number of named entities recognised in the text. Character accuracy was less important than success in identifying named entities, which is a factor of both character accuracy and image segmentation.
For example, comparing Azure and Google on one image:
{
"image": "",
"truth": [
"Arnesby Brown",
"Arnesby Brown",
"Arnold Gerstl",
"Barnard Lintott",
"Dod Procter",
"Edna Bahr",
"F. J. Sedgwick",
"Frank Eastman",
"George Harris",
"Glyn Philpot",
"Henry Bishop",
"James A. H. Hector",
"John Cole",
"John Simmons",
"John W. Schofield",
"Joseph Greenup",
"Julius Olsson",
"Kathleen M. Scale",
"Kenneth Green",
"L. Campbell Taylor",
"Laura Knight",
"Laura Knight",
"Marjorie Rodgers",
"Oliver Hall",
"Owen B. Reynolds",
"Philip Connard",
"Richard Einzig",
"Rowland Hilder",
"Stanhope A. Forbes",
"Stanley Grayson",
"Stanley Spencer",
"T. Leman Hare",
"Terrick Williams"
],
"google_missed": [
"Dod Procter",
"Rowland Hilder"
],
"google_extra": [],
"azure_missed": [
"Dod Procter",
"Frank Eastman",
"James A. H. Hector",
"Rowland Hilder",
"Terrick Williams"
],
"azure_extra": [
"James A.",
"Morning Haze",
"Pvecious Bane Rowland Ililder",
"Terriclc Williams"
],
"google_diff": 2,
"azure_diff": 9,
"overall best": "google",
"Accuracy (google)": 93.9,
"Accuracy (azure)": 72.7
}
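The per-image comparison above can be reproduced with a small scoring helper. The sketch below is illustrative, not the project's actual evaluation code; the accuracy formula (truth list length minus missed-plus-extra, over truth list length) is an assumption that happens to reproduce the 93.9 and 72.7 figures shown.

```python
def score_ner(truth, detected):
    """Compare a ground-truth name list against NER output.

    Returns (missed, extra, diff, accuracy). The accuracy formula
    here is an assumption chosen to match the JSON figures above.
    """
    truth_set, detected_set = set(truth), set(detected)
    missed = sorted(truth_set - detected_set)  # names the engine failed to find
    extra = sorted(detected_set - truth_set)   # spurious names the engine produced
    diff = len(missed) + len(extra)
    accuracy = round(100.0 * (len(truth) - diff) / len(truth), 1)
    return missed, extra, diff, accuracy
```

Against the 33-entry truth list above, an engine that misses two names and adds none scores (33 - 2) / 33 ≈ 93.9.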
Google Vision correctly recognises the long ſ (s), and although on some individual images Abby or Azure performed best, in general Google Vision was consistently good. Tesseract was not as accurate as any of the commercial cloud services.
Since the DLCS already has good support for Google Vision, and with no other OCR engine showing improved performance, OCR on the Royal Academy catalogues was done using Google Vision Document Text Detection via a DLCS service which:
- pulls jobs from the DLCS queue
- retrieves images using the IIIF Image API
- pushes images to Google using the Cloud Vision API
- retrieves Google Vision text output and normalises to a standard internal OCR format
This service also generates OA (Open Annotation) annotations via a proxy service and adds them to the IIIF Presentation API manifest, so they are available in any standard IIIF viewer.
Note that the kerning in historic typefaces is often very wide, which results in segmentation that identifies a single word as multiple words.
For example, in this image fragment, the OCR (as OA annotations) can be found here.
Thomas Banks is listed in the OCR as:
"TH O MAS B A N K S, "
Eight separate words, instead of two.
Character accuracy, however, is generally high. There are only two or three wrong characters in the text for the above image.
Named Entity Extraction
The DLCS has a named entity recognition service which uses IIIF, Spacy.io and W3C Web Annotations to tag regions of images with people, places, dates, organisations, and other classes of entity.
The DLCS service is tightly integrated with the DLCS pipeline and with the IIIF APIs and has many DLCS specific features and enhancements. The examples below use simplified code that abstracts away many of these features for clarity.
Basic Approach
For the Royal Academy catalogues we tested Spacy.io’s built in neural models for entity recognition, in combination with DLCS text services for OCR.
DLCS Text Service
The DLCS can provide text for any IIIF Image as either a single block of text, or broken into lines.
For example, for:
The fulltext for the image can be found here.
And the same text can be found as lines here with bounding boxes provided for each line.
Basic Named Entity Recognition
Spacy.io’s built in named entity recognition is illustrated below.
N.B. the version used on the DLCS contains many IIIF-aware enhancements, integration with other DLCS services, and custom pipeline steps (some of which are illustrated in simplified form below) to produce higher quality output.
Spacy install:
pip install spacy
python -m spacy download en

for ent in doc.ents:
    print(ent.text, ent.label_)
Entities found:
ు GPE
The Pictures,& ORG
Samaritan GPE
JOHN BAKER PERSON
R. A. PERSON
2 CARDINAL
MAS ORG
New GPE
Anchiſes PERSON
Troy GPE
CHRISTOPHER BARBER PERSON
Young Slaughter' PERSON
St. Martin' PERSON
5 CARDINAL
6 CARDINAL
BARRET ORG
R. A. Orchard- PERSON
7 CARDINAL
Grace PERSON
Dalkeith PERSON
one CARDINAL
4 CARDINAL
Spacy.io has identified some non-person entities, such as Troy as a geographical or political entity (GPE), along with cardinal numbers and other categories we are not interested in.
If we restrict to just person entities:

for ent in [e for e in doc.ents if e.label_ == 'PERSON']:
    print(ent.text, ent.label_)
We get:
JOHN BAKER PERSON
R. A. PERSON
Anchiſes PERSON
CHRISTOPHER BARBER PERSON
Young Slaughter' PERSON
St. Martin' PERSON
R. A. Orchard- PERSON
Grace PERSON
Dalkeith PERSON
We can see that the results are less than ideal. We have two of a possible five exhibitors, and we also have seven “persons” which are not exhibitors at all.
For a more modern catalogue page:
The results can be better:
Claire PERSON
Andrew PERSON
David Gammon- PERSON
John R. Merton PERSON
Glass Screen PERSON
John Hutton PERSON
Henry Bird PERSON
Percy Brown PERSON
Frederick G. Hughes PERSON
Frederick G. Hughes PERSON
Lipmann Kessel PERSON
M. C. PERSON
M. B. E. PERSON
F. R. C. PERSON
Hertha Köttner PERSON
Welsh Valley PERSON
Joan Williams PERSON
Peter Z. Nel PERSON
830 Cherubim PERSON
Malcolm A. Appleby PERSON
Josephine Pateman PERSON
Valerie E. Orchard 833
Pwllygranant PERSON
Broken Stone — collage PERSON
Doris M. Whitlock PERSON
John S. Hawley PERSON
Caroline C. Thornton PERSON
Valerie Thornton S PERSON
In this instance we have fourteen correct artists out of seventeen. There are also a number of false positives: names correctly identified as people, but which are subjects of works of art rather than exhibitors, plus several 'junk' entries.
Spacy.io is generally quite accurate, see here, with a best performance on untrained named entity recognition tasks of around 85%, which compares favourably with other NER software. Also see, for example:
The Royal Academy catalogues are challenging, so the measured accuracy with named entity recognition not trained on this specific corpus is much lower than 85%, perhaps as low as 50%, with many false matches.
By the Paul Mellon Centre’s own estimates there are approximately 256,534 exhibits listed in the catalogues, and exhibitors will also appear in the directory of exhibitors included in each catalogue. This gives an upper bound of as high as 513,068 potential token exhibitor names in the catalogues. The actual number, however, is lower, as often the same exhibitor appears more than once in a given year, but will only appear in the directory of exhibitors once in that catalogue. Nonetheless, this is a very high number of exhibitor names to capture, and beyond any reasonable number that could be tagged by humans in a short time scale.
To get anywhere close to this number, and to avoid a huge number of false positives, there were a number of things we did to improve the accuracy and quality of the output.
Controlled Vocabulary
One way to improve accuracy, both by eliminating false positives, and by seeding the system with known good entities is to use a source of controlled vocabulary.
We began the project with two possible sources of names for exhibitors, and added a third part way through.
Royal Academicians
Early on in the project, the Paul Mellon Centre were able to provide a list of Royal Academicians. This provided only a small subset of the artists who exhibited over the 250 years, but this was a useful source of names to use when validating the output of the named entity recognition.
Getty ULAN
Also early on in the project, we looked at Getty Union List of Artist Names (ULAN) as a source of artist data. Getty can provide their data as downloadable N-Quads, suitable for machine parsing.
We used the Getty ULAN data, along with the Academicians data, and a list of Exhibitors provided by the Paul Mellon Centre to create a union list of potential exhibitors.
Here is a sample of the data we were able to extract for one artist:
{
"lastname": "Abbey",
"firstname": "Edwin Austin",
"canonical": "Edwin Austin Abbey",
"profession": "Painter",
"gender": "",
"academician": true,
"address": "54, Bedford Gardens",
"years": [1885, 1890, 1894, 1896, 1897, 1898, 1899, 1900, 1901, 1902, 1903, 1904, 1906, 1910, 1912],
"matches": [{
"lastname": "Abbey",
"firstname": "Edwin Austin",
"ulan_id": 500010457,
"canonical": "Edwin Austin Abbey",
"academician": true,
"born": 1852,
"died": 1911,
"roles": ["artists", "illustrators", "muralists", "painters", "history artists"],
"biography": "",
"name_match": 1.0,
"data_source": "Getty ULAN",
"role_score": 100
}, {
"lastname": "Abbey",
"firstname": "Edwin Austin",
"ulan_id": 500010457,
"canonical": "Edwin Austin Abbey",
"born": 1852,
"died": 1911,
"name_match": 1.0,
"data_source": "Academicians"
}]
}
Matches shows the union of the Exhibitors data with Getty ULAN and the Exhibitors list.
Matching these sources of data was useful as it gave us date ranges for when someone might be an exhibitor, in order to rule out cases where a candidate found by named entity recognition only coincidentally matched an Exhibitor: for example, if they had the same name but the artist in question died 50 years before that specific catalogue was published, or where there is more than one artist with the same name in the corpus.
Working with the ULAN data was challenging, as the size of the dataset, and the complexity of building a graph from the N-Quad data — scattered across multiple files — meant that it was computationally slow to build the list. Fortunately, once done, we could work with the data as lightweight JSON.
RA Exhibitors List
Part way through the project, the Paul Mellon Centre were able to provide a list of Exhibitors, from a digitised list of Exhibitors current up until around 1990.
This list was by far the most useful in identifying exhibitors as we were able to filter the named entities to just those artists appearing in the exhibitors list, which almost completely removed the issue of false positives and junk personal names from the set of tags we were able to generate. We worked with a union list of this data and the ULAN and Academicians data.
Name forms
One issue that we had to deal with is that personal names often — more commonly than not — appear throughout the catalogues in multiple formats.
For example:
J. Northcote, R.A. also appears in the index of the same volume as:
Northcote, James, R.A.
We needed the custom matching that identifies artists from the exhibitors list to be able to handle:
- Firstname Lastname format
- Lastname Firstname format
- Suffixes (such as R.A.)
- Prefixes (such as Sir or Dame)
- Forms with initials
- Forms with a mix of initials and full names
- Forms with full names only
And successfully identify them as the same named individual.
We wrote some open source code to generate, automatically, name formats from a provided name (ideally in some canonical form).
Personalnames can be installed using pip.
from personalnames import names
import json

name = "James Northcote, R.A."
formats = names.name_initials(
    name=name,
    name_formats=["firstnamelastname", "lastnamefirstname"],
)
print(json.dumps(sorted(formats), indent=4))
Will output:
[
"J. Northcote",
"J. Northcote, R.A.",
"James Northcote",
"James Northcote, R.A.",
"Northcote, J.",
"Northcote, J., R.A.",
"Northcote, James",
"Northcote, James , R.A."
]
So, given a list of exhibitors by the Paul Mellon Centre, it was possible to match, in the text, artists not just by the canonical form of their name, but also by the other potential forms they may take in the catalogues.
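In practice, generated variants like these are most useful as keys in a lookup table that maps every variant back to its canonical name, so a hit on any form in the OCR text resolves to a single exhibitor. A minimal sketch (the helper below is illustrative, not part of personalnames):

```python
def variant_index(canonical_names, make_variants):
    """Map every generated name variant back to its canonical form.

    `make_variants` would typically wrap names.name_initials();
    here it can be any callable returning a list of strings.
    """
    index = {}
    for canonical in canonical_names:
        for variant in make_variants(canonical):
            index[variant] = canonical
    return index
```

A lookup such as index.get("Northcote, J., R.A.") then returns the canonical "James Northcote".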
Whitespace/kerning/segmentation issues
As shown above, older typefaces, combined with heavily skewed text can make segmentation and whitespace unpredictable on pages, and an artist name might be missed in the OCR text even when the artist name is known.
If we take:
We can see that the names appear as:
In most cases there are intrusive whitespaces that would prevent named entity recognition, or pattern matching, from identifying artists.
Ignore whitespace
One successful approach we adopted was to strip all whitespace out of both the text and the possible artist names, to avoid intrusive whitespace preventing matching. The personalnames name formatting code can add a no whitespace version of the name variants to the set.
OCR text without whitespace:

txt_no_ws = "".join(txt.strip().split())
print(txt_no_ws)
Produces:
[4]GEORGEBARRET,R.A.Weſtbourn-green,nearPaddington.Aviewofagentleman'spark,takenfromthemanfion9.10Itscompanion,aviewofthemanfion-houſe,partofthepark,&c.fromtheoppoſitebanksofthelake.IAſtudyfromnature,inthemountainsofKeſwick,cscCumberland.JAMESBARRY,QueenAnn-ſtreet,Cavendiſh-ſquare.12Venusrifingfromtheſea.Vid.Lucretius,B.I.andHomer'sHymntoVenus.13.Medeamakingherincantationafterthemurderofherchildren.14.TheeducationofAchilles.FRANCESCOBARTOLOZZI,R.A.Broad-ſtreet,Carnaby-market.15AheadofaMadona:adrawing.E.BELK,Middleton'sBuildings.16Elevationandplan,foratempleinagarden.JOHN.BLACKBURNE,AtMr.Lipſcomb's,nearthePantheon,Oxford-ſtreet.Thetriumphofmercenarylove.18Theportraitsoftwochildren.ANNABLACKESLY,Greek-ſtreet,Soho.Portraitofagentleman.
We can do the same thing with a list of possible names (in the production site we are using the entire list of hundreds of thousands of artists):
names = [
"GEORGE BARRETT",
"JAMES BARRY",
"FRANCESCO BARTOLOZZI",
"JOHN BLACKBURNE",
"ANNA BLACKESLY",
]
no_ws_names = [(x, "".join(x.strip().split())) for x in names]
print(no_ws_names)
Which produces:
[
("GEORGE BARRETT", "GEORGEBARRETT"),
("JAMES BARRY", "JAMESBARRY"),
("FRANCESCO BARTOLOZZI", "FRANCESCOBARTOLOZZI"),
("JOHN BLACKBURNE", "JOHNBLACKBURNE"),
("ANNA BLACKESLY", "ANNABLACKESLY"),
]
Using these names to match against the text (using FlashText which implements the fast Aho-Corasick algorithm):
import json
from flashtext import KeywordProcessor

txt_no_ws = "".join(txt.strip().split())
names = [
"GEORGE BARRET",
"JAMES BARRY",
"FRANCESCO BARTOLOZZI",
"JOHN BLACKBURNE",
"ANNA BLACKESLY",
]
no_ws_names = [(x, "".join(x.strip().split())) for x in names]
keyword_processor = KeywordProcessor()
for x in no_ws_names:
keyword_processor.add_keyword(x[1], x[0])
keywords_found = keyword_processor.extract_keywords(txt_no_ws)
print(json.dumps(keywords_found, indent=4))
Returns:
[
"GEORGE BARRET",
"JAMES BARRY",
"FRANCESCO BARTOLOZZI",
"ANNA BLACKESLY"
]
Which successfully matches four of the five artists, irrespective of whitespace and segmentation issues, and misses the fifth because of an extra period/full stop that appears in John Blackburne’s name.
Ignoring periods/full stops would introduce far more errors; matching across sentence boundaries, for example, or punctuated lists, so four out of five is about as good as we can expect using this technique.
Lines and blocks
We also parsed the text both as blocks (as above) and as lines, which produces more ‘hits’ than parsing as blocks or lines alone. Sometimes the text that runs across a line boundary could create a false positive, for example, if the text was:
A landscape — James Smith
David Collins (a portrait) — Mary Jones
Then potentially the text, parsed only as a block, might mistakenly identify a (fictional) artist called ‘James Smith David’, and miss ‘James Smith’. Similarly parsing only as lines might miss artists, if the segmentation (caused by skewing or whitespace) has mistakenly broken a single line into more than one in the OCR.
The DLCS named entity recognition service was updated to use existing APIs provided by the DLCS service which produces the OCR in order to parse the text as both lines and blocks and merge the results to produce a more comprehensive set of exhibitors.
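The merge can be sketched as follows. This is a simplified, flashtext-free illustration of the idea, not the production service; the helper names are invented for the example.

```python
def strip_ws(s):
    """Remove all whitespace, as in the approach described above."""
    return "".join(s.split())

def find_names(text_units, names):
    """Whitespace-insensitive matching of known names in text units."""
    found = set()
    for unit in text_units:
        flat = strip_ws(unit).upper()
        for name in names:
            if strip_ws(name).upper() in flat:
                found.add(name)
    return found

def merged_matches(blocks, lines, names):
    # Parsing as blocks AND as lines, then taking the union, catches
    # names that either segmentation alone would miss.
    return sorted(find_names(blocks, names) | find_names(lines, names))
```
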
Dates
One extremely useful piece of information that can be extracted from the ULAN data, and from the PMC provided Exhibitors list is the dates in which an artist either lived and worked (ULAN) or exhibited (PMC).
Given 250 years of artists, it is common to find artists with the same name, for example, father and son, or just artists with the same first and last name.
Using the dates available from the data, we filtered the list of tags for a given year to just those artists who were either known to exhibit in that year, or who could have exhibited in that year.
We implemented filtering by date in the tagging engine, using the IIIF navDate in each IIIF Manifest to restrict the possibilities.
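As a sketch, with a union-list record like the JSON shown earlier (here flattened into one dict for simplicity; the "could have exhibited" window of fifteen years after birth is an illustrative assumption, not the production rule):

```python
def active_in_year(artist, year):
    """True if this artist could plausibly be an exhibitor in `year`."""
    if year in artist.get("years", []):        # known exhibition years (PMC list)
        return True
    born, died = artist.get("born"), artist.get("died")
    if born is not None and died is not None:  # ULAN life dates
        return born + 15 <= year <= died       # assumed minimum working age
    return False

def candidates_for(union_list, year):
    # Restrict the tag candidates for one catalogue year, as driven by
    # the IIIF navDate in the manifest.
    return [a for a in union_list if active_in_year(a, year)]
```
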
Putting it all together
If we take a sample image:
There are 21 possible artists that could be identified on this page.
- Parsing as blocks and as lines
- Parsing with and without whitespace
- Using a known list of exhibitors
- Using automatically generated name variants
- Filtering using the date of the catalogue
We identified:
Wilfred Fairclough (Painter,Engraver)
Sidney E. Huxtable (Painter)
James A. Woodford (Painter,Sculptor)
Robert F. Micklewright (Painter)
James Newton (Painter)
Gordon L. Davies (Painter)
John Doyle (Painter)
Peter H. Harman (Painter)
Hilda Chancellor Pope (Painter)
Donald Bosher (Painter)
Rosemary Allan (Painter)
Robert J. Swan (Painter)
Robert F. Micklewright (Painter)
Patrick D. Nairne (Painter)
Violet Fuller (Painter)
J. Humphrey Spender (Painter)
Noel G. Baguley (Painter)
That is, 17 of the possible 21 artists on the page, with no falsely identified artists, or an accuracy of ~81%. Furthermore, we identified these artists and their role/artist type, which was extracted from the Getty ULAN data and the list of exhibitors provided by the Paul Mellon Centre.
Using basic neural net named entity recognition, with none of the additional enhancements developed for this project, the list returned would be:
Patrick D. Nairne 234 Albi
Gordon L. Davies
Leonard G. Brammer
James Newton
Noel G. Baguley
Willow Trees
Robert Swan 240 Vines
Rocca
Pietra
Sidney E. Huxtable
John Doyle 242
Peter H. Harman
James Woodford
R. A.
Violet Fuller
Rosemary Allan 246
Robert F. Micklewright 247 Reed Clump- conté
Margaret Roroney
Robert F. Micklewright
Alice R. Boothby
Swanscombe Philip Carroll
That is, 10 correct artists, and 11 where the artist is either not an artist at all, or has been incorrectly identified.
The updated service offers a considerable improvement over basic named entity recognition. This model could be extended for different data sets, using different sources of controlled vocabulary, and we would expect to see similar improvements in overall accuracy.
Interestingly, the out-of-the-box natural language processing has picked up one artist missed by the more refined process. This suggests that, with some more time spent on the code which merges vanilla natural language processing and the custom processing pipeline, we could have slightly improved the overall score.
Rejected Options
While assessing our workflow, we considered and rejected a number of additional options.
Italics detection
For a relatively large set of catalogues, there is a consistent pattern, in the lists of exhibits, of using italics to identify artists. We considered using computer vision techniques to identify italics fonts, in order to restrict the text we extracted entities from to just italics text.
However, we rejected this as the investment in time did not look promising. In particular, artists do not appear in italics in earlier catalogues, and also consistently do not appear as italics in catalogues in the lists of committee members and of exhibitors that appear in the front and end matter of the volumes.
Page segmentation / splitting
Similarly, in a large set of catalogues, exhibitors typically appear at the right hand side of the page with one exhibit per line. We considered splitting the images, or weighting the results based on the position of the text on the page; something we know because the OCR service on the DLCS can return coordinates and character positions for text, and we know the overall dimensions of the image and the character count per line.
However, as with italics, this pattern does not apply in the front or end matter of the volumes, in some cases exhibits break across multiple lines, and in earlier volumes artists do not consistently appear at the right. A small test set of images was created, and this approach was tested, before being rejected as offering little improvement over parsing the entire text as a mixture of blocks and lines, and ignoring the relative position of candidate artists within the line or page.
Training neural nets
Spacy does provide APIs for training the neural net model. For other sources of vocabulary, where the entity class was a new class, not already part of the training model, it would potentially make sense to create ground truth data and train the model to create a new version of the engine that supported this new entity class.
In the case of the Royal Academy catalogues, we did not have a large enough set of truth data, and the entity class was an existing class, which we needed to refine or improve, rather than a new class of data. So adopting methods to filter and improve this existing set was more time-effective than training a new model from scratch.
Implementation in the DLCS
The DLCS has a service (known internally as Montague) which can parse the OCR text provided by the DLCS’s OCR service (known internally as Starsky), using Spacy to do named entity extraction (and if necessary parts-of-speech tagging to, say, restrict named entities to just noun phrases), and serialise the results as W3C Web Annotations which are posted to the DLCS annotation server (aka Elucidate).
Queues
The DLCS is an event driven system, consisting of many services of varying size which receive messages from a queue and act in response to those messages.
The DLCS named entity service watches for new textual content being advertised via messages in the queue from the DLCS OCR service. These messages contain the canvas and manifest @id, which the DLCS named entity service dereferences to fetch IIIF Presentation API content, and then uses the DLCS OCR service API to fetch OCR text for canvases.
Working in this way makes it easy to add new services to the DLCS, or to push updated versions of existing services.
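A stripped-down version of such a queue-driven worker might look like this. The message fields and the in-process queue are illustrative assumptions; the real DLCS services use their own schemas and queue infrastructure.

```python
import json
import queue

def worker(q, handle):
    """Consume messages until a None sentinel arrives."""
    while True:
        msg = q.get()
        if msg is None:            # sentinel: shut the worker down
            q.task_done()
            break
        event = json.loads(msg)
        # Dereference the advertised manifest/canvas and process its text
        handle(event["manifest_id"], event["canvas_id"])
        q.task_done()
```
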
Pipelines
Spacy.io can implement extensions as pipeline steps. Our custom enhancements to the entity extraction process are implemented as Spacy.io pipeline components, for performance and for tight integration with the parts-of-speech tagging and named entity extraction provided by Spacy’s built-in neural net models.
We built pipeline components for:
- Ingesting custom vocabulary from JSON or CSV.
- Enriching the custom vocabulary with variant forms (as per the personal name formats described above).
- Using the Aho-Corasick algorithm to do fast pattern matching of text with custom vocabulary.
- Filtering entities by type, e.g. to just Person tags, or just Person and Date tags.
- Filtering entities by date.
- Interacting with Starsky to fetch XYWH bounding boxes for entities on IIIF Image API images.
- Combining neural net based entities with pattern matching entities.
- Ignoring or removing Stopwords from entities.
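The type filter, for example, reduces to a few lines. It is shown here over plain (text, label) pairs for clarity; in the real pipeline the equivalent logic sits in a Spacy component operating on doc.ents.

```python
KEEP_LABELS = {"PERSON", "DATE"}

def filter_entities(ents, keep=KEEP_LABELS):
    """Keep only entity spans whose label is in `keep`.

    `ents` is a list of (text, label) pairs, like the NER
    outputs shown earlier in this article.
    """
    return [(text, label) for text, label in ents if label in keep]
```
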
We also modified existing code for serialising entities as W3C Web Annotations to create editable versions of those annotations which could be understood by the Annotation Studio, which could be used by Paul Mellon Centre staff to correct machine generated tags, or create new tags for missed exhibitors.
Annotation Studio
We built an editing interface for Paul Mellon Centre staff to update content, if required. This editing interface used the Annotation Studio and a bespoke serialisation of the extracted artist data to render editable annotations in the format expected by the Annotation Studio.
The editable annotations were then transformed via a proxy to create the simple highlighting annotations that are made available to the viewer.
{
"@id": "",
"@type": "oa:Annotation",
"motivation": "oa:linking",
"on": "",
"resource": {
"@id": "",
"label": "Bacon, John (Sculptor) Exhib. 1769-1799"
}
}
How Do Two Edison Boards Exchange Data Using Python?rana.helal Aug 22, 2015 7:25 AM
Dear All;
I have two edison boards and I would like to know what is the easiest way to connect them together and let them exchange data using python code. Is this possible using bluetooth?
Best Regards;
Rana Helal
1. Re: How Do Two Edison Boards Exchange Data Using Python?Intel_Peter Aug 24, 2015 10:09 AM (in response to rana.helal)
Hello rana.helal,
You could try using a python library like pybluez. In order to install it you must first install setuptools, a dependency. To do so run the following line:
wget -O - | python
Now download the package from karulis/pybluez · GitHub and decompress it on your PC. Once that has finished copy the folder into your Edison. Now run the following commands:
cd pybluez-master
python setup.py install
Now the install process should start. Once it’s finished, you can run the following example to make an inquiry and test that the library is actually working.
from subprocess import call
import bluetooth

print("unblocking bluetooth...")
call(["rfkill", "unblock", "bluetooth"])
call(["sleep", "1"])

print("performing inquiry...")
nearby_devices = bluetooth.discover_devices(
    duration=8, lookup_names=True, flush_cache=True, lookup_class=False)

print("found %d devices" % len(nearby_devices))

for addr, name in nearby_devices:
    try:
        print("  %s - %s" % (addr, name))
    except UnicodeEncodeError:
        print("  %s - %s" % (addr, name.encode('utf-8', 'replace')))
This is a slightly modified version of the example found in pybluez/inquiry.py at master · karulis/pybluez · GitHub it was only changed to add the lines that unblock Bluetooth.
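Once the two boards have any byte link between them (an RFCOMM socket via pybluez, or pyserial over a bound rfcomm device), it also helps to frame the data so the receiver knows where each message ends. Here is a minimal, transport-agnostic sketch of length-prefixed framing; it is not pybluez-specific and the helper names are my own.

```python
import struct

def pack_msg(payload):
    """Length-prefix a message (4-byte big-endian length + payload)."""
    return struct.pack(">I", len(payload)) + payload

def unpack_msgs(buf):
    """Split a received byte stream into complete messages.

    Returns (messages, leftover), where leftover is an incomplete
    tail to be prepended to the next read from the link.
    """
    msgs = []
    while len(buf) >= 4:
        (n,) = struct.unpack(">I", buf[:4])
        if len(buf) < 4 + n:
            break                  # wait for the rest of this message
        msgs.append(buf[4:4 + n])
        buf = buf[4 + n:]
    return msgs, buf
```

Each Edison would send pack_msg(b"...") over its socket or serial port and feed whatever bytes it reads into unpack_msgs().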
You might also find some useful information in this book: An Introduction to Bluetooth Programming, I suggest you to read it.
Peter.
2. Re: How Do Two Edison Boards Exchange Data Using Python?rana.helal Aug 25, 2015 2:42 PM (in response to Intel_Peter)
Thank you Intel_Peter for your answer.
I found another link that could be useful as well. It uses pyserial and is quite simple to follow. Edison to Edison Bluetooth | Musings from Stephanie.
Best Regards;
Rana Helal
3. Re: How Do Two Edison Boards Exchange Data Using Python?Intel_Peter Aug 25, 2015 2:51 PM (in response to rana.helal)
That's great to hear rana.helal, I'm glad that you were able to find a way to continue with your project. If you have any other doubt, don't hesitate to come back to the community.
Peter. | https://communities.intel.com/thread/79374 | CC-MAIN-2018-13 | refinedweb | 408 | 67.86 |
The objective of this blog is to show an example of how to add custom fields and additional logic to the guided move-in process. The logic added will include evaluation of fields added by the KUT (Key User Tools), so we will also see how to use those fields in the SDK.
Any coding or configuration examples provided in this document are only examples and are NOT intended for use in a productive system. The example is only done to better explain and visualize the topic.
With the 1702 release of Utility industry extension of C4C the guided process for move-in, move-out and transfer was introduced. While still being a web service call to IS-U, this is implemented using a business object (BO) in C4C as the ‘reference’. The name of the business objects is: UtilitiesActionBO. It is the same BO used for all of the processes and no data is materialized in C4C.
The technical benefit of using a BO, is that several of the standard SDK options becomes available, even though it still ends up as a web service call to IS-U.
In the following the examples will be based on the move-in.
Below is a short visual of the process. In short, the four steps are:
- Selecting the premise, the customer for the move-in, and the move-in date.
- Selecting what services that should be activated for the customer
- Selecting security deposit and possible mailing address if different from move-in address
- Review of all information entered for the prior steps
Two fields have already been added using the KUT:
- A field for credit check required
- A field to enter the Social Security Number when creating a new customer
Below is the end result we are looking for – the ‘Credit Check Required’ is flagged.
We achieve this in 3 easy steps:
- Create a solution in PDI
- Make fields created by the KUT available for the PDI
- Write logic to control the field created by the KUT
Creating a solution in PDI
1.1 Log in to the C4C system as a PDI developer in the SDK studio and create a new solution.
1.2 Add a new item to the solution "GAF Custom1".
Make fields created by the KUT available for the PDI
2.1 Right-click on the solution GAFCustom1 and add a new item to include CustomerObjectReferences.
(Hint: CustomerObjectReferences is essential for accessing key user extension fields in PDI custom logic.)
2.2 Select Extension -> Reference to Customer Specific Fields and add CustomerObjectReference to the solution.
2.3 Select the key user fields that need to be referenced in the SDK implementation.
In this example, four fields in the business object UtilitiesActionBO are selected. We are only using one of the fields in this example, but these four are the ones created by the KUT.
Hint: Look at the node name; when we add this to the BO, we need to know where it belongs. In this case, since we are adding 'Credit Check Required', it needs to be added to the 'Contract' node. We will see this later.
2.4 Save and activate the CustomerObjectReference in the solution.
Write logic to control the field created by the KUT
3.1 Add a new item to the solution -> Business Object Extension (ActionBOExtension.xbo).
3.2 Select the namespace and the BO UtilitiesActionBO to extend the business object for the Move In/Move Out/Transfer GAF.
3.3 Save and activate the BO extension. In this BO, specific to the Move In/Move Out/Transfer guided process, the following nodes are extensible (Contracts, Contract Account Details, Installation, and Customer). New PDI extension fields and custom BO actions can be defined here.
3.4 Right-click on ActionBOExtension.xbo and select Create Script Files.
3.5 Selecting Trigger Points
There are different trigger points at which to write the SDK logic. In this example, we will use the After Modify event on the Contracts node to set the Credit Check Required field to true.
(Hint: Be sure to uncheck the Mass Enable checkbox for flat structures. If Mass Enable is checked, the input to the script file is a table, and Get Row and Get Count must be used to access the data of every row. This can be avoided for flat data structures.)
3.7 Launch the Move In floorplan to see the results. Credit Check Required will be defaulted on launch.
| https://blogs.sap.com/2017/03/08/project-extensions-for-the-guided-move-inout-and-transfer/ | CC-MAIN-2018-43 | refinedweb | 723 | 63.09 |
Kdb+ vs. Python
What is kdb+?
Kdb+ is a powerful column based time series database. It is commonly used in investment banks and hedge funds around the world for extremely fast time series analysis.
Kdb+ uses the vector language q, which was built for speed and expressiveness. Q commands can be entered straight into the console, as no compilation is needed. The terms q and kdb+ are usually used interchangeably.
What makes kdb+ fast is that it is column-oriented and in-memory. Column-oriented means each column of data is stored sequentially. With regards to time series data, analysis is performed on columns, e.g. coding up a function that finds the average price for certain trades. Row-oriented databases are much slower at this, as they would need to read across each record one by one. In-memory means the data is stored in RAM. This has been made possible by technological advancements in servers.
In this blog post I will time a linear regression in q and compare its performance to the same regression coded in Python. Each step and parameter in the linear regressions is matched for a fair comparison between Python and q.
Installing Kdb+
To follow along, you can install the 32 bit version of kdb+. This version is free to use and can be set up in just a few steps. Note I am using Mac OSX.
- Download kdb+ at
- Copy the q folder from Downloads into the home directory.
- In the terminal type
vi ~/.profile
- Use vi to add the following code to your .profile file, which can be found in your home directory. Type :wq to save and exit once you have added the code.
export QHOME=~/q
export PATH=$PATH:$QHOME/m32
- Use the source command to load these variables into your current session (you only have to do this once).
source ~/.profile
- Type q to enter a q session. Type \\ to exit.
Linear Regression in kdb+
In the example script below, we generate 1,200,300 random data records for the table. The table t is initialised with floating type integers, and x1, x2, x3 parameters as columns. We update the table t by appending the column y. The ys in this column are computed from the regression equation y = 1 + (2*x1) + (3*x2) + (4*x3). The independent (x) and dependent (y) variables are assigned to ind and dep. Finally we compute out the regression parameters. enlist returns arguments in a list.
n:1200300
t:([]x1:n?1f;x2:n?1f;x3:n?1f)
update y:1+(2*x1)+(3*x2)+(4*x3) from `t;
indx:`x1`x2`x3
depy:enlist`y
X:enlist[count[t]#1f],t indx
inv[X mmu flip X] mmu X mmu flip t depy
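The last q expression is just ordinary least squares via the normal equations, beta = (X'X)^-1 X'y. For readers more at home in numpy, here is a rough sketch of the same computation (the variable names and the smaller n are mine, not from the article):

```python
import numpy as np

n = 1000  # smaller than the article's 1,200,300, just to keep this quick
rng = np.random.default_rng(0)
x = rng.random((n, 3))
y = 1 + 2 * x[:, 0] + 3 * x[:, 1] + 4 * x[:, 2]

# Prepend a column of ones, as enlist[count[t]#1f] does in the q script.
X = np.column_stack([np.ones(n), x])

# Normal equations: beta = inv(X'X) @ (X'y)
beta = np.linalg.inv(X.T @ X) @ (X.T @ y)
print(np.round(beta, 6))  # -> [1. 2. 3. 4.]
```

Since y is generated with no noise, the intercept and coefficients are recovered exactly (up to floating-point error).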
Timing the execution in q shows the regression took only 489 milliseconds. And as expected, the fitted intercept and the coefficients of x1, x2 and x3 come out as 1, 2, 3 and 4.
Linear Regression in Python
The equivalent of the above example in Python:
from numpy import random
from sklearn import linear_model

n = 1200300
X = random.random((n,3))
y = 1 + 2*X[:,0] + 3*X[:,1] + 4*X[:,2]
y = y[:,None]

reg = linear_model.LinearRegression()
reg.fit(X,y)
print(reg.intercept_, reg.coef_)
Let’s break the python script down to see how it replicates the q code.
We first generate a 2D array with random values between 0 and 1. Like the table we generated in q, it has 1200300 rows and 3 columns.
Next we calculate values for y. Like the example in q, we compute y by multiplying 2,3,4 with X’s columns respectively. X[:,0] extracts the first column, X[:,1] extracts the second column, etc. We do need to reshape it with y[:,None] because of the dimension requirements by sklearn. (target y must be a numpy array of shape [n_samples, n_targets])
For those not familiar with y[:,None], it is the same as using the reshape function, like so.
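The snippet the post refers to would look something like this (a quick check of the equivalence):

```python
import numpy as np

y = np.arange(4, dtype=float)   # shape (4,)

a = y[:, None]                  # shape (4, 1)
b = y.reshape(-1, 1)            # the equivalent reshape call
print(a.shape, np.array_equal(a, b))  # -> (4, 1) True
```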
We then initialise the linear regression model, fit to our data and print the intercept and regression coefficients.
reg = linear_model.LinearRegression()
reg.fit(X,y)
# LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
print(reg.intercept_, reg.coef_)
The output is [1.] [[2. 3. 4.]].
Running time python linreg.py gives us an execution time of 1.163s.
You can read more about what real, user and sys mean here.
Kdb+ is the clear winner when it comes to speed of Big Data analysis. For those not yet willing to give up Python completely, you will be glad to hear there exists a Python & q interface, PyQ. PyQ provides seamless integration of Python and q code. I will discuss this in future kdb+ posts. | http://aaaquants.com/2017/10/12/kdb-vs-python/ | CC-MAIN-2018-39 | refinedweb | 797 | 66.84 |
Long.
First things first. Let’s get it installed. To do this, I’ll direct you to my ol’ faithful Chocolatey to make this happen with extreme ease. If you haven’t installed Chocolatey yet, go ahead and do that – I’ll wait the usual 10 seconds.
Once you’ve got Chocolatey installed, getting ScriptCS is as easy as:
cinst scriptcs
and voila! Now you’re all set. If you paid attention to what Chocolatey output while it was doing the install, you’ll see why we have Roslyn to thank for this beauty of a tool:
If you visit the ScriptCS website, a lot of what they’ll show you is how to write ScriptCS files using your favorite text editor, then executing them at the command line. That’s all well and good but honestly I don’t see how that’s any different than firing up VS or VS Express, writing a CS file, and cracking it off right within the IDE.
What’s valuable to me is firing up a command prompt and writing C# to test something. That quick. I work on an SDK at work and sometimes I want to just quickly verify that the SDK is doing what I expect. To this end, let’s see how easy ScriptCS makes this.
Let’s do a simple Hello World application without running any application other than our Command Prompt.
Boom, you’re sitting in the ScriptCS C# interpreter. Code away!
Console.WriteLine("Hello, World!");
And profit.
How hot is that? The beauty of this is imagine you have written a DLL or some other reusable component and wanted to check out how the changes you made work really quickly and easily. Adding and using references, even nuget ones, can also be done right w/in the ScriptCS interpreter.
I created one Class Library project with this as its contents:
using System;
namespace ClassLibrary1
{
public class Class1
{
public void PrintNumbers()
{
for (int i = 0; i < 10; i++)
{
Console.WriteLine("#{0}", i);
}
}
}
}
Easy enough, build in to ClassLibrary1.dll, copy to c:\users\brandon and off I go:
Quickly and easily, I just prototyped my library and played around a bit.
I remember early versions of ScriptCS and it didn’t used to allow multi-line REPL. After some back and forth with Glenn Block (one of the founders) and a couple of the other team members, it was brought to my attention that indeed it does now!
So there you have it! Quick, easy, C# with nothing more than a command line! Be sure to check out the ScriptCS website and Github project area to learn more about what you can do with it and how to. | http://www.codeproject.com/Articles/763741/Prototyping-with-Csharp-Thanks-Roslyn | CC-MAIN-2015-11 | refinedweb | 453 | 78.79 |
NAME
ng_tee - netgraph ‘‘tee’’ node type
SYNOPSIS
#include <sys/types.h>
#include <netgraph/ng_tee.h>
DESCRIPTION
The tee node type has a purpose similar to the tee(1) command. Tee nodes are useful for debugging or “snooping” on a connection between two netgraph nodes. Tee nodes have four hooks, right, left, right2left, and left2right. All data received on right is sent unmodified to both hooks left and right2left. Similarly, all data received on left is sent unmodified to both right and left2right. Packets may also be received on right2left and left2right; if so, they are forwarded unchanged out hooks right and left, respectively.
HOOKS
This node type supports the following hooks:

     right        The connection to the node on the right.
     left         The connection to the node on the left.
     right2left   Tap for right to left traffic.
     left2right   Tap for left to right traffic.
CONTROL MESSAGES
This node type supports the generic control messages, plus the following:

     NGM_TEE_GET_STATS   Get statistics, returned as a struct ng_tee_stats.
     NGM_TEE_CLR_STATS   Clear statistics.
SHUTDOWN
This node shuts down upon receipt of an NGM_SHUTDOWN control message, or when all hooks have been disconnected. If both right and left hooks are present, the node removes itself from the chain gently, connecting right and left together.
SEE ALSO
tee(1), netgraph(4), ngctl(8)
HISTORY
The ng_tee node type was implemented in FreeBSD 4.0.
AUTHORS
Julian Elischer 〈julian@FreeBSD.org〉 | http://manpages.ubuntu.com/manpages/maverick/man4/ng_tee.4freebsd.html | CC-MAIN-2014-49 | refinedweb | 231 | 68.16 |
I'm trying to calculate the difference between two dates in "weeks of year". I can get the datetime objects and the difference in days, but not week numbers. I can't, of course, simply subtract the dates, because that doesn't respect week boundaries.
I tried getting the week numbers using

d1.isocalendar()[1]
d2.isocalendar()[1]

and subtracting, but
isocalendar()[1]
gives surprising results near year boundaries; for example, December 31, 2012 falls in ISO week 1 (of 2013).
def week_no(self):
ents = self.course.courselogentry_set.all().order_by('lecture_date')
l_no = 1
for e in ents:
if l_no == 1:
starting_week_of_year = e.lecture_date.isocalendar()[1] # get week of year
initial_year = e.lecture_date.year
if e == self:
this_year = e.lecture_date.year
offset_week = (this_year - initial_year) * 52
w_no = e.lecture_date.isocalendar()[1] - starting_week_of_year + 1 + offset_week
break
l_no += 1
return w_no
How about calculating the difference in weeks between the Mondays of the weeks containing the respective dates? In the following code,
monday1 is the Monday on or before
d1 (the same week):
from datetime import datetime, timedelta

monday1 = (d1 - timedelta(days=d1.weekday()))
monday2 = (d2 - timedelta(days=d2.weekday()))
print('Weeks:', (monday2 - monday1).days // 7)
Returns 0 if both dates fall within one week, 1 if they are in two consecutive weeks, etc.
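Wrapped up as a Python 3 function (the name weeks_between is mine, not from the answer) with a couple of sanity checks:

```python
from datetime import date, timedelta

def weeks_between(d1, d2):
    """Count the calendar-week boundaries between two dates."""
    monday1 = d1 - timedelta(days=d1.weekday())
    monday2 = d2 - timedelta(days=d2.weekday())
    return abs((monday2 - monday1).days) // 7

# Dec 31, 2012 (a Monday) and Jan 1, 2013 share an ISO week:
print(weeks_between(date(2012, 12, 31), date(2013, 1, 1)))    # -> 0
# Friday to the following Monday crosses one week boundary:
print(weeks_between(date(2012, 12, 28), date(2012, 12, 31)))  # -> 1
```

Using abs() makes the function symmetric in its arguments, which the original one-liner was not.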
Quick Python Diff
If you need a quick, cross-platform diff between files that makes a nice HTML document for your perusal, Python has your back:
import difflib

first_name = 'firstname.txt'
second_name = 'secondname.txt'
diff_name = 'diff.html'

with open(first_name, 'r', encoding='utf_8') as first:
    fromlines = first.readlines()
with open(second_name, 'r', encoding='utf_8') as second:
    tolines = second.readlines()

with open(diff_name, 'w', encoding='utf_8') as output:
    output.write(difflib.HtmlDiff().make_file(fromlines, tolines, first_name, second_name))
If you need more functionality, check out
vimdiff, a more featureful version of this script at the python docs, and the linux tools
diff and
sdiff. | https://www.bbkane.com/blog/quick-python-diff/ | CC-MAIN-2021-43 | refinedweb | 102 | 50.73 |
At 20 years old, Java remains the world’s most popular programming language this August, with nearly 10 million active developers. If accurate, this figure puts Java slightly ahead of Swedish. While only slightly easier to read, Java has a number of compelling features, including lambdas and streams, intersection types, multiple inheritance, and unrivaled tools and frameworks. If you haven’t seen Java lately, you haven’t seen what Java can do. Join us each month, and stretch your idea of modern Java development.
Java
When you think of Java development, you may be reminded of heavy libraries and runtimes, but this was a shortcoming of Java’s early class loading mechanism, which added a lot of unnecessary burden. Prior to modules, if you wanted to use a single package in a JAR, all of its dependencies became your dependencies – and their dependencies too. There was no way to establish the exact runtime requirements without downloading the internet or splitting yet-another-JAR. Throw in version conflicts, namespace collisions and soon you’ve got dependency hell.
With the introduction of Compact Profiles in Java 8, and the planned rollout of Project Jigsaw in Java 9, all this should soon become a thing of the past. Jigsaw introduces something called the Java Module System, a more granular alternative to the JAR format that provides metadata for the class loader to determine which parts of the dependency graph are actually needed at runtime. The JMS promises to deliver a number of popular features including a local repository for storing and retrieving modules on the same machine, Linux-oriented package management and support for installation and removal (a la RubyGems and NPM). If done well, this could have a huge benefit to the Java ecosystem.
Project Jigsaw is really coming in Java 9, and none too soon, for a project in development over the last six years. Given the scope of these changes and their intended timeline, JDK 9 must work carefully to balance repaying technical debt with feature enhancements, while maintaining two decades of backwards compatibility. Jigsaw will help ensure that Java 9 remains a versatile platform for future applications by introducing strategic changes to the Java runtime and dependency management. To stay up to date with the latest changes, subscribe to their mailing list.
JShell and REPL in Java 9 – This is a feature not traditionally associated with Java development, a Read–eval–print loop. In the spirit of command line friendliness, JShell provides a way to quickly evaluate and test Java statements with just a terminal. The JDK will offer this same functionality to IDEs and developer tools through the JShell API, making it easier to do certain things in IntelliJ IDEA, like execute Java scratch files.
JEP 259: Stack-Walking API is a new candidate for Java 9, enabling lazy stream-based access to Java stack traces. To date, there are 51 Java Enhancement proposals targeting JDK 9, and three candidates for acceptance, including JEP 225: Javadoc Search which will add a keyword search for Javadocs, and JEP 238: Multi-Release JAR Files, which will extend the JAR format to allow multiple class files compiled on different Java versions to occupy a single JAR. In the midst of JAR overhauling, it is not yet clear how this proposal will fit into the new JMS.
7 Days with Java 8 – With Java’s adoption at an all-time high, there’s never been a better time to start using Java 8 in production. A key feature in Java 8 is direct support for functional style programming, with the help of lambdas, streams and functional primitives like
map(...) and
reduce(...). These are designed to work well with existing OOP patterns, helping you to reduce boilerplate and improve readability. With a little help from Shekhar Gulati, you’ll be using lambdas in no time flat.
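For a taste, here is a tiny illustrative stream pipeline (not from the linked series) that filters, maps, and reduces:

```java
import java.util.Arrays;
import java.util.List;

public class StreamDemo {

    // Sum of the squares of the even numbers, written as a stream pipeline.
    public static int sumOfEvenSquares(List<Integer> numbers) {
        return numbers.stream()
            .filter(n -> n % 2 == 0) // keep the evens
            .map(n -> n * n)         // square them
            .reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(sumOfEvenSquares(Arrays.asList(1, 2, 3, 4, 5))); // prints 20
    }
}
```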
Kotlin
One of the most unique and ambitious projects we’ve tackled at JetBrains is a language called Kotlin. With Kotlin, you have the chance to see a language being designed from the inside out, interact with our developers, and contribute to its future. We’re excited to start using Kotlin in production, and look forward to hearing your feedback as we prepare for its release. Give Kotlin a try today and let us know what you think!
Modifiers vs. Annotations – One of the early language goals of Kotlin was to reduce boilerplate, and by taking what we’ve learned building parsers at JetBrains, we’ve been able to make a lot of progress on that front. For example, trailing semicolons are optional in Kotlin. Getters and setters are optional. Types are usually optional, if they can be inferred (ex. variable and return types). But sometimes a single character can make a huge difference in tooling efficiency, which is why we’ve decided to sacrifice unified metadata to achieve it. We’re interested in hearing your feedback.
Android + Kotlin = <3 – If you’re an Android developer interested in learning Kotlin, you’ll be pleased to know that Kotlin is totally compatible with Android. Since Kotlin compiles down to Java bytecode, you’ll be able to use the same familiar patterns and APIs that you’ve always tolerated, but never really loved. Get ready to fall back in love with Android development with true lambdas, built-in null safety, extension functions, and Anko, a library we’ve created to help you develop Android applications with pleasure. Thanks to Michael Sattler for his support!
Building APIs on the JVM Using Kotlin and Spark – Another area where Kotlin has seen significant adoption is web services. One of Kotlin’s early targets was JavaScript, and we’ve designed the language to work seamlessly in static and dynamically typed environments like JavaScript and NodeJS. If you’re determined to build web applications on the JVM, Spark (not to be confused with Apache Spark) is a tiny web framework for doing just that, and Travis Spencer will show you how to build APIs using Kotlin and Spark.
Spring
Frameworks are the bread and butter of an enterprise Java application, and learning a framework in Java is a bit like learning another language. Choosing a new framework for your business-critical application is not a choice to be made lightly, and one you will need to live with for years to come. In the beginning, there were two schools: Spring and Java EE. Life was simple. Now, there is Spring, Vaadin, GWT, Grails, Akka, Play – never mind reactive or actor frameworks. So each month we’ll focus on a new framework and see where that gets us.
Spring began its life as a dependency injection and AOP framework, later expanding into a number of related areas like security, data access, mobile and distributed computing. Traditionally considered a heavyweight choice, Spring evolved into a number of smaller, more agile projects, with significantly improved tooling in recent years. It also absorbed and maintains convenient abstractions for technologies, like ZooKeeper and Netflix OSS for doing things like service discovery with Spring Cloud, and composing various technologies with Spring Boot. But at its core is the same dependency injection framework you have grown to know and love.
Spring Framework 4.2 GA is the latest version of the Spring Framework, proper. It consists of several loosely related modules for dependency injection and aspect oriented programming. Spring Framework 4.2 includes a new annotation-based event listener, refinements to Java 8 support, integration with Hibernate 5.0, data binding for JSR-354: Money & Currency, support for CORS and much more. See here for a full list of changes in Spring Framework 4.2 and to learn more about Spring Framework, you may find the reference documentation surprisingly accessible.
Microservices with Spring are a more recent addition, designed to alleviate the complexity of monolithic applications, and it is only fitting that Spring should now have frameworks for building microservices (itself following a similar approach). Spring Boot is Spring’s lightweight solution for building modern web applications. Together with Spring Cloud, another framework for building distributed systems, you can bootstrap a distributed microservice architecture in days, rather than months. Or you could use Ratpack with Spring Boot for a more reactive kind of microservice.
Coming up in 2016: Spring Framework 4.3 & 5.0 – A somewhat unique feature of Spring Framework 4.0 was its compatibility with Java 6-8, revamping the APIs to use library features like Stream and Optional where available, and shipping vital updates to legacy Java users on the core framework. With their 4.* release cycle nearing completion, the next big step for Spring will be requiring Java 8 exclusively. Spring Framework 5.0 will bring all-new support for Java 9, and notably JEP 110: HTTP/2 Client. You can follow their issue tracker for the latest updates.
Android
Java and Android share a language, but that’s mostly where the comparison stops. Despite their initial resemblance, writing Java for the JVM is very different to Android development, which has its own lifecycle, usage guidelines and memory constraints. It is interesting how Android has evolved independently of the Java ecosystem, yet it represents Java’s only significant presence on consumer mobile devices. Likewise, you might wonder if that presence were merely incidental, as though it were just waiting for a suitable candidate to take its place.
Android Studio 1.3 Released – This is the one you’ve been waiting for. With fully featured C/C++ editing, now you can dig into Android NDK development with the support of powerful debugging and profiling tools to help diagnose bugs. Android Studio 1.3 ships a number of highly anticipated features including a heap dump viewer, allocation tracker, separate modules for APK tests, new annotations and inspections, and brand new support for data binding, a feature that lets you bind references to variables and handler methods directly inside the Layout XML.
Android Databinding: Goodbye presenter, hello ViewModel – Traditionally, Android has left the implementation of UI patterns up to developers. This has created some debate over Model-View-Controllers, Model-View-Presenters, Model-View-ViewModels and other three letter acronyms. The introduction of Android databinding removes the necessity of manually updating Views, and can lead to much simpler solutions. As Frode Nilsen mentions, Android’s approach to databinding in XML is a somewhat contentious technique, and is still far from feature-complete.
Exploring the new Android Permissions Model – Android M introduces granular permissions, a long-awaited feature for users. This change will have implications for permission-hungry applications, and in order to effectively utilize the new permissions model, Android applications must be able to handle missing permissions and gracefully recover from losing them at runtime. Ultimately, this makes the process a lot more complex, but as luck would have it, Joe Birch breaks down the changes and shares a number of best practices for getting all the right permissions.
RxAndroid 1.0 – Reactive has quickly found applications in many programming domains in the span of a few years. If you’re wondering what Reactive is all about, Ben Christensen at Netflix has a great introduction on applying reactive programming to existing applications, and many of the same techniques apply to data centers on the cloud and mobile devices in your pocket. You can use plain RxJava on Android, but RxAndroid provides a few convenience classes to help simplify scheduling on Android, and is now stable.
Develop with pleasure!
| https://blog.jetbrains.com/idea/2015/08/java-annotated-monthly-august-2015/ | CC-MAIN-2016-50 | refinedweb | 1,908 | 51.78 |
Offline mode lets Screenlets function without a network connection. For offline mode to work with your Screenlet, you must manually add support for it. Fortunately, Liferay Screens 2.0 introduced a simpler way of implementing offline mode support in Android Screenlets:
- Update your Screenlet’s classes to leverage the offline mode cache
- Create an event class (if your Screenlet doesn’t already have one)
Implementing these steps, however, differs somewhat depending on how your Screenlet communicates with the server:
- A write Screenlet: writes data to a server. The Add Bookmark Screenlet created in the basic Screenlet creation tutorial is a good example of a simple write Screenlet. It asks the user to enter a URL and a title, which it then sends to the Bookmarks portlet in Liferay DXP to create a bookmark.
- A read Screenlet: reads data from a server. The Web Content Display Screenlet included with Liferay Screens is a good example of a read Screenlet. It retrieves web content from Liferay DXP for display in an Android app. Click here to see Web Content Display Screenlet’s documentation.
This tutorial shows you how to add offline mode support to both kinds of Screenlets. You’ll start with write Screenlets, using Add Bookmark Screenlet as an example. Before getting started, be sure to read the basic Screenlet creation tutorial to familiarize yourself with Add Bookmark Screenlet’s code. You’ll conclude by learning how offline mode implementation in read Screenlets differs from that of write Screenlets.
Adding Offline Mode Support to Write Screenlets
To add offline mode support to write Screenlets, you’ll follow these steps:
- Create or update the event class.
- Update the listener interface.
- Update the Interactor class.
- Update the Screenlet class.
- Sync the cache with the server.
Each of the sections that follow detail one of these steps. You’ll begin by creating or updating the event class.
Create or Update the Event Class
Recall from the basic Screenlet creation tutorial that an event class is
required to handle communication between Screenlet components. Also recall that
many Screenlets can use the event class included with Screens,
BasicEvent, as
their event class. For offline mode to work, however, you must create an event
class that extends CacheEvent (click here to see CacheEvent). Your event
class has one primary responsibility: store and
provide access to the arguments passed to the Interactor. To accomplish this,
your event class should do these things:
- Extend CacheEvent.
- For the arguments, define variables and public getter methods.
- Define a no-argument constructor that only calls the corresponding superclass constructor.
- Define a constructor that sets the Interactor’s arguments.
In the case of Add Bookmark Screenlet, the arguments are the bookmark’s URL,
folder ID, and title. For example, here’s the full code for this Screenlet’s
event class,
BookmarkEvent:
public class BookmarkEvent extends CacheEvent {

    private String url;
    private String title;
    private long folderId;

    public BookmarkEvent() {
        super();
    }

    public BookmarkEvent(String url, String title, long folderId) {
        this.url = url;
        this.title = title;
        this.folderId = folderId;
    }

    public String getURL() {
        return url;
    }

    public String getTitle() {
        return title;
    }

    public long getFolderId() {
        return folderId;
    }
}
Next, you’ll update the listener.
Update the Listener
Recall from the basic Screenlet creation tutorial that the listener interface
defines a success method and a failure method. This lets implementing classes
respond to the server call’s success or failure. Listeners that support offline
mode offer the same functionality, although differently. Offline mode listeners
must extend
BaseCacheListener, which defines only this
error method:
void error(Exception e, String userAction);
By extending
BaseCacheListener, your listener no longer needs an explicit
failure method because it inherits the
error method instead. This
error
method also includes an argument for the user action that triggered the
exception.
You can therefore update your listener to support offline mode by extending
BaseCacheListener and deleting the failure method. For example, here’s Add
Bookmark Screenlet’s listener,
AddBookmarkListener, after being updated to
support offline mode:
public interface AddBookmarkListener extends BaseCacheListener {

    void onAddBookmarkSuccess();

}
Note that you must also remove any failure method implementations (such as in an
activity or fragment that implements the listener), and replace any failure
method calls with
error method calls. You’ll do the latter next when updating
the Interactor class.
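For instance, an implementing class might look roughly like the following sketch after the change. The two interfaces are redefined here only so the snippet stands alone; in a real app they come from Liferay Screens, and the implementing class would be an Android activity or fragment:

```java
// Self-contained sketch: interfaces mirror the tutorial's AddBookmarkListener
// and BaseCacheListener, redefined here so the example compiles on its own.
interface BaseCacheListener {
    void error(Exception e, String userAction);
}

interface AddBookmarkListener extends BaseCacheListener {
    void onAddBookmarkSuccess();
}

public class BookmarkListenerSketch implements AddBookmarkListener {

    String lastMessage = "";

    @Override
    public void onAddBookmarkSuccess() {
        lastMessage = "Bookmark saved";
    }

    @Override
    public void error(Exception e, String userAction) {
        // Replaces the old failure callback; userAction identifies the call site.
        lastMessage = userAction + " failed: " + e.getMessage();
    }

    public static void main(String[] args) {
        BookmarkListenerSketch listener = new BookmarkListenerSketch();
        listener.error(new Exception("no connection"), "ADD_BOOKMARK");
        System.out.println(listener.lastMessage);
    }
}
```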
Update the Interactor Class
Recall from the basic Screenlet creation tutorial that Interactor classes extend
BaseRemoteInteractor with the listener and event as type arguments. To support
offline mode, your Interactor class must instead extend one of the following
classes. Which one depends on whether your Interactor writes data to or reads
data from a server:
BaseCacheWriteInteractor: writes data to a server. Extend this class if your Screenlet is a write Screenlet. Click here to see this class.
BaseCacheReadInteractor: reads data from a server. Extend this class if your Screenlet is a read Screenlet. Click here to see this class.
In either case, the type arguments are the same: the listener and the event.
Note, however, that the event must extend
CacheEvent as described above. For
example, since Add Bookmark Screenlet is a write Screenlet, to support offline
mode its Interactor class must extend
BaseCacheWriteInteractor with
AddBookmarkListener and
AddBookmarkEvent as type arguments:
public class AddBookmarkInteractor extends BaseCacheWriteInteractor<AddBookmarkListener, BookmarkEvent> {...
You must also make a few changes to the Interactor class’s code. The main change
is that the
execute method now takes the event instead of var args. You can
then retrieve the data you need from the event. For example, to support offline
mode, the
execute method in
AddBookmarkInteractor takes
BookmarkEvent as
an argument. The bookmark’s URL, title, and folder ID are then retrieved from
the event for use in the
getJSONObject method that makes the server call. The
execute method finishes by setting the resulting
JSONObject to the event,
and then returning the event:
@Override
public BookmarkEvent execute(BookmarkEvent bookmarkEvent) throws Exception {
    validate(bookmarkEvent.getURL(), bookmarkEvent.getFolderId());

    JSONObject jsonObject = getJSONObject(bookmarkEvent.getURL(),
        bookmarkEvent.getTitle(), bookmarkEvent.getFolderId());
    bookmarkEvent.setJSONObject(jsonObject);

    return bookmarkEvent;
}
You should also change the
onSuccess method to take an instance of your event
class instead of
BasicEvent. This is the only change you need to make to this
method. For example, the
onSuccess method in
AddBookmarkInteractor supports
offline mode by taking a
BookmarkEvent instead of a
BasicEvent:
@Override public void onSuccess(BookmarkEvent event) { getListener().onAddBookmarkSuccess(); }
Now make the same change to the
onFailure method, but replace the listener’s
failure method call with a call to the
error method inherited from
BaseCacheListener (see the listener section above for an explanation of this
method). For the
error method’s arguments, you can retrieve the exception from
the event and define a string to use as the user action. For example, to
support offline mode the
onFailure method in
AddBookmarkInteractor takes a
BookmarkEvent instead of a
BasicEvent. Also, the method’s
error call
defines the “ADD_BOOKMARK” string to indicate that the error occurred while
trying to add a bookmark to the server:
@Override public void onFailure(BookmarkEvent event) { getListener().error(event.getException(), "ADD_BOOKMARK"); }
Update the Screenlet Class
Updating the Screenlet class for offline mode is straightforward. In the
Screenlet class’s
onUserAction method, you’ll change the call to the
Interactor’s
start method so that it takes only an event as an argument.
Before doing this, however, you should create an event instance and set its
cache key. A cache key is a value that identifies an entity in the local cache.
This lets you retrieve the entity from the cache for later synchronization with
the server.
In Add Bookmark Screenlet, for example, a bookmark’s URL makes a good cache key.
To support offline mode, the
onUserAction method in
AddBookmarkScreenlet
creates a new
BookmarkEvent instance with a bookmark’s data and then uses the
setCacheKey method to set the bookmark’s URL as the event’s cache key. The
Interactor’s start method takes this event as its argument:
BookmarkEvent event = new BookmarkEvent(url, title, folderId); event.setCacheKey(url); interactor.start(event);
Note that you don’t have to set a cache key to use offline mode. Instead, you
can pass the event to the
start method without calling
setCacheKey. However,
this means that you’ll only be able to access the most recent entity in the
cache.
That’s it! Your write Screenlet now supports offline mode. There’s one more detail to keep in mind, however, when using the Screenlet: syncing the cache with the server. You’ll learn about this next.
Sync the Cache with the Server
When using a write Screenlet that supports offline mode, new data written to the cache must also be synced with the server. The write Screenlets included with Liferay Screens do this for you. However, you must do this manually when using a custom write Screenlet. You should do this in the activity or fragment that uses the Screenlet–exactly where in this activity or fragment is up to you though.
To sync a write Screenlet’s data with the server manually, follow these steps:
- Retrieve the event that needs to be synced with the server. To do this, you must first get the cache key associated with the event. Then use the key as an argument to the
Cache.getObjectmethod.
- Call the Interactor with the event. This syncs the data with the server.
For example, the following code uses the
Cache.findKeys method to retrieve all
BookmarkEvent keys in the cache. The loop that follows then retrieves the
event that corresponds to each key, and syncs it to the server by calling the
Interactor:
String[] keys = Cache.findKeys(BookmarkEvent.class, groupId, userId, locale, 0, Integer.MAX_VALUE); for (String key : keys) { BookmarkEvent event = Cache.getObject(BookmarkEvent.class, groupId, userId, key); new AddBookmarkInteractor().execute(event); }
Note that if you opted not to set a cache key in your Screenlet class, you can
pass
null in place of a key.
Also note that you can use Android’s
SharedPreferences APIs as an alternative
way to store and retrieve cache keys. For example, the following code stores
cache keys in shared preferences:
SharedPreferences sharedPreferences = getSharedPreferences("MY_PREFERENCES", Context.MODE_PRIVATE); HashSet<String> values = new HashSet<>(); sharedPreferences.edit().putStringSet("keysToSync", values).apply();
You can then retrieve the keys as you would retrieve any other key-value set from shared preferences:
SharedPreferences sharedPreferences = getSharedPreferences("MY_PREFERENCES", Context.MODE_PRIVATE); HashSet<String> keysToSync = sharedPreferences.getStringSet("keysToSync", new HashSet<>());
Next, you’ll learn how to add offline mode support to read Screenlets.
Adding Offline Mode Support to Read Screenlets
Implementing offline mode support in a read Screenlet is almost identical to doing so in a write Screenlet. There are two small differences, though:
You can still pass arguments to the Interactor with var args instead of an event.
The Interactor class must extend
BaseCacheReadInteractor, which forces you to implement the
getIdFromArgsmethod. This method takes the var args passed to the Interactor so you can return the argument that identifies your entity. Note that because this method requires you to return a
String, you’ll often use
String.valueOfto return non-string arguments as a string. For example, the
getIdFromArgsimplementation in Comment Display Screenlet’s
CommentLoadInteractorretrieves the comment ID (a
long) from the first argument and then returns it as a
String:
@Override protected String getIdFromArgs(Object... args) { long commentId = (long) args[0]; return String.valueOf(commentId); }
That’s it! Next, you’ll learn about list Screenlets and offline mode support.
Adding Offline Mode Support to List Screenlets
A list Screenlet is a special type of read Screenlet that displays entities in a
list. Recall from the
list Screenlet creation tutorial
that list Screenlets have a model class that encapsulates entities retrieved
from the server. To support offline mode, a list Screenlet’s event class must
extend
ListEvent with the model class as a type argument. This event class
also needs three things:
- A default constructor
- A
getListKeymethod that returns a unique ID to store the entity with
- A
getModelmethod that returns the model instance
The list Screenlet creation tutorial contains example model and event classes that support offline mode for Bookmark List Screenlet. Click the following links to see the sections in the tutorial that show you how to create these classes:
And that’s all! Now you know how to support offline mode in your Screenlets.
Related Topics
Using Offline Mode in Android
Architecture of Offline Mode in Liferay Screens
Creating Android Screenlets
Creating Android List Screenlets | https://help.liferay.com/hc/ja/articles/360017881432-Adding-Offline-Mode-Support-to-Your-Android-Screenlet | CC-MAIN-2022-27 | refinedweb | 2,052 | 55.03 |
any style sheets referenced in XSLT import and include elements. If this is null, external resources are not resolved.
The XslCompiledTransform class supports the XSLT 1.0 syntax. The XSLT style sheet must use the namespace.
The style sheet loads from the current node of the XmlReader through all its children. This enables you to use a portion of a document as the style sheet. After the Load method completes, the XmlReader is positioned on the next node after the end of the style sheet. If the end of the document is reached, the XmlReader is positioned at the end of file (EOF).
The following example loads a style sheet and enables support for XSLT scripting.
//; XmlReader reader = XmlReader.Create(""); // Create the XsltSettings object with script enabled. XsltSettings settings = new XsltSettings(false,true); // Load the style sheet. xslt.Load(reader, settings, resolver);. | https://msdn.microsoft.com/en-us/library/ms163427(v=vs.90).aspx | CC-MAIN-2017-43 | refinedweb | 142 | 69.28 |
Home -> Community -> Usenet -> comp.databases.theory -> Re: Date's First Great Blunder
"Dawn M. Wolthuis" <dwolt_at_tincat-group.com> wrote in message news:c5k6rt$6gp$1_at_news.netins.net...
> OK. I'll yield to your expertise on that. I have examples in hand with
> such a pattern, and have done it myself (but am, admittedly a beginner
Java
> coder), but have not done it with an RDBMS.
There are variations on such a pattern, but all of them revolve around putting as little code into each class as possible, or better yet using tags and generating something, or using Reflection, etc. Even generating "persistable" subclasses from the "pure" classes.
Also, I wasn't trying to be disparaging to beginners.
> >.
>
> I suspect, then, that is true if the data model is relational.
Definitely - although I've encountered similar ugly problems even with XML persistence which, even though XML corresponds much better to traditional objects than either does to relational, has ugly issues of its own.
> > That set also has a specification. And a type has operators too.
>
> So does it "work for you" to consider a class to be the very same thing as
a
> type?
Yes - I focus on types, making my classes immutable whenever possible. But then there's no real analog to relations, so one ends up with various other objects (that fit into various patterns) to fill that gap. But there are many ways to skin that cat, most poorly studied (everyone has a great suggestion on how to do it), and all suggest the absence of... well, you can fill in the blank.
And all of that is just for persistence - the value of constraints is another large gap. Most OO languages are pure implementation, which gives much room for programmers to blunder and violate any number of useful principles.
> > How does a type differ from its specification? See "set intension" vs
> > "set extension."
>
> OK -- I'm not familiar with those terms.
Roughly, intension is the set's formulaic definition while extension is the enumerated list of elements (e.g. "{x | x>1}" vs. "{2, 3, 4, ...}" ).
> > > Or in the same for as other data, agreed. Code is metadata..
> >
> > About what?
>
> The software "object" for which it is the specification.
So code is the specification for a software object? I take it because object is quoted that you mean more than OO-style objects?
I'm not just splitting hairs - I think these are genuine issues. I don't think code is metadata - I think it's implementation of a "spec". (Spec meaning, loosely, constraints such as preconditions, postconditions, invariants.) "Declarative code" is the spec - if you have a declarative language, no separate implementation is needed (and in any event, in Tutorial D and such the implementation is separate and outside the model).
> > What data does it concern?
>
> The data that is software.
Code can't be considered to be metadata about software-as-data, because typically the code is the software. And in any event, (procedural) code is difficult to reason about, which is why specs are useful.
> > Another flaw in OO is its (usual)
> > requirement that every bit of code be tied to one class, regardless of
how
> > many classes it concerns.
>
> Given the amount of reuse in OO, I don't understand this angle. Can you
> give an example of this flaw?
As an aside, check out some "functional" languages (Haskell, ML, Lisp) for a look at effective reuse without objects. Reuse is hard for human reasons, and OO doesn't really help much. Inheritance, the unique OO spin on reuse, is overrated and, done poorly, undermines other useful properties like substitutability and interface definitions.
But regarding my statement, it's impossible in many OO languages to define a function without a preferred parameter. I can't do "doFoo(x, y)", but instead have to do either "x.doFoo(y)" or "y.doFoo(x)". In my experience, small adjustments in requirements (i.e. normal app evolution) can really play havoc with your code, because you've had to allocate each and every bit of functionality to a single class - when things change, the definition of which class is "primary" can easily change. Then you have to move the method (if you want to stay sane). Then the cascade starts.
If your class is a type (domain), then you're less likely to have this problem, since its methods will refer to it and it alone. It's the interfaces between classes that cause the problem, or rather the need to choose Class A or Class B as the "home" for every function. In the assignment "x = 1.2 + 3", should the + operator be assigned to the RATIONAL class (for the 1.2 literal) or the INTEGER class (for the 3 literal)?
I do appreciate a namespace structuring (a la Java packages). But the "everything is an object" philosophy (and its variant "everything belongs to one object class") is very limiting. It certainly hampers the refactoring that is the darling of agile methodologies (the darling of OO developers), resulting in a lot more work.
Original text of this message | http://www.orafaq.com/usenet/comp.databases.theory/2004/04/14/0442.htm | CC-MAIN-2014-15 | refinedweb | 853 | 66.13 |
So I wrote a 2D array using Java that stores the names of songs in each of its slots. I have to print these songs using a toString() method and nested for-each loops. I have no idea as to how to do this. Help?
public class Jukebox { public Jukebox() { String[][] songList = new String[2][3]; for ( String[] row : songList) { for ( String column : row) { songList[0][0] = new String( "Hello" ); songList[0][1] = new String( "On My Mind" ); songList[0][2] = new String( "Hotel Ceiling" ); songList[0][3] = new String( "I Wish" ); songList[1][0] = new String( "No Air" ); songList[1][1] = new String( "Monsters" ); songList[1][2] = new String( "Not Afraid" ); songList[1][3] = new String( "Wake Up" ); songList[2][0] = new String( "Model" ); songList[2][1] = new String( "Thank You" ); songList[2][2] = new String( "Apologize" ); songList[2][3] = new String( "Fireflies" ); } } } public String toString() { for ( String[] row1 : songList) { for ( String column1 : row) { System.out.print( column1 + " " ); } System.out.println( "\n" ); } } } | http://www.devsplanet.com/question/35281525 | CC-MAIN-2017-09 | refinedweb | 163 | 78.52 |
The Oxford
Handbook of
Eye Movements
Edited by
Simon P. Liversedge
School of Psychology, University of Southampton,
Southampton, UK
Iain D. Gilchrist
School of Experimental Psychology,
University of Bristol, Bristol, UK
Stefan Everling
University of Western Ontario,
London, Ontario, Canada
Dar es Salaam Delhi Florence Hong Kong Istanbul
Karachi Kuala Lumpur Madrid Melbourne Mexico City
Chapters 1–5 © Oxford University Press, 2011
Chapter 6 © Elsevier, 2009
Chapters 7–54 © Oxford University Press, 2011
The moral rights of the authors have been asserted
Database right Oxford University Press (maker)
First published 2011
ISBN 978–0–19–953978–9
10 9 8 7 6 5 4 3 2 1
Typeset in Minion by Glyph International, Bangalore, India
Printed in Great Britain
on acid-free paper by
CPI Antony Rowe, Chippenham, Wiltshire
Whilst every effort has been made to ensure that the contents of this book are as complete,
accurate, and up-to-date as possible at the date of writing, Oxford University Press
is not able to give any guarantee or assurance that such is the case. Readers are urged
to take appropriately qualified medical advice in all cases. The information in this book is
intended to be useful to the general reader, but should not be used as a means of
self-diagnosis or for the prescription of medication.
Preface
The measurement of eye movements as an experimental method in the field of psychology is increasing at an exponential rate. For example, thirty years ago it was possible to count the number of eye
movement laboratories in Psychology Departments in the UK on the fingers of one (or maybe two)
hands. Today, the majority of Psychology Departments have some form of eye tracking device that is
used in research. Indeed, there are many such devices in research departments associated with disciplines other than psychology (e.g., Computer Science, Engineering, etc.). It is fair to say that eye movement methodology is now firmly established as a core tool in the experimentalist’s armoury. To those
who conduct eye movement research, this exponential growth is not at all surprising given the very
tight relationship between eye movements and many aspects of human cognitive processing. This is
most obvious when considering complex visual cognitive processes (e.g., reading, problem solving
etc.) in which higher order, abstract mental representations directly influence oculomotor behaviour.
More recently, however, researchers have started to establish that eye movements are vital for, and
informative of, a much greater breadth of human psychological processes (e.g. shared social attention,
interpersonal communication, etc.). In addition the eyes have a simple, well-defined repertoire of
movements controlled by oculomotor neurons inside the cranium that are accessible to electrophysiological techniques. By recording from these and other neurons in alert animals, it has been possible
to reveal many of the details of the motor and premotor circuitry that controls eye movements. This
has resulted in a considerable understanding of motor control in general and in the pathophysiology
of many visual and eye movement disorders. The saccadic circuitry is now understood at a level that
allows us to investigate the neural basis of higher cognitive processes, including target selection, working memory, and response suppression. Without question, the field continues to expand, and this
expansion has occurred as a consequence of how useful the methodology has been in furthering
research, and how measurement techniques have improved and become simpler to use.
This rapid expansion in the field of eye movement research has two distinct consequences. First,
researchers who haven’t previously recorded eye movements as part of their research are increasingly
doing so, and second, it is becoming increasingly difficult for established researchers in the area to
keep abreast of progress or discoveries in other areas of eye movement research. As a result of these
keep abreast of progress or discoveries in other areas of eye movement research. As a result of these
perceived needs we felt it was timely to produce a handbook comprised of chapters by experts that are
representative of the main areas of research in the field. The current volume, the Oxford Handbook
on Eye Movements, is the product of that decision. When we set out to produce the Handbook, we
had a clear set of objectives in mind. It needed to provide wide-ranging coverage, coverage that is
representative of the breadth of the field at the moment. The chapters needed to be written by leading
experts in the field. The content of the chapters needed to be up to date and informative both to other
experts working in different areas of the field, and to advanced undergraduate and postgraduate
students with an interest in eye movements. In summary, the Handbook needed to represent a snapshot of the current state of the field, a cross section of the work that is currently underway in the eye
movement community.
vi · Preface
In contributing a chapter, authors were tasked with providing a concise, to the point, high quality
review of one or more particular issues that are currently of theoretical significance. All of the chapters were peer reviewed and thorough revisions were carried out on the basis of the reviews. The
result, in our view, is a series of pieces of high quality writing delivered by individuals who maintain
a very high profile in the field. The Handbook is a blend of both methodologically motivated content
and theoretical analysis. The Handbook represents a significant, broad based, theoretical volume
comprised of individual chapters focused on research that shares a common methodology.
The Handbook is structured around seven broad themes. In Part 1, we have two chapters offering
overviews of eye movements in different species and of the history of eye movement research and five
chapters that introduce different types of eye movements. Part 2 delivers a series of 14 chapters in
which the anatomy and neural mechanisms underlying oculomotor control are discussed in detail.
Part 3 contains six chapters detailing the relationship between eye movements and attention, and in
Part 4 there are seven chapters that consider eye movements in relation to visual cognitive processing. In Part 5, five chapters cover aspects of development and pathology, and eye movements in
special populations. In Parts 6 and 7 (eight and seven chapters respectively), the topic of reading is
addressed, with Part 6 covering issues of oculomotor control in relation to reading, and the latter
dealing with eye movements and their relationship to issues associated with linguistic processing. In
total, the Handbook contains 54 Chapters that deliver a comprehensive coverage of the field.
Inevitably there are areas of research which we have not included. In most cases this was to avoid
producing a handbook that was simply too big. One example would be the limited coverage of eye
movement research in an applied context (e.g., sports, engineering, ergonomics). The other obvious
omission is the exclusion of a broad coverage of the neurology of eye movements.
Producing a volume of this magnitude would not be possible without the support of a dedicated
team. The Editors are extremely grateful to Gwen Gordon who provided a tremendous amount of
editorial assistance throughout. We would also like to thank Pippa Smith and Kathryn Smith who also
assisted with editorial duties. The team at Oxford University Press have provided excellent support
from the very beginning to the end of this project, and we are extremely grateful for their help and
patience. Particularly, we would like to thank Martin Baum, Charlotte Green, Carol Maxwell and Priya
Sagayaraj. We are also very grateful to Gerry Altmann who came up with the word cloud design for the
front cover. We would also like to thank all the contributors to the volume – without them the
Handbook would not have been possible. When we initially approached potential contributors to the
volume, we were curious to see what the uptake to our invitation would be – to what extent would
colleagues in the field not only be responsive to our request to contribute, but also take their task
earnestly? It was a great pleasure to us that we had an overwhelmingly positive response to our invitation, and we were delighted that all of the authors delivered such high quality chapters. We are also
grateful to the authors for taking the peer review process seriously both by acting as reviewers and
responding carefully and diligently to the reviews of their own work. We know that this process has
contributed significantly to the quality of the finalised chapters. Again, we are very grateful to the
authors for making the effort to write such good chapters. Finally, it is likely that we have managed to
forget to thank a number of people who have helped along the way, and to those people, we apologise.
In summary, we believe that together we have produced an Oxford Handbook on Eye Movements
that is a high quality comprehensive volume. We anticipate that it will be used as a text by undergraduate and postgraduate students, and eye movement researchers alike to obtain synopses of specific topics
currently under investigation in the eye movement community. We hope that you enjoy reading it.
Simon P. Liversedge
Iain D. Gilchrist
Stefan Everling
Contents
List of Contributors xi
List of Abbreviations xv
Part 1: The eye movement repertoire
1 Oculomotor behaviour in vertebrates and invertebrates 3
Michael F. Land
2 Origins and applications of eye movement research 17
Nicholas J. Wade and Benjamin W. Tatler
3 Vestibular response 45
Bernhard J.M. Hess
4 The optokinetic reflex 65
C. Distler and K.-P. Hoffmann
5 Saccades 85
Iain D. Gilchrist
6 Microsaccades 95
Susana Martinez-Conde and Stephen L. Macknik
7 Ocular pursuit movements 115
Graham R. Barnes
Part 2: Neural basis of eye movements
8 The oculomotor plant and its role in three-dimensional eye orientation 135
Dora E. Angelaki
9 Brainstem pathways and premotor control 151
Kathleen E. Cullen and Marion R. Van Horn
10 The oculomotor cerebellum 173
Peter Thier
11 The superior colliculus 195
Brian J. White and Douglas P. Munoz
viii · Contents
12 Saccadic eye movements and the basal ganglia 215
Corinne R. Vokoun, Safraaz Mahamed, and Michele A. Basso
13 Thalamic roles in eye movements 235
Masaki Tanaka and Jun Kunimatsu
14 The role of posterior parietal cortex in the regulation of
saccadic eye movements 257
Martin Paré and Michael C. Dorris
15 Frontal cortex and flexible control of saccades 279
Kevin Johnston and Stefan Everling
16 Eye-head gaze shifts 303
Brian D. Corneil
17 Interactions of eye and eyelid movements 323
Neeraj J. Gandhi and Husam A. Katnani
18 Neural control of three-dimensional gaze shifts 339
J. Douglas Crawford and Eliana M. Klier
19 The neural basis of saccade target selection 357
Jeffrey D. Schall and Jeremiah Y. Cohen
20 Testing animal models of human oculomotor control with neuroimaging 383
Clayton E. Curtis
21 Eye movements and transcranial magnetic stimulation 399
René M. Müri and Thomas Nyffeler
Part 3: Saccades and attention
22 Determinants of saccade latency 413
Petroc Sumner
23 Saccadic decision-making 425
Casimir J.H. Ludwig
24 Models of overt attention 439
Wilson S. Geisler and Lawrence K. Cormack
25 The intriguing interactive relationship between visual attention and
saccadic eye movements 455
Árni Kristjánsson
26 Oculomotor inhibition of return 471
Raymond M. Klein and Matthew D. Hilchey
27 Multisensory saccade generation 493
Richard Amlôt and Robin Walker
Part 4: Visual cognition and eye movements
28 Visual stability 511
Bruce Bridgeman
Contents · ix
29 Eye movements and visual expertise in chess and medicine 523
Eyal M. Reingold and Heather Sheridan
30 Eye movements both reveal and influence problem solving 551
Michael J. Spivey and Rick Dale
31 Eye movements and change detection 563
James R. Brockmole and Michi Matsukura
32 Eye movements and memory 579
Matthew S. Peterson and Melissa R. Beck
33 Eye movements and scene perception 593
John M. Henderson
34 Mechanisms of gaze control in natural vision 607
Mary M. Hayhoe and Dana H. Ballard
Part 5: Eye movement pathology and development
35 Development from reflexive to controlled eye movements 621
Beatriz Luna and Katerina Velanova
36 Children’s eye movements during reading 643
Hazel I. Blythe and Holly S.S.L. Joseph
37 Oculomotor developmental pathology: an ‘evo-devo’ perspective 663
Chris Harris
38 Eye movements in psychiatric patients 687
Jennifer E. McDowell, Brett A. Clementz, and John A. Sweeney
39 Eye movements in autism spectrum disorder 709
Valerie Benson and Sue Fletcher-Watson
Part 6: Eye movement control during reading
40 On the role of visual and oculomotor processes in reading 731
Françoise Vitu
41 Linguistic and cognitive influences on eye movements during reading 751
Keith Rayner and Simon P. Liversedge
42 Serial-attention models of reading 767
Erik D. Reichle
43 Parallel graded attention models of reading 787
Ralf Engbert and Reinhold Kliegl
44 Binocular coordination during reading 801
Julie A. Kirkby, Sarah J. White, and Hazel I. Blythe
45 Foveal and parafoveal processing during reading 819
Jukka Hyönä
46 Parafoveal-on-foveal effects on eye movements during reading 839
Denis Drieghe
x · Contents
47 Eye movements and concurrent event-related potentials: Eye fixation-related
potential investigations in reading 857
Thierry Baccino
Part 7: Language processing and eye movements
48 Lexical influences on eye movements in reading 873
Barbara J. Juhasz and Alexander Pollatsek
49 Syntactic influences on eye movements during reading 895
Charles Clifton, Jr and Adrian Staub
50 The influence of implausibility and anomaly on eye movements
during reading 911
Tessa Warren
51 The influence of focus on eye movements during reading 925
Ruth Filik, Kevin B. Paterson, and Antje Sauermann
52 Eye movements in dialogue 943
Helene Kreysa and Martin J. Pickering
53 Eye movements during Chinese reading 961
Chuanli Zang, Simon P. Liversedge, Xuejun Bai, and Guoli Yan
54 The mediation of eye movements by spoken language 979
Gerry T.M. Altmann
Author Index 1005
Subject Index 1011
List of Contributors
Gerry T.M. Altmann
Department of Psychology,
University of York, UK
Bruce Bridgeman
Psychology Department,
University of California, Santa Cruz, USA
Richard Amlôt
Department of Psychology, Royal Holloway,
University of London, UK
James R. Brockmole
Department of Psychology,
University of Notre Dame, USA
Dora E. Angelaki
Department of Neurobiology,
Washington University Medical School, USA
Brett A. Clementz
Departments of Psychology and
Neuroscience, Bio-Imaging Research Center,
University of Georgia, USA
Thierry Baccino
University of Paris 8,
Cité des sciences et de l’industrie de la Villette,
France
Xuejun Bai
Academy of Psychology and Behavior,
Tianjin Normal University, P.R. China
Dana H. Ballard
Department of Computer Science,
University of Texas at Austin, USA
Graham R. Barnes
Faculty of Life Sciences,
University of Manchester, UK
Charles Clifton, Jr
Department of Psychology,
University of Massachusetts Amherst, USA
Jeremiah Y. Cohen
Department of Psychology,
Vanderbilt University, USA
Lawrence K. Cormack
Center for Perceptual Systems and
Department of Psychology,
The University of Texas at Austin, USA
Michele A. Basso
Department of Neuroscience,
University of Wisconsin, USA
Brian D. Corneil
Departments of Physiology & Pharmacology,
and Psychology,
University of Western Ontario, Canada
Melissa R. Beck
Department of Psychology,
Louisiana State University, USA
J. Douglas Crawford
Department of Psychology,
York University, Canada
Valerie Benson
School of Psychology,
University of Southampton, UK
Kathleen E. Cullen
Department of Physiology,
McGill University, Canada
Hazel I. Blythe
School of Psychology,
University of Southampton, UK
Clayton E. Curtis
Psychology & Neural Science,
New York University, USA
xii · List of Contributors
Rick Dale
Cognitive and Information Sciences,
University of California, Merced, USA
Bernhard J.M. Hess
Department of Neurology,
University Hospital Zurich, Switzerland
C. Distler
Faculty of Biology and Biotechnology,
Ruhr University, Germany
Matthew D. Hilchey
Department of Psychology,
Dalhousie University, Canada
Michael C. Dorris
Centre for Neuroscience Studies,
Queen’s University, Canada
K.-P. Hoffmann
Faculty of Biology and Biotechnology,
Ruhr University, Germany
Denis Drieghe
School of Psychology, University of
Southampton, UK
Jukka Hyönä
Department of Psychology,
University of Turku, Finland
Ralf Engbert
Department of Psychology,
University of Potsdam, Germany
Kevin Johnston
Centre for Neuroscience Studies,
Queen’s University, Canada
Stefan Everling
University of Western Ontario,
London, Ontario, Canada
Holly S.S.L. Joseph
Department of Experimental Psychology,
University of Oxford, UK
Ruth Filik
School of Psychology,
University of Nottingham, UK
Barbara J. Juhasz
Department of Psychology,
Wesleyan University, USA
Sue Fletcher-Watson
Moray House School of Education,
University of Edinburgh, UK
Husam A. Katnani
Department of Bioengineering,
University of Pittsburgh, USA
Neeraj J. Gandhi
Department of Otolaryngology,
University of Pittsburgh, USA
Julie A. Kirkby
Department of Psychology,
Bournemouth University, UK
Wilson S. Geisler
Center for Perceptual Systems and
Department of Psychology,
The University of Texas at Austin, USA
Raymond M. Klein
Department of Psychology,
Dalhousie University, Canada
Iain D. Gilchrist
School of Experimental Psychology,
University of Bristol, UK
Chris Harris
SensoriMotor Laboratory,
School of Psychology,
University of Plymouth, UK
Mary M. Hayhoe
Center for Perceptual Systems and
Department of Psychology,
The University of Texas at Austin, USA
John M. Henderson
Department of Psychology,
University of South Carolina, USA
Reinhold Kliegl
Department of Psychology,
University of Potsdam, Germany
Eliana M. Klier
Department of Anatomy and Neurobiology,
Washington University School of Medicine,
USA
Helene Kreysa
Cognitive Interaction Technology
Center of Excellence,
Bielefeld University, Germany
Árni Kristjánsson
Department of Psychology,
University of Iceland, Iceland
Jun Kunimatsu
Department of Physiology,
Hokkaido University School of Medicine,
Japan
Michael F. Land
School of Life Sciences,
University of Sussex, UK
Simon P. Liversedge
School of Psychology,
University of Southampton, UK
Casimir J.H. Ludwig
School of Experimental Psychology,
University of Bristol, UK
Beatriz Luna
Laboratory of Neurocognitive Development,
Western Psychiatric Institute and Clinic,
University of Pittsburgh Medical Center,
University of Pittsburgh, USA
Martin Paré
Centre for Neuroscience Studies,
Queen’s University, Canada
Kevin B. Paterson
School of Psychology,
University of Leicester, UK
Matthew S. Peterson
Department of Psychology,
George Mason University, USA
Martin J. Pickering
Department of Psychology,
University of Edinburgh, UK
Alexander Pollatsek
Department of Psychology,
University of Massachusetts
Keith Rayner
Department of Psychology,
University of California, San Diego, USA
Stephen L. Macknik
Barrow Neurological Institute,
Phoenix, USA
Erik D. Reichle
Department of Psychology,
University of Pittsburgh, USA
Safraaz Mahamed
Department of Neuroscience,
University of Wisconsin, USA
Eyal M. Reingold
Department of Psychology,
University of Toronto, Canada
Susana Martinez-Conde
Laboratory of Visual Neuroscience,
Division of Neurobiology,
Barrow Neurological Institute, Phoenix, USA
Antje Sauermann
Department of Linguistics,
University of Potsdam, Germany
Michi Matsukura
Department of Psychology,
University of Iowa, USA
Jennifer E. McDowell
Departments of Psychology and
Neuroscience, Bio-Imaging Research Center,
University of Georgia, USA
Douglas P. Munoz
Centre for Neuroscience Studies,
Queen’s University, Canada
René M. Müri
Perception and Eye Movement Laboratory,
Department of Neurology,
University of Bern, Switzerland
Thomas Nyffeler
Perception and Eye Movement Laboratory,
Department of Neurology,
University of Bern, Switzerland
Jeffrey D. Schall
Department of Psychology,
Vanderbilt University, USA
Heather Sheridan
Department of Psychology,
University of Toronto, Canada
Michael J. Spivey
Cognitive and Information Sciences,
University of California, Merced, USA
Adrian Staub
Department of Psychology,
University of Massachusetts Amherst, USA
Petroc Sumner
School of Psychology,
Cardiff University, UK
John A. Sweeney
Department of Psychiatry and Pediatrics,
University of Texas Southwestern,
Dallas, USA
Masaki Tanaka
Department of Physiology,
Hokkaido University School of Medicine, Japan
Nicholas J. Wade
School of Psychology,
University of Dundee, UK
Benjamin W. Tatler
School of Psychology,
University of Dundee, UK
Robin Walker
Department of Psychology, Royal Holloway,
University of London, UK
Peter Thier
Department of Cognitive Neurology,
Hertie-Institute for Clinical Brain Research,
University of Tübingen, Germany
Tessa Warren
Learning Research and Development
Center and Departments of Psychology
and Linguistics,
University of Pittsburgh, USA
Marion R. Van Horn
Department of Physiology,
McGill University, Canada
Katerina Velanova
Laboratory of Neurocognitive Development,
Western Psychiatric Institute and
Clinic, University of Pittsburgh Medical Center,
University of Pittsburgh, USA
Françoise Vitu
Laboratoire de Psychologie Cognitive,
CNRS UMR 6146, Université de Provence,
Marseille, France
Corinne R. Vokoun
Department of Neuroscience,
University of Wisconsin, USA
Brian J. White
Centre for Neuroscience Studies,
Queen’s University, Canada
Sarah J. White
School of Psychology,
University of Leicester, UK
Guoli Yan
Academy of Psychology and Behavior,
Tianjin Normal University, P.R. China
Chuanli Zang
Academy of Psychology and Behavior,
Tianjin Normal University, P.R. China
List of Abbreviations
2D  two-dimensional
3D  three-dimensional
AC  attentional capture
ACC  anterior cingulate cortex
ADHD  attention deficit hyperactivity disorder
AES  anterior ectosylvian sulcus
AFC  alternative forced choice
AoA  age of acquisition
AOS  accessory optic system
APB  2-amino-4-phosphonobutyric acid
AS  Asperger’s syndrome
ASD  autism spectrum disorder
BCI  brain–computer interface
BG  basal ganglia
BN  burst neuron
BOLD  blood oxygen-level dependent
BSS  blind source separation
BTN  burst-tonic neuron
CCN  central caudal nucleus
CDF  cumulative distribution function
CEF  cingulate eye field
CF  climbing fibre
cFN  caudal fastigial nucleus
CL  central lateral
CM  central median
CMAd  dorsal cingulate motor area
CMAr  rostral cingulate motor area
CMAv  ventral cingulate motor area
cMRF  central mesencephalic reticular formation
CNV  contingent negative variation
DA  dopamine
dLLBN  dorsal long-lead burst neuron
DLPFC  dorsolateral prefrontal cortex
DLPN  dorsolateral pontine nucleus
DPF  dorsal paraflocculus
dSC  deep layers of the superior colliculus
DTI  diffusion tensor imaging
dTMS  double-pulse transcranial magnetic stimulation
DTN  dorsal terminal nucleus
EBN  excitatory burst neuron
EFRP  eye fixation-related potential
EFT  embedded figures test
EH  eye-head
eLLBN  excitatory long-lead burst neuron
EMG  electromyography
EOG  electro-oculography
ERP  event-related potential
FEF  frontal eye field
fMRI  functional magnetic resonance imaging
floccular region
FTN  flocculus target neuron
GABA  gamma-aminobutyric acid
GBS  gamma-band synchronization
GC  granule cell
GFS  general flash suppression
GP  globus pallidus
GPe  globus pallidus external
GPi  globus pallidus internal
HCI  human–computer interface
hOKR  horizontal optokinetic reflex
IBN  inhibitory burst neuron
ICA  independent component analysis
iGBR  induced gamma-band response
IML  internal medullary lamina
INC  interstitial nucleus of Cajal
INS  infantile nystagmus syndrome
IOR  inhibition of return
IOVP  inverted optimal viewing position
IPSP  inhibitory postsynaptic potential
ISI  interstimulus interval
ISL  intended saccade length
IVN  inferior (or descending) vestibular nucleus
LD  lateral dorsal
LFP  local field potential
LGN  lateral geniculate nucleus
LIP  lateral intraparietal
LLBN  long-lead burst neuron
LP  levator palpebrae
LTD  long-term depression
LTM  long-term memory
LTN  lateral terminal nucleus
LVN  lateral vestibular nucleus
MD  mediodorsal
MEG  magnetoencephalography
mhOKR  monocular horizontal optokinetic reflex
MLF  medial longitudinal fasciculus
mOKR  monocular optokinetic reflex
MRF  mesencephalic reticular formation
MST  medial superior temporal
MT  middle temporal
MTN  medial terminal nucleus
MVN  medial vestibular nucleus
nBOR  nucleus of the basal optic root
nLM  nucleus lentiformis mesencephali
NOT  nucleus of the optic tract
nPH  nucleus prepositus hypoglossi
NR  near response
NRTP  nucleus reticularis tegmentis pontis
OCD  obsessive–compulsive disorder
OKN  optokinetic nystagmus
OKR  optokinetic reflex
OO  orbicularis oculi
OPN  omnipause neuron
OVP  optimal viewing position
PAN  phasically active neuron
PCN  paracentral nucleus
PDF  probability density function
PET  positron emission tomography
Pf  parafascicular
PFC  prefrontal cortex
PG  processing gradient
PLP  preferred landing position
PPC  posterior parietal cortex
PPN  pedunculopontine nucleus
PPRF  paramedian pontine reticular formation
PSP  progressive supranuclear palsy
PSS  posterior suprasylvian
PVP  position-vestibular-pause neuron
PVP  preferred viewing position
RB  range bias
RCP  rectus capitis posterior
RDE  remote distractor effect
RDK  random dot kinematogram
RF  receptive field
riMLF  rostral interstitial nucleus of the medial longitudinal fasciculus
RIP  raphe interpositus
ROI  region of interest
RT  response time
rTMS  repetitive transcranial magnetic stimulation
RVOR  rotational vestibulo-ocular reflex
SAI  stratum album intermediale
SAP  stratum album profundum
SAS  sequential attention shifts
SBN  saccadic burst neuron
SC  superior colliculus
SCi  intermediate layer of the superior colliculus
SCs  superficial layer of the superior colliculus
SEF  supplementary eye field
SGI  stratum griseum intermediale
SGP  stratum griseum profundum
SGS  stratum griseum superficiale
SIF  saccade initiation failure
SMA  supplementary motor area
SNc  substantia nigra pars compacta
SNr  substantia nigra pars reticulata
SO  stratum opticum
SOA  stimulus onset asynchrony
SOA  supraoculomotor nucleus
SPR  self-paced reading
SRE  saccadic range error
SRT  saccadic reaction time
SSRT  stop signal reaction time
STC  sensory trigeminal complex
STM  short-term memory
STN  subthalamic nucleus
SVN  superior vestibular nucleus
SWM  spatial working memory
SZ  stratum zonale
TAN  tonically active neuron
TD  typically-developed/developing
TMS  transcranial magnetic stimulation
TVOR  translational vestibulo-ocular reflex
VA  ventroanterior
VL  ventrolateral
VLPFC  ventrolateral prefrontal cortex
VN  vestibular nucleus
VOR  vestibulo-ocular reflex
VPL  ventral posterolateral
VPM  ventral posteromedial
VTA  ventral tegmental area
WCC  weak central coherence
WM  working memory
PART 1
The eye movement repertoire
CHAPTER 1
Oculomotor behaviour in vertebrates and invertebrates
Michael F. Land
Abstract
Humans use a ‘saccade and fixate’ strategy when viewing the world, with information gathered
during stabilized fixations, and saccades used to shift gaze direction as rapidly as possible. This strategy is shared by nearly all vertebrates, whether or not their eyes possess foveas. Remarkably, the same
combination is found in many invertebrates with eyes that resolve well. Cephalopod molluscs, decapod crustaceans, and most insects stabilize their eyes, head, or body against rotation while in motion,
and also make fast gaze-shifting saccades. Praying mantids, like primates, are also capable of smooth
tracking. Other invertebrates have eyes in which the retina is a long narrow strip, and these make
scanning movements at right angles to the strip. These include heteropod sea-snails, certain copepods, jumping spiders, mantis shrimps, and some water beetle larvae. Scanning speeds are always
just slow enough for resolution not to be compromised.
Humans and other primates
To begin this chapter I will summarize the human eye movement repertoire, and use it as a basis for
comparing the eye movement of other animals. Humans and higher primates have four kinds of eye
movements. Saccades are the fast eye movements that redirect gaze. They can have a wide variety of
amplitudes and for a given size have a defined duration and peak velocity. They occur up to four
times a second and while in progress subjects are effectively blind. Between saccades gaze is
held almost stationary (fixation) by slow stabilizing movements. These are brought about by two
reflexes: the vestibulo-ocular reflex (VOR) in which the semicircular canals measure head velocity,
and cause the eye muscles to move the eyes in the head at an equal and opposite velocity (see
also Cullen and Van Horn, Chapter 9, this volume); and the optokinetic reflex (OKR), in which
wide-field image velocity signals from retinal ganglion cells are fed back to the eye muscles to null
out any residual slip of the image across the retina (see also Distler and Hoffmann, Chapter 4, this
volume). For small targets smooth pursuit is possible up to speeds of about 15°s−1. At higher velocities
pursuit starts to lag the target, and the tracking becomes interspersed with catch-up saccades, which
take over completely above 100°s−1 (see also Barnes, Chapter 7, this volume). When a target moves
towards or away from the viewer it is tracked by vergence movements in which the eyes converge or
diverge, in contrast to all other eye movements which are conjugate, i.e. the eyes move in the same
direction.
Other vertebrates
Mammals
Mammalian voluntary saccades are always conjugate, unlike other vertebrates where they are, to
varying degrees, independent in the two eyes. Spontaneous saccades are most prominent in primates,
and are associated with the deployment of the fovea to fixate new targets. Other mammals do not
have a deep fovea, although they may have ‘areas’ or ‘visual streaks’ with a somewhat elevated
ganglion cell density, and hence increased functional acuity. Spontaneous saccades are also seen in
carnivores such as cats and dogs, but are much less obvious in ungulates. Horses and cows, for example, will face and track an object of interest with their heads, whilst the laterally directed eyes perform
a typical counter-rotation, interspersed with resetting saccades, as the head rotates. The eyes are
simply retaining a fixed relationship with the surroundings, rather than being directed at targets.
Similarly, many non-predators, such as rabbits and most rodents, make few if any spontaneous
saccades when stationary but vigilant.
Birds
Birds have large eyes, light heads, and flexible necks, and typically move their eyes by moving their
heads. This leads to the very characteristic pattern of head saccades seen when birds are foraging or
watching out for predators. Most birds do have eye movements of limited extent (typically about
20°), but the main function of these is to ‘sharpen up’ the head saccades (Wallman and Letelier,
1993). The head starts to move but the eye initially counter-rotates (VOR); the eye then moves
rapidly to a new position, before counter-rotating again as the head catches up. This ensures that the
gaze change is extremely rapid, and stable fixations last longer than they would without the eye
movements. Pursuit movements, made by the head, are seen in some birds, particularly predators.
Although they can appear smooth, they actually consist of a succession of small saccadic head movements. It is not clear whether eye movements are ever involved in pursuit: they are certainly not
available in owls, where the eyes are so large that eye movements are impossible. Most birds have two
foveas in each eye: a central fovea directed anterolaterally and a temporal fovea directed forwards.
The temporal foveas of the two eyes are used binocularly, for example, when pecking at food. The
size of the binocular field is typically about 20°, but may be as much as 50° in owls. Many sea-birds
have a visual streak, supplementing or replacing the foveas, and corresponding to the location of
the horizon and the region of sea below it. Another characteristic behaviour, particularly of ground
feeding birds, is the pattern of translational head saccades (head-bobbing). When the bird walks
forward the neck moves the head backwards. It is then thrust rapidly forwards before moving
backwards again at about the same speed as the forward locomotion. The result is that the head is
stabilized in space for much of the time, presumably permitting an unblurred view of nearby objects
to the side. Birds in flight do not head-bob, except when landing. They do, however, make head
saccades and maintain rotationally stable fixations (Eckmeier et al., 2008).
Reptiles
Many reptiles sit for long periods without making spontaneous eye movements, although most are
capable of them, and they certainly have the usual vestibular and visual stabilizing reflexes. Lizards
make monocular saccades to fixate new targets. These are particularly obvious in chameleons,
in which independent saccadic eye movements are used to survey the surroundings with the well-developed foveas. Once an insect prey has been located, the chameleon turns towards it, rotates the
two eyes forward, fixates the prey binocularly, and catches it with a ballistic extension of the tongue.
The distance of the prey is determined not by triangulation but by focus (Harkness, 1977). A typical
lizard has about 40° of eye movement, but in chameleons the turret-like eyes can be rotated through
180° horizontally and 90° vertically. Although chameleons give the impression that they are surveying
the two halves of the visual world simultaneously and independently, their attention is only directed
to one side at a time. While one eye is actively ‘looking’, the other is actually defocused (Ott et al., 1998).
Attention switches between eyes at approximately 1-s intervals.
Amphibia
According to Walls (1942) no amphibian is known to perform any eye movements other than
retraction and elevation. This is probably an overstatement: some tree frogs with visible slit pupils
can be seen to counter-rotate their eyes when the head rotates, but in general frogs and toads make
no spontaneous eye movements. Their strategy seems to be to let the stationary world go blank (as it
does with humans when the image is stabilized) so that they only see motion, which is likely to be
caused by either potential prey or a predator. During prey capture the head and body rotate to align
the prey with the tongue.
Fish
The majority of fish do not have a fovea and do not have eye movements that target particular
objects. The typical pattern of eye movements that accompany locomotion is for the eyes to make
a saccade in the direction of an upcoming turn, and then for the eyes to counter-rotate as the turn
[Figure 1.1 panels (traces not reproduced): horizontal gaze, head, and eye-in-head records for goldfish, rock crab, blowfly, and cuttlefish.]
Fig. 1.1 Examples of independently evolved ‘saccade and fixate’ strategies from four different
animal groups. Goldfish turning; redrawn from Easter et al. (1974). Rock crab: reproduced with permission
from Journal of Experimental Biology, Paul, H., Nalbach, H-O., & Varjú, D., Eye movements in the rock
crab Pachygrapsus marmoratus walking along straight and curved paths, 1990, 154, pp. 81–97,
jeb.biologists.org. Blowfly in flight, inset shows the contribution of the neck (H-T); reprinted by
permission from Macmillan Publishers Ltd: Nature, C. Schilstra and J.H. van Hateren, Stabilizing
gaze in flying blowflies, 395(6703), copyright (1998). Cuttlefish (gaze not shown); reproduced with
permission from Journal of Experimental Biology, Land, M.F., Scanning eye movements in a heteropod
mollusc, 1982, 96, pp. 427–430, jeb.biologists.org. All records are in the horizontal (yaw) plane.
is made (Fig. 1.1). The saccades made by the two eyes are not usually synchronized, as they would
be in mammals. As Walls (1962) points out, this behaviour represents the origin of the saccade and
fixate strategy in vertebrates: ‘. . . the ancient and original function of the eye muscles was not really
to move the eye, but rather to hold it still with respect to the environment . . .’ Rotational movements
around the optic axis, tending to keep the dorsoventral axis of the eye aligned with gravity, are particularly noticeable when a fish tilts the body up or down, indicating the importance of the oblique
muscles. Predating fish bring the whole body into line with the prey. Some fish do have an area, or
even a fovea, located in the temporal retina which can be actively directed forward during prey
capture. Such fish often have a pear-shaped pupil with an open space nasal to the lens to permit a
forward view for the temporal retina. Fixation may then be monocular or binocular. Sea horses
(Hippocampus) and their relatives are unusual in having a well-developed fovea near the centre of
each retina, and their eye movements are very similar to those of chameleons. The turret-like eyes
make saccades several times a second, apparently quite independently in the two eyes.
Invertebrates
Eye movements are seen in those invertebrate groups that have well-developed vision. These include
arthropods of the three major groups (chelicerates, crustaceans, and insects), and some molluscs,
notably the cephalopods and some gastropod snails. Some of the patterns of eye movement are
remarkably similar to those of vertebrates, although independently evolved. For example, ‘saccade
and fixate’ behaviour is seen in cephalopod molluscs, crabs, and many insects. There are, however,
some oculomotor behaviours that are not seen in vertebrates, notably the scanning eye movements
seen in some spiders, several crustaceans, and certain insect larvae. These are associated with eyes in
which the retina is a linear strip rather than a two-dimensional surface, and scanning the strip across
the field of view provides the second dimension (Fig. 1.2).
Chelicerates
The chelicerates include the king crabs, scorpions, spiders, and mites. Of these only the spiders and
some water mites are known to move their eyes. Spiders have eight single-chambered eyes, six of which
(the secondary or side eyes) are fixed, but in some families the front eyes (anteromedian or principal
eyes) are moveable. Usually this is a simple scanning movement of a few degrees, confined to one plane.
The exceptions are the jumping spiders.
Jumping spiders
These spiders (Salticidae) stalk their prey—usually insects—in the way a cat would stalk a bird, but
like many predators they also have an elaborate courtship display, part of whose function is to make
sure the recipient knows that they are not for eating. When a movement is detected by the side eyes
the spider turns to face its cause with the principal eyes. These much larger eyes have a complex
repertoire of eye movements, which involve the same degrees of freedom as the human eye: left–right
and up–down rotations and torsion around the visual axis. Unlike vertebrates, the eye as a whole
does not rotate, but the retina is moved across the stationary image by a set of six eye muscles attached
to the eye tube. Each muscle is innervated by a single axon (Land, 1969). The principal eye retinae are
elongated vertical structures, with the receptors arranged in four layers, the deepest of which is about
200 cells high by five cells wide. When the animal is at rest the retinae make spontaneous excursions
of up to 50°, mostly in the horizontal plane, and these are mostly but not exclusively conjugate.
However, if a moving object is detected in the field of view of the (fixed) anterolateral eyes, the high-resolution central regions of the principal retinae are directed to it with a conjugate saccade. If the
target continues to move both retinae will track it smoothly. Thereafter the retinae will usually scan
the target using a stereotyped pattern of movements in which they oscillate laterally across the target
with a frequency of 0.5–1 Hz, while at the same time making 50° torsional movements around the
eye’s axis at a much slower frequency, between 0.2 and 0.07 Hz (Fig. 1.2).
[Figure 1.2 panels (traces not reproduced): scanning records for Oxygyrus, Labidocera, Odontodactylus, and Phidippus.]
Fig. 1.2 Examples of animals with scanning eyes. Oxygyrus is a planktonic sea-snail. Views of the
eye when pointing downwards and laterally. The insert on the right shows the quadrant covered in an
upward scan. Below is a record of eight scans over an 18-s period. Reproduced with permission from
Journal of Experimental Biology, Land, M.F., The functions of eye and body movements in Labidocera
and other copepods, 1988, 140, pp. 381–391, jeb.biologists.org. Labidocera is a copepod
crustacean. View of head from the side showing the eye-cup in two positions. Extended field of view
above the animal is shown on right. Below is a record of scans made over 5 s. Reproduced with
permission from Journal of Experimental Biology, Land, M.F., Movements of the retinae of jumping
spiders in response to visual stimuli, 1969, 51, pp. 471–483, jeb.biologists.org. Odontodactylus is a
mantis shrimp (stomatopod crustacean). Animal from the front showing resting orientation of the
mid-bands of the eyes. Below is a record of five scanning movements showing complex motion
around three axes; orientation of mid-band shown on right. With kind permission from Springer
Science+Business Media: Journal of Comparative Physiology A, The eye-movements of the mantis
shrimp Odontodactylus scyllarus (Crustacea: Stomatopoda), 167(2), 1990, M.F. Land. Phidippus is
a jumping spider (Salticidae). View from the front showing principal and anterolateral eyes, and
angles made by the legs. Below is a diagram and recording of the movements of the retinae of the
principal eyes when scanning a recently presented square target. After Land (1969).
These scanning eye movements almost certainly represent a procedure for determining the
identity of the target. Experiments by the ethologist Oscar Drees in the 1950s had established that
the key feature that determines whether a male jumping spider will make a courtship display in
front of a picture of another spider is the presence of legs. Jumping spider legs typically make
angles of about 30° with the vertical axis of the body (many salticids also have body markings
making similar angles), and it seems that it is these linear contours that the scanning procedure is
designed to pick up. If an object of appropriate size lacks appropriate contours then it is treated as
potential prey and stalked. Colour markings, including ultraviolet, are also involved in identifying
conspecifics.
Crustacea
Daphnia
In spite of having a single cyclopean eye with only 22 ommatidia, the water flea Daphnia pulex has
an impressive repertoire of eye movements. There are ‘flicks’ made to brief flashes of light, sustained
‘fixations’ to light stimuli of long duration, and ‘tracking’ movements to moving light stimuli made
at about half the speed of the stimulus itself (Consi et al., 1990). There is a region of the eye, 80°
dorsal to its axis, described as the ‘null region’ from which eye movements are not evoked, and it is
proposed that flick and fixation movements rotate the eye so that a stimulus is brought into this
region, and is then tracked when it strays outside the boundaries of this zone. Since Daphnia lives on
suspended food particles the main role of the eyes is likely to be to orient the animal to the downwelling light while swimming.
Copepods
Most copepods are small filter-feeding crustaceans with insignificant tripartite ‘nauplius’ eyes.
However in a few groups these eyes have become larger and more specialized. The most famous of
these is Copilia, studied by Sigmund Exner in the 1890s. Here two elements of the nauplius eye have
become large and separated, and have each developed an optical system consisting of a large lens
situated anteriorly in the body which throws an image onto a smaller second lens some distance
behind the first, rather as in a pair of binoculars. Immediately behind the second lens is a small
group of three to seven receptors. The second lens and receptors move laterally in the body, through
about 15° with respect to the first lens, thereby scanning its image. These scans are in opposite
directions in the two eyes and occur spontaneously at rates of 0.5–5 Hz. They can also
be evoked by moving a stripe pattern in front of the animal (Downing, 1972). Female Copilia
are predatory (only females have scanning eyes) so presumably this arrangement facilitates prey
capture. However, as the field of view of the retina is only about 3° wide, extended by the scans of the
two eyes to a line perhaps 30° long, the amount of sea scrutinized is still very small. One suggestion
is that Copilia feeds during periods of planktonic vertical migration, and this vertical motion of
potential prey through Copilia’s scan line effectively provides a second dimension to its field
of view.
Another copepod with a scanning system is Labidocera. It is a member of a family, the Pontellidae,
in which several species have well-developed visual systems (see Land and Nilsson, 2002). In
Labidocera the males have an enlarged pair of upward-pointing eyes, with spherical lenses, which cast
images onto a pair of conjoined retinae. Each retina consists of five vertically oriented flat receptors
which together make a line subtending 40° laterally by less than 5° anteroposteriorly. The line of
receptors is moved by a pair of muscles which pull the eye cups backwards, and by elastic ligaments
that pull them forwards (Fig. 1.2). The maximum excursion is about 40°, meaning that a scan causes
the receptor line to scan through a 40° by 40° square of the sea above the animal at a speed of about
220°s−1 (Land, 1988). Females have much reduced eyes, which leads one to suspect that the male’s
eyes are used in mate finding. Both sexes are elongated and deep blue in colour, which suggests that
they might make detectable targets for such a scanning mechanism.
Stomatopods
The mantis shrimps are an order of large malacostracan crustaceans that mainly inhabit coral reefs.
They are fearsome predators with a unique visual system. Each of the two mobile compound eyes
has three parts: an upper and a lower hemisphere of fairly conventional apposition construction, separated by a narrow mid-band which, in most genera, has six rows of ommatidia. Four of these rows
subserve colour vision, with a system involving a total of 12 visual pigments arranged in three tiers of
receptors in each row. The other two rows are responsible for polarization vision (Cronin and
Marshall, 2004). The field of view of the mid-band is about 180° by 5°, and so represents a linear strip
embedded in a normal two-dimensional eye. Not surprisingly the eye movements, which are always
independent, are quite remarkable. The eyes make fast saccades to target new objects, they can track
objects saccadically, and show typical slow optokinetic behaviour in a rotating drum. In addition they
make low-amplitude (∼12°), slow (∼40°s−1) rotations that usually also involve some torsion (Land
et al., 1990). These movements are mostly at right angles to the mid-band, and are interpreted as scanning movements that allow the mid-band to analyse objects for their colour and polarization content
(Fig. 1.2). It is worth noting that reef-living mantis shrimps are themselves highly coloured, with
particular body regions that reflect both linear and circularly polarized light (Chiou et al., 2008).
Decapod crustaceans
The shrimps, lobsters, crayfish, and crabs all have mobile eyes, and show strong optokinetic responses,
although none either target or track objects with the eyes alone. Crabs have been best studied, and
their saccade and fixate behaviour is remarkably similar to that of lower vertebrates (Fig. 1.1). The
compound eyes are fixed on eyestalks, which are supplied with muscles that can rotate them about
all three axes (yaw, pitch, and tilt) with respect to the body carapace. When walking, the eyes are
stabilized about all of these axes. A crab can change direction either by rotating, or by changing the
direction of translation; crabs typically walk sideways, although not exclusively (Paul et al., 1990).
Rotation of gaze is prevented by the OKR, supplemented by information from the statocysts and
leg proprioceptors. From time to time saccades in the opposite direction interrupt these stabilizing
rotations, just as they do in fish. There is little or no compensation for translational changes. The
fiddler crabs (Uca) keep their eyes exactly aligned with the horizon, using a combination of a statocyst
response to gravity and the sight of the horizon itself. (Uca eyes have a region of high vertical acuity
corresponding to a strip around the horizon—effectively a visual streak—but no corresponding
specialization in the horizontal plane). This behaviour is important because these crabs determine if
a moving object is likely to be a predator by whether or not it intersects the horizon. If it does, then it
must be bigger than the crab; if it does not, then it is probably another crab (Zeil and Hemmi, 2006).
Insects
Insects have eyes that are a fixed part of the head, and so their eye movements are essentially neck movements, much as in birds. In some flying insects, notably hoverflies, the whole body can function as the
vehicle for redirecting gaze and in that case some flight manoeuvres can be thought of as eye movements.
Mantids
Praying mantids have long attracted attention because of their creepily familiar way of peering attentively
at other animate objects. Mantids have a forward-pointing region of high acuity in each eye in which the
inter-ommatidial angle is about 0.6°, as opposed to 2° in the periphery. Moving targets seen in the
periphery are brought to this acute zone (analogous to a fovea) with saccadic neck movements that can
reach 560°s–1.
Rossel (1980) filmed the tracking behaviour evoked by moving a live cockroach across the
mantid’s field of view. Against a plain background tracking was smooth, with an overall gain of 0.95
when the target was in the 20° zone around the fixation centre, but fell to zero at 30° from the centre.
10 · Michael F. Land
Fig. 1.3 a) Praying mantis tracking a stimulus (live cockroach) smoothly against a plain
background, and saccadically against a textured background. With kind permission from Springer
Science+Business Media: Journal of Comparative Physiology A, Foveal fixation and tracking in the
praying mantis, 139(4), 1980, Samuel Rossel. b) Young mantis ‘peering’ before jumping. Note that
the head does not rotate in space as it is moved laterally by the body. With kind permission from
Springer Science+Business Media: Journal of Comparative Physiology A, Motion and vision: why
animals move their eyes, 185(4), 1999, M.F. Land.
Interestingly, against a textured background tracking consisted entirely of 10–20° saccades, evoked
when the target moved outside the fixation zone (Fig. 1.3a). In normal settings tracking is exclusively
saccadic. It seems that smooth tracking is normally prevented by the powerful optokinetic response,
and that the saccadic mechanism overcomes this. However, the velocity input responsible for smooth
tracking does show up during saccadic tracking, as these saccades do have a velocity as well as a position component.
Mantids and locusts have another interesting oculomotor behaviour. When about to jump from
one stem to another they precede the jump by ‘peering’ (Fig. 1.3b). The whole body rotates so that
the head is swung from side to side by up to a centimetre. The head, however, does not rotate,
because retinal slip detected in the lateral parts of the eye causes the neck to counter-rotate the
head (Collett, 1978). This means that in the forward direction the eyes see a pure lateral translational
flow-field, allowing unambiguous measurement of distance.
Dragonflies
Dragonflies have a pronounced high resolution crescent (fovea) across the top of their compound eyes.
This region points upwards and forwards, and is used to detect insect prey passing overhead. Some dragonflies catch prey while cruising, others detect prey from a perched position from which they dart up to
make the capture from below. Erythemis simplicicollis, a percher, captures prey by taking an interception
course, aiming ahead of the prey’s current position. The head first targets the prey with a neck saccade
which centres it on the middle of the fovea. Then as the dragonfly takes off the fovea maintains fixation
while the body is manoeuvred into the interception course, which in general will not be in line with the
head. Capture flights are nearly always successful and rarely last longer than 200ms (Olberg et al., 2007).
Cruising dragonflies also use a flight manoeuvre—motion camouflage—which makes them appear
stationary on the retina of their prey, or a conspecific during a territorial confrontation, even though
they are engaged in active pursuit. This involves moving in such a way that as the prey moves forward
the bearing of the predator remains constant. In the simplest case this could involve flying at the same
speed on a parallel track, although there are other more complex strategies (Mizutani et al., 2003).
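The constant-bearing geometry behind motion camouflage is easy to verify numerically. The sketch below is a toy illustration of the simplest case described above (same speed, parallel track); all coordinates, speeds, and the helper name `bearing_deg` are invented for the example, not taken from the chapter:

```python
import math

def bearing_deg(observer, target):
    """Direction of target as seen from observer, in degrees."""
    dx, dy = target[0] - observer[0], target[1] - observer[1]
    return math.degrees(math.atan2(dy, dx))

speed = 2.0  # m/s, shared by prey and pursuer (illustrative value)
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    prey = (speed * t, 0.0)            # prey flies along the x-axis
    pursuer = (speed * t - 3.0, -4.0)  # same velocity, constant offset
    print(f"t={t:.1f}s  bearing = {bearing_deg(prey, pursuer):.1f} deg")

# The bearing never changes, so the pursuer produces no angular motion on
# the prey's retina: it reads as a stationary point despite active pursuit.
```

Because the offset vector between the two tracks is constant, the bearing is fixed at every instant; the more complex strategies mentioned by Mizutani et al. hold the bearing constant while also closing the distance.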
Blowflies
When making turns in flight, blowflies and houseflies make rapid saccade-like turns of the head and
body at rates of up to 10 per second. Using a remarkable system of two search coils, Schilstra and van
Hateren (1998) managed to record both thorax and head movements during flight around all three
axes of rotation. Their main findings were that both thorax and body make fast turns, but the head
movement is augmented by neck movements and so moves faster. A 25° turn takes the thorax about
45ms to complete, but the head-in-space (i.e. gaze) change is made in about 20ms. When the thorax
starts to yaw the head first counter-rotates, then moves rapidly in the same direction as the thorax, then
counter-rotates again at the end of the turn (Fig. 1.1). The strategy is very similar to the eye-in-head
saccades of vertebrates, and the function is the same: to keep motion blur to a minimum. The yaw
saccades are accompanied by roll of the thorax of 10° or more, which is counteracted by the neck so
that the amount of roll experienced by the head is only 1–2°. Changes in pitch are minimal. The
principal mechanism involved in stabilizing gaze is the haltere system. These are gyroscopic sensors
on the thorax that are functionally analogous to the vertebrate semicircular canals, although anatomically homologous to the hind wings of other insects. Male flies also chase conspecifics in flight, so
have a visual pursuit mechanism, but the relative movements of head and body during such chases
have not been investigated.
Hoverflies
These are perhaps the most agile of all insects, with an astonishing ability to control flight direction
and speed. The larger species can often be seen in sunlit woodland making sudden interception
flights from stationary hovering positions in mid-air. The small hoverfly Syritta pipiens is sexually
dimorphic, with the male possessing a region of enlarged facets and increased acuity (∼0.6°) in the
frontodorsal region of each eye; in the rest of the male eye, and the whole of the female eye, interommatidial angles are about 1.5°. Males do not chase females, but shadow them at a distance of
about 10 cm, until they land on a flower, at which point the males pounce. At this distance the male
is invisible to the female by virtue of her lower resolution. The female’s flight consists of saccadic
turns of the whole body, with periods of straight flight between them. Unlike blowflies, the neck does
not rotate the head separately, at least in the yaw plane. The males have a similar mode of flight,
but when they see a female they track her by rotating the body smoothly, keeping her image within
about 5° of the body axis, and maintaining a nearly constant distance at the same time (Fig. 1.4).
If the female moves fast they can also track her with a series of saccades. Thus the flight path of the
male can be regarded as both a series of flight manoeuvres and as eye movements. Think of a primate
eye with wings (Collett and Land, 1975).
Hymenoptera
When bees and wasps leave their nests, or find a new source of food, they typically make a reconnaissance
flight consisting of a series of increasing arcs centred on the nest or feeder. According to Zeil et al.
(2007) they are setting up series of boundary ‘snapshots’ which define a V-shaped flight corridor
leading to the nest, and on their return use these views to locate the centre of this corridor. During
these arcing flights ground-nesting wasps (Cerceris) make a series of head saccades interspersed with
stabilizing movements which, as in blowflies, serve to keep gaze direction temporarily constant (Fig. 1.4).
The saccades are always made towards the nest, and tend to keep its location on the retina in a roughly
constant location to one side or the other of the head axis during each arc.
Water beetle larvae
The larvae of certain water beetles have six single-chambered ocelli on each side of the head. Two
of these are large, tubular, and very unusual. In Thermonectus marmoratus the retinae consist of a
horizontal row of receptors organized in two tiers, somewhat reminiscent of the principal eyes of
Fig. 1.4 Syritta is a small hoverfly. The figure shows a single video record of the body axes of a
pair of flies, seen from above. The flight of the female is essentially saccadic, with long periods in
which the body orientation does not change, punctuated by rapid turns. The male, who is
shadowing her at a constant distance of about 10 cm, rotates smoothly, maintaining her image
within 5° of his body axis. Corresponding times are numbered every 0.4 s. With kind permission
from Springer Science+Business Media: Journal of Comparative Physiology A, Visual control of flight
behaviour in the hoverfly Syritta pipiens L., 99(1), 1975, T.S. Collett. Cerceris is a small ground-nesting wasp. The record shows that it makes a series of expanding arcs as it backs away from the
nest. The lines attached to the head show the orientation of the head axis, and indicate that, like
the hoverfly, the head direction changes saccadically (arrows), with intervening periods of flight in
which there is no change in orientation. Times in seconds. Redrawn and modified with kind
permission from Cold Spring Harbor Laboratory Press: Invertebrate Neurobiology, p. 389 figure 3b,
2007, North and Greenspan.
jumping spiders. These forward-pointing eyes have horizontal fields of view 30° and 50° wide, but
the vertical field is only a few degrees high. The eyes themselves are not mobile, but during the
approach to prey the neck and thorax move the head and eyes up and down through up to 50°,
thereby greatly extending the fields of view of the linear retinae (Buschbeck et al., 2007).
Molluscs
Cephalopods
The cuttlefish, octopuses, and squid all have large lens eyes which show remarkable evolutionary
convergence with those of fish. They also have a very similar range of eye movements. Collewijn (1970)
used a search-coil technique to measure optokinetic nystagmus in the cuttlefish Sepia officinalis, both
when restrained in a rotating striped drum and when swimming freely. Movement of the stripes at
speeds between 0.035 and 35°s–1 evoked a typical nystagmus, with fast and slow phases of about 10°
amplitude. The fast phases were slower than in vertebrates, with a maximum speed of about 60°s–1.
When rotated in the dark a transient nystagmus was also seen indicating that the statocysts also
contribute to eye stabilization. This stabilization is not perfect: the maximum gain during the slow
phases of nystagmus was rarely more than 0.5. Free swimming cuttlefish show a similar pattern of eye
movement to fish, making saccades into each turn and stabilizing, at least partially, during its progress
(Fig. 1.1).
Sepia and Octopus also fixate their prey (prawns and crabs) before they strike. The presence of prey
in the field of view of one eye evokes a body turn towards it, and an eye movement that tends to bring
the prey’s image towards the posterior region of the retina. This typically involves a forward shift of
the optic axis by about 27.5°, from 80° lateral to the body axis down to 52.5°. The contralateral eye
also rotates forward, resulting in the prey being seen binocularly. In Octopus, attacks are predominantly monocular (Hanlon and Messenger, 1996).
Heteropod gastropods
Most gastropod snails have eyes of some sort, and they move their heads from side to side as they
move forward, so in some sense they can be said to have eye movements. However, there is no
obvious organization to these movements, and in general the eyes are not independently mobile.
There is, however, one superfamily, the Heteropoda, whose members do have well-developed vision.
These marine snails are active predators in the plankton, and have large paired eyes with long narrow
retinae, which are often tiered. Oxygyrus has a retina 400 receptors long and three receptors wide.
The eyes scan an arc of about 90° at right angles to the long axis of the retina, through the dark quadrant beneath the snail (Fig. 1.2), and the assumption is that they are looking for planktonic prey
illuminated by the light from above (Land, 1982).
Conclusions and future directions
The ubiquity of the saccade and fixate strategy, in animals from widely separated evolutionary backgrounds (Fig. 1.1), argues strongly that this behaviour solves a problem common to all animals with
good vision. This problem stems from the fact that photoreceptors are slow: it takes a mammalian
cone about 20ms to respond fully to a change in brightness, and about half this for a fly photoreceptor.
This means that motion blur—the loss of high spatial frequency information—is an inevitable consequence of an unstabilized image. As a general rule, blur starts to set in if the rate of image movement
exceeds one receptor acceptance angle per receptor response time. For humans this is about one
degree per second. The better the resolution of an eye (i.e. the smaller the receptor acceptance angle)
the greater the need for precise stabilization. Thus far, Walls’ (1962) remark, quoted earlier, that the
main function of eye movements is to stabilize gaze with respect to the surroundings, seems to hold
across much of the animal kingdom. It seems, however, that avoidance of motion blur is not the only
reason for image stabilization. Hoverflies and crabs stabilize gaze with much more precision than blur
alone requires. Other possibilities are, first, the prevention of rotation during locomotion so that the
translational flow field can be used for distance judgements (Fig. 1.4), and, second, the need for a
stationary background image to allow the better detection of other moving objects (Land, 1999).
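The blur rule above reduces to a simple ratio, and it is worth making the arithmetic explicit. The sketch below is illustrative only: the function name is invented, the human acceptance angle is back-calculated from the ~1°/s limit quoted in the text, and the fly value borrows the ~1.5° inter-ommatidial angle given for the female Syritta eye as an assumed stand-in for the acceptance angle:

```python
# Motion-blur rule (Land): blur sets in when image speed exceeds
# one receptor acceptance angle per receptor response time.
def blur_limit_deg_per_s(acceptance_angle_deg: float, response_time_s: float) -> float:
    """Maximum image speed, in deg/s, before motion blur sets in."""
    return acceptance_angle_deg / response_time_s

# Human fovea: ~20 ms cone response; an acceptance angle of ~0.02 deg
# (about 1.2 arcmin) recovers the ~1 deg/s limit quoted in the text.
human_limit = blur_limit_deg_per_s(0.02, 0.020)

# Fly: photoreceptors about twice as fast (~10 ms), receptors far coarser
# (~1.5 deg, an assumed stand-in taken from the inter-ommatidial angle).
fly_limit = blur_limit_deg_per_s(1.5, 0.010)

print(f"human blur limit ~ {human_limit:.1f} deg/s")
print(f"fly blur limit   ~ {fly_limit:.0f} deg/s")
```

Note how the limit scales directly with the acceptance angle: the finer the receptor mosaic, the slower the image must move, which is the sense in which better resolution demands more precise stabilization.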
In a mobile animal an image fixed to the surroundings cannot be maintained for long, because
when the animal turns the eyes would hit their back stops, and so there is a need for movements that
return the eyes to a central position. To avoid loss of vision these must take up as little time as possible, hence saccadic eye movements. In some groups, from praying mantids to primates, saccades
have taken on the additional role of centring specialized high-resolution retinal regions onto parts of
the image requiring detailed scrutiny.
As we have seen there are also a number of animals that do not use a saccade and fixate strategy,
but instead make scanning movements in which the retina is moved systematically and continuously
across the image of the surroundings. These include the animals in Fig. 1.2, and the larvae of some
water beetles. All these animals have essentially one-dimensional retinae, and the scanning strategy
is a way of providing a two-dimensional field of view. (Copilia, with an almost point-like field
of view, is an even more extreme case of dimensional reduction.) At first sight it would seem that
scanning eyes would compromise their ability to detect detail by violating the blur rule mentioned
above. This is not the case. In all the known examples the speed of scanning is at or below the ‘one
acceptance angle per response time’ limit (Land, 1999). Thus scanning with a one-dimensional retina
is a viable (and parsimonious) alternative to more conventional two-dimensional viewing.
While we now have a fairly good idea of the range of eye movement types across the animal kingdom,
we have much less knowledge of how sequences of eye movements are used strategically to obtain the
information an animal needs to pursue its behavioural goals. Human eye movement studies have
made some advances in this direction (see Land and Tatler, 2009), and from some insect studies it
is becoming clear that extended sequences of gaze movements are also important, for example, in
obtaining landmark patterns for homing (Cerceris, Fig. 1.4). Future developments in the field are
likely to lie in this direction.
References
Buschbeck, E., Sbita, S., and Morgan, R. (2007). Scanning behavior by larvae of the predaceous diving beetle Thermonectus
marmoratus (Coleoptera: Dytiscidae) enlarges visual field prior to prey capture. Journal of Comparative Physiology A,
193, 973–982.
Chiou, T-H., Kleinlogel, S., Cronin, T., Caldwell, R., Siddiqi, A., Goldizen, A., and Marshall, J. (2008). Circular polarization
vision in a stomatopod crustacean. Current Biology, 18, 429–434.
Collett, T.S. (1978). Peering: a locust behaviour pattern for obtaining motion parallax information. Journal of Experimental
Biology, 76, 237–241.
Collett, T.S. and Land, M.F. (1975). Visual control of flight behaviour in the hoverfly Syritta pipiens L. Journal of
Comparative Physiology, 99, 1–66.
Collewijn, H. (1970). Oculomotor reactions in the cuttlefish Sepia officinalis. Journal of Experimental Biology, 52, 369–384.
Consi, T.R., Passani, M.B., and Macagno, E.R. (1990). Eye movements in Daphnia magna. Regions of the eye are
specialized for different behaviors. Journal of Comparative Physiology A, 166, 411–420.
Cronin, T.S. and Marshall, J. (2004). The unique world of mantis shrimps. In F.R. Prete (ed.) Complex worlds from simpler
nervous systems (pp. 239–269). Cambridge MA: MIT Press.
Downing, A.C. (1972). Optical scanning in the lateral eyes of the copepod Copilia. Perception, 1, 247–261.
Easter, S.S., Johns, P.R., and Heckenlively, D. (1974). Horizontal compensatory eye movements in goldfish (Carassius
auratus). I. The normal animal. Journal of Comparative Physiology, 92, 23–35.
Hanlon, R.T. and Messenger, J.B. (1996). Cephalopod behaviour. Cambridge: Cambridge University Press.
Harkness, L. (1977). Chameleons use accommodation cues to judge distance. Nature, 267, 346–349.
Land, M.F. (1969). Movements of the retinae of jumping spiders in response to visual stimuli. Journal of Experimental
Biology, 51, 471–483.
Land, M.F. (1982). Scanning eye movements in a heteropod mollusc. Journal of Experimental Biology, 96, 427–430.
Land, M.F. (1988). The functions of eye and body movements in Labidocera and other copepods. Journal of Experimental
Biology, 140, 381–391.
Land, M.F. (1999). Motion and vision: why animals move their eyes. Journal of Comparative Physiology A, 185, 341–352.
Land, M.F. and Nilsson, D-E. (2002). Animal eyes. Oxford: Oxford University Press.
Land, M.F. and Tatler, B.W. (2009). Looking and acting: Vision and eye movements in natural behaviour. Oxford: Oxford
University Press.
Land, M.F., Marshall, N.J., Brownless, D., and Cronin, T. (1990). The eye movements of the mantis shrimp Odontodactylus
scyllarus (Crustacea: Stomatopoda). Journal of Comparative Physiology A, 167, 155–166.
Mizutani, A., Chahl, J.S., and Srinivasan, M.V. (2003). Motion camouflage in dragonflies. Nature, 423, 604.
Olberg, R.M., Seaman, R.C., Coats, M.I., and Henry, A.F. (2007). Eye movements and target fixation during dragonfly
prey-interception flights. Journal of Comparative Physiology A, 193, 685–693.
Ott, M., Schaeffel, F., and Kirmse, W. (1998). Binocular vision and accommodation in prey-catching chameleons. Journal
of Comparative Physiology A, 182, 319–330.
Paul, H., Nalbach, H-O., and Varjú, D. (1990). Eye movements in the rock crab Pachygrapsus marmoratus walking along
straight and curved paths. Journal of Experimental Biology, 154, 81–97.
Rossel, S. (1980). Foveal fixation and tracking in the praying mantis. Journal of Comparative Physiology A, 139, 307–331.
Schilstra, C. and van Hateren, J.H. (1998). Stabilizing gaze in flying blowflies. Nature, 395, 654.
Wallman, J. and Letelier, J-C. (1993). Eye movements, head movements and gaze stabilization in birds. In Zeigler, H.P.
and Bischof, H-J. (eds.) Vision, brain and behavior in birds (pp. 245–263). Cambridge, MA: MIT Press.
Walls, G.L. (1942). The vertebrate eye and its adaptive radiation. Bloomfield Hills, MI: Cranbrook Institute. Reprinted
(1967) New York: Hafner.
Walls, G.L. (1962). The evolutionary history of eye movements. Vision Research, 2, 69–80.
Zeil, J. and Hemmi, J.M. (2006). The visual ecology of fiddler crabs. Journal of Comparative Physiology A, 192, 1–25.
Zeil, J., Boeddeker, N., Hemmi, J.M., and Stürzl, W. (2007). Going wild: towards an ecology of visual information
processing. In North, G. and Greenspan, R.J. (eds.) Invertebrate neurobiology (pp. 381–403). New York: Cold Spring
Harbor Laboratory Press.
CHAPTER 2
Origins and applications of eye movement research
Nicholas J. Wade and Benjamin W. Tatler
Abstract.
Introduction
The widespread modern application of indices of oculomotor behaviour might suggest a long historical
concern with them, but such is not the case. Even descriptions of eye movement patterns, let alone
their measurement, reflect relatively recent developments rather than long-standing interests. The aim
of this chapter is to trace the emergence of scientific interest in and the experimental measurement
of eye movements and to survey the areas in which they were and are applied.
In examining the origins of eye movement research it is instructive to place them within a general
framework of discovery and development of naturally occurring phenomena. Such a framework
illuminates the phases through which the studies of phenomena pass. The first phase is a description
of the phenomenon, which is followed by attempts to measure it in some way. Measurement enables
the characteristics of the phenomenon to be determined and incorporated within extant theory.
Finally, the phenomenon is accepted and utilized to gain more insights into wider aspects of behaviour (see Table 2.1). In many cases, the phenomena have been described in antiquity, and no clear
origin can be determined. In others, there is an obvious break with the past and a phenomenon is
described and investigated for the first time. Both of these features apply to the unfolding understanding of eye movements.
Not all aspects of behaviour that are universal, like eye movements, have warranted description unless
their normal functions are disrupted by disease or accident. This has certainly been the case for oculomotor behaviour. Moreover, description does not occur in a vacuum; it requires some context, be it
philosophical, medical, or psychological. Scientific descriptions are a modern preoccupation, and so the
Table 2.1 A natural history of perceptual phenomena indicating the phases through which their
investigation can pass

Origins
    Description:    Phenomenology
    Confirmation:   Refining observations
    Measurement:    Applying extant technology then developing new techniques
Applications
    Interpretation: Within existing psychological concepts
    Integration:    Links with underlying physiology
    Exploitation:   Phenomena used to test particular theoretical ideas
search for the origins of accounts of eye movements must delve into pre-scientific periods. We will examine this heritage under the headings given in Table 2.1. The first phase of understanding any phenomenon
is an adequate description of it. This can occur independently of theory, but the phenomena are rarely
free from the psychological spirit of the times. The latter often intrudes on the phenomenology so that a
clear description remains difficult to extract. Not only do eye movements have many components but
some of these are also dynamic, and difficult to detect without the aid of specially devised instruments.
However, there needs first to be a phenomenon for which the instruments can assist in recording. The
history of eye movement research provides a delightful discourse between description and measurement,
between the subjective reports of effects and their objective measurement. In addition, the study of eye
movements has, perforce, addressed the integration of vision and action. This integration is not restricted
to the functions served by eye movements, but the clues to the patterns of eye movements were usually derived
from examining visual phenomena (see Tatler and Wade, 2003; Wade, 2007a, 2010; Wade and Tatler, 2005;
Wade et al., 2003). Moreover, the descriptions of oculomotor behaviour are based on the sense that is
observing its own actions: eyes observing eye movements. One of the intriguing aspects of the history we
are about to examine is that subjective impressions of their motor aspects were often taken as evidence of
their occurrence rather than observing the eyes of others. Feeling if the eyes had moved in the orbit was used
as an index of whether the eyes had moved; this becomes particularly significant in the context of vertigo.
Origins
The eyes move in order to locate objects of interest in the region of greatest visual resolution—the
fovea. They are moved by muscles attached to the eyeball and the socket. Knowledge about the range
of movements was available to early anatomists, like Galen in the second century:
Since there are four movements of the eyes, one directing them in toward the nose, another out toward the
small corner, one raising them up toward the brows, and another drawing them down toward the cheeks, it
was reasonable that the same number of muscles should be formed to control the movements. . . . Since it
was better that the eye should also rotate, Nature made two other muscles [obliqui, superior and inferior]
that are placed obliquely, one at each eyelid, extending from above and below toward the small corner of the
eye, so that by means of them we turn and roll our eyes just as readily in every direction.
(May, 1968, p. 483.)
Thus, the three axes around which the eye can rotate were clearly described, but the emphasis was on
the direction the eye would take following a rotation rather than the rotations themselves.
Description
Initially, phenomena are described in a general way, often incorporating elements of the putative
cause. The phenomenology is thereafter refined, and perhaps subdivisions of the phenomena
are introduced. Eye movements present particular challenges in this regard because of their dynamic
component. Descriptions of gaze direction preceded those of movement. Many of the phrases in our
language concerning the eyes reflect a greater appreciation of their static than their dynamic aspects.
Indeed, the directions of the eyes are potent cues in social intercourse—we use terms like seeing ‘eye
to eye’ with someone, honesty is determined by ‘looking someone straight in the eye’, or doubt is cast
on someone’s integrity if they have a ‘shifty look’.
Amongst the oldest descriptions of eye directions are those related to binocular alignment. We are
exceedingly sensitive to detecting deviations in the eyes of those who have squints. It is the binocular
features of eye movements that were amongst the first aspects that were subjected to scrutiny. In large
part this was because a consequence of squinting is the disruption of binocular single vision, and this
has been an overarching issue in the study of vision (see Wade, 1998). On the other hand, throughout
the long descriptive history of studies of eye movements a vital characteristic of them remained
hidden from view, both in ourselves and in our observations of others. The rapid discontinuous
nature of eye movements is a relatively recent discovery, as are the small involuntary movements that
accompany fixation. For most of recorded history, the eyes were thought to glide over scenes to alight
on objects of interest, which they would fix with unmoved accuracy.
Binocular eye movements
One of the most distinctive features of eye movements is also related to their binocularity—they tend
to move together. Such conjoint motion is not always in the same direction, as was noted by Aristotle
over 2000 years ago (Ross, 1927). He distinguished between version (movements in the same direction) and convergence eye movements. He also commented that there were certain movements that
were not possible, namely, divergence beyond the straight ahead, and movements of one eye upward
and the other downward. A similar description was given by Ptolemy several centuries later, but he
did appreciate the function eye movements served: they resulted in corresponding visual pyramids
for the two eyes (A.M. Smith, 1996). Aristotle believed that there was a single source of control for
the movements of both eyes, although he did not make such a clear statement as the eleventh-century
scholar, Ibn al-Haytham (or Alhazen): ‘When both eyes are observed as they perceive visible objects,
and their actions and movements are examined, their respective actions and movements will be
found to be always identical' (Sabra, 1989, p. 229).
This later became called the law of equal innervation (see Howard, 2002). Thus, Ibn al-Haytham
proposed that the two eyes acted together in order to achieve single vision, and that the movements
of the eyes were coordinated to this end. It appears that he supported the proposal by observing the
eyes of another person while they were inspecting an object. Robert Smith (1738, 2004) introduced
a simple demonstration of corresponding eye movements—the closed eye follows the movements of
the open one, as can be ascertained by lightly placing a finger over the closed lid. He elaborated on
the analysis of binocular single vision by recourse to the stimulation of corresponding points in the
two eyes, and he related the misalignment of the two eyes to double vision. Smith integrated several
strands of interest in binocular vision: singleness of vision with two eyes was considered to be a
consequence of stimulating corresponding points on the two retinas; distinct vision was restricted to
the central region of the retina; both eyes moved in unison to retain correspondence; this could be
demonstrated by feeling the movements of the closed eye; double vision occurs when one eye is
moved out of alignment with the other. Such squinting can be induced artificially (by displacing one
open eye with the finger) and it also occurs naturally.
Ophthalmology has a longer recorded history than optics: several surviving papyri dating from
the second millennium B.C. describe disorders of the eye and treatments of them. For example, the
Ebers papyrus described dimness of sight and strabismus (see Bryan, 1930). A millennium later,
there were specialists in diseases of the eye practising in Egypt. An illustration of a cataract operation
of the type that was probably performed almost 2000 years ago was redrawn by Thorwald (1962), and
written records indicate that such operations had been conducted a thousand years earlier (see
Magnus, 1901). Greek medicine profited greatly from these earlier endeavours, and added to them.
02-Liversedge-02.indd 19
7/12/2011 3:19:41 PM
20 · Nicholas J. Wade and Benjamin W. Tatler
Neither Egyptian nor Greek ophthalmology was free from the mystical and metaphysical, and observation was frequently subservient to philosophical doctrine.
Strabismus, squint, or distortion of the eyes, was recorded in antiquity, but its reported association
with problems of binocular vision are more recent (see Duke-Elder and Wybar, 1973; Hirschberg,
1899; van Noorden, 1996; Shastid, 1917). The eye specialists in Babylonia, Mesopotamia, and Egypt
must have had a working knowledge of ocular anatomy in order to carry out the operations they are
known to have performed. However, the records that have survived (such as the Ebers papyrus)
relate mainly to the fees they charged and the penalties they suffered for faulty operations rather
than the conditions they cured. Their skills and understanding would have been passed on to Greek
physicians, who both developed and recorded them. Many Greek texts, through their translations,
have been transmitted to us, but any illustrations that they might have included have not survived.
Aristotle described many features of perception, including strabismus and the ensuing diplopia:
‘Why do objects appear double to those whose eyes are distorted? Is it because the movement does
not reach the same point on each of the eyes?’ (Ross, 1927, p. 958b). This reflected Aristotle’s belief
that the eyes operated as a unit rather than independently.
In addition to his description of the extraocular muscles, Galen also recorded that deviations of the
eyes in strabismus were always nasal or temporal. However, he did state that those with strabismus
rarely make errors in object recognition. Corrections for the deviation of an eye were advocated by
many, like Paulus Ægineta in the seventh century. He recommended the use of a mask for children,
which had small apertures so that the eyes would need to align themselves appropriately to see
objects. A similar scheme was introduced and illustrated by Ambroise Paré in the sixteenth century.
More elaborate masks were prescribed by Georg Bartisch (1583). They indicate that he was aware of
convergent and divergent squint and he recognized that strabismus was more amenable to correction in children than in adults. From the eighteenth century attention was directed to the manner in
which those with strabismus saw objects singly. There were many possible causes of squint, most of
which were enumerated by Porterfield (1759). The overriding historical concern with squinting
has been with its medical treatment, rather than its relevance to theories of vision or eye movements.
It was in this medical context that Erasmus Darwin (1778) investigated squinting in a young child
who had alternating dominance.
Porterfield is an important figure in the early descriptive phase of research on eye movements
(see Wade, 2000, 2007b). Prior to publication of his influential Treatise on the eye (Porterfield,
1759), he wrote two long articles in 1737 and 1738; one was on external and the other was on internal motions of the eye. In the course of the latter, Porterfield coined the term accommodation
for the changes in focus of the eye for different distances. He also examined an aphakic patient, in
whom the lens in one eye had been removed, and demonstrated that the lens is involved in accommodation. However, it is his analysis of eye movements during scanning a scene and reading that
are of greater interest here. Porterfield’s studies started, as had earlier ones, from an understanding
that distinct vision was limited to a small region of the visual field—around the point of fixation.
This did not correspond with our visual experience of the scene, and he described this paradox
eloquently:
In viewing any large Body, we are ready to imagine that we see at the same Time all its Parts equally distinct
and clear: But this is a vulgar Error, and we are led into it from the quick and almost continual Motion of
the Eye, whereby it is successively directed towards all the Parts of the Object in an Instant of Time.
(Porterfield, 1737, pp. 185–186.)
Porterfield applied this understanding of the requirement for rapid eye movements to reading
itself, although his analysis was logical rather than psychological:
Thus in viewing any Word, such as MEDICINE, if the Eye be directed to the first Letter M, and keep itself
fixed thereon for observing it accurately, the other Letters will not appear clear or distinct . . . Hence it is that
to view any Object, and thence to receive the strongest and most lively Impressions, it is always necessary
02-Liversedge-02.indd 20
7/12/2011 3:19:42 PM
Origins and applications of eye movement research · 21
we turn our Eyes directly towards it, that its Picture may fall precisely upon this most delicate and sensible
Part of the Organ, which is naturally in the Axis of the Eye.
(Porterfield, 1737, pp. 184–185, original capitals and italics.)
Porterfield did not provide empirical support for the ideas he developed. He also appreciated the
historical background in which his researchers were placed. Moreover, he applied his understanding
of eye movements to a wide range of phenomena, including visual vertigo. It was from vertigo that
the first signs of discontinuous eye movements derived: the fast and slow phases of nystagmus were
demonstrated with the aid of afterimages.
Confirming phenomenology
Visual motion of the world following body rotation was described in antiquity. Perhaps the fullest
accounts were given by Theophrastus (see Sharples, 2003); he described the conditions that can
induce dizziness, including rotation of the body, but he did not relate these to movements of the eyes.
Porterfield (1759) did add an eye movement dimension to it but he denied their existence following
rotation—because he was not aware of feeling his eyes moving. That is, the index of eye movement
he used was the conscious experience of it. He proposed that the post-rotational visual motion was
an illusion in which the stationary eye is believed to be moving. In modern terminology he was
suggesting that it was the signals for eye movements, rather than the eye movements themselves, that
generated the visual motion following body rotation.
Porterfield’s description stimulated others to examine vertigo and to provide interpretations of it,
some of which involved eye movements. The most systematic studies were carried out by William
Charles Wells in the late eighteenth century. They were reported in his monograph concerned with
binocular single vision (Wells, 1792). The text of the book has been reprinted in Wade (2003a). The
characteristics of eye movements following rotation were clearly described: he formed an afterimage
(which acted as a stabilized image) before rotation so that its apparent motion could be compared to
that of an unstabilized image when rotation ceased. The direction of the consequent slow separation
of the two images and their rapid return (nystagmus) was dependent on the orientation of the head
and the direction of body rotation. In the course of a few pages Wells laid the foundations for understanding both eye movements and visual vertigo (which he referred to as giddiness). Thus, Wells
used afterimages to provide an index of how the eyes move by comparing them with real images. He
confirmed his observations by looking at the eyes of another person who had rotated. By these means
he cast doubt on evidence derived from subjective impressions of how the eyes were moving. In addition
to his clear accounts of dynamic eye movements, Wells (1794a) rejected alternative interpretations
of visual vertigo and even provided evidence of torsional nystagmus (Wells, 1794b).
Jan Evangelista Purkinje (1820, 1825) essentially repeated Wells’s experiments, but was ignorant of
them. Indeed, Purkinje’s experiments were inferior to those by Wells, but both adopted interpretations of visual vertigo in terms of eye movements. Purkinje added a novel method for studying
vertigo and eye movements—galvanic or electrical stimulation of the ears. Stimulating the sense
organs with electricity from a voltaic pile was widely applied in the nineteenth century (see Wade,
2003b). The technique was amplified by Eduard Hitzig (1871). He examined eye and head movements during galvanic stimulation of the ears and likened nystagmus to a fisherman’s float drifting
slowly downstream and then being snatched back. The 1870s was the decade of added interest in eye
movement research because of its assistance in determining semicircular canal function. Postrotational eye movements were measured and related to the hydrodynamic theory, which was
proposed independently by Ernst Mach, Josef Breuer, and Alexander Crum Brown.
Breuer (1874) provided a similar description of post-rotational nystagmus to Wells, but he was
able to relate the pattern of eye movements to the function of the semicircular canals. Breuer argued
that during rotation the eyes lag behind the head in order to maintain a steady retinal image; then
they make rapid jerky motions in the direction of head rotation. The eye movements reduce in
amplitude and can stop with rotation at constant angular velocity. When the body rotation ceases the
02-Liversedge-02.indd 21
7/12/2011 3:19:42 PM
22 · Nicholas J. Wade and Benjamin W. Tatler
b
d
e
c
a
Fig. 3.
Fig. 4.
g
Fig. 5.
Fig. 6.
f
h
Fig. 2.1 Schematic diagrams by Crum Brown (1878) of eye movements during and after body
rotation: ‘When a real rotation of the body takes place the eyes do not at first perfectly follow
the movement of the head. While the head moves uniformly the eyes move by jerks. Thus, in
the diagram, Fig. 3, where the abscissæ indicate time and the ordinate the angle described, the
straight line a b represents the continuous rotatory motion of the head and the dotted line the
continuous motion of the eye. Here it will be seen that the eye looks in a fixed direction for a short
time, represented by one of the horizontal portions of the dotted line a b, and then very quickly
follows the motion of the head, remains fixed for a short time, and so on. After the rotation has
continued for some time the motion of the eye gradually changes to that represented by the dotted
line c d in Fig. 4. The eye now never remains fixed, but moves for a short time more slowly than
the head, then quickly makes up to it, then falls behind, and so on. At last the discontinuity of the
motion of the eye disappears, and the eye and the head move together. If now the rotation of the
head be stopped (of course the body stops also) the discontinuous movements of the eyeballs
recommence. They may now be represented by the dotted line in Fig. 5. The intermittent motion of
the eyes gradually becomes less, passing through a condition such as that shown by the dotted
line in Fig. 6, and at last ceases.’ (Crum Brown, 1878, p. 658).
eyes rotate in the same direction as prior head rotation, and the visual world appears to move in the
opposite direction interspersed with rapid returns. He also stated, like Wells, that there is no visual
awareness during these rapid returns. This is a clear reference to saccadic suppression, although he
did not use the term saccade.
Afterimages were also employed by Mach (1873, 1875), who rediscovered Wells’s method for
examining post-rotational eye movements. In addition to observing an afterimage, he applied the
time-honoured technique of placing a finger by the side of the eye, and also using pressure figures
as stabilized retinal images. However, perhaps the clearest descriptions of eye movements during
and following body rotation were given by Crum Brown (1878), who provided diagrams of the
steady head and jerky eye movements (Fig. 2.1). Wells’s account of the dynamics of eye movements
following rotation was beautifully refined by Crum Brown, although no reference was made
to Wells. Like most other historians of the vestibular system, Crum Brown championed Purkinje
as the founder of experimental research linking eye movements to vestibular stimulation (see Wade,
2003b).
In the early twentieth century, two aspects of eye movements and vertigo attracted attention. The
first was the use of post-rotational nystagmus as a clinical index of vestibular function. These characteristics of nystagmus were defined more precisely by Robert Bárány (1906, 1913), who was
awarded the Nobel Prize in 1914 for his vestibular researches. Indeed, the rotating chair is now called
the Bárány chair. He also refined the method of stimulating the semicircular canals with warm and
cold water so that the eye movements they induce could be easily observed. The second aspect was
the use of post-rotational eye movements as a screening test for aviators.
Aircraft flight placed demands on the vestibular receptors that were beyond the normal range.
Only the human centrifuge had subjected the human frame to similar forces. It had been devised by
Erasmus Darwin as a treatment for insanity (Wade et al., 2005), it was adopted as an instrument for
generating vertigo, and now it was applied to simulating the pressures of aircraft flight. Griffith
(1920) examined eye movements with aircraft pilots following body rotation. Initially aviators
were selected on the basis of their vestibular sensitivity, as determined by tests on a Bárány chair.
02-Liversedge-02.indd 22
7/12/2011 3:19:42 PM
Origins and applications of eye movement research · 23
However, seasoned pilots were not so susceptible to vertigo, and he argued that they had habituated
to the repeated rotations to which they had been exposed. In order to examine habituation more
rigorously, Griffith tested students on repetitive rotations in a modified Bárány chair. They were
exposed to 10 rotations of 20 seconds, alternating in direction, per session and they were tested over
many days. Measures of the duration of apparent motion and the number and duration of nystagmic
eye movements were recorded after the body was stopped:
We have found that, as turning is repeated from day to day, the duration of after-nystagmus, the number of
ocular movements made, and the duration of the apparent movement rapidly decrease. The major part of
this decrease occurs within the first few days. The decrease takes place not only from day to day but also
within a period of trials on any single day.
(Griffith, 1920, p. 46.)
The topic of vestibular habituation attracted Raymond Dodge (1923) and he sought to determine
how the eyes moved during and after rotation. The problem of adaptation to rotation is a general
one, and it is relevant to the relative immunity to motion sickness of those, like seafarers, who are
regularly exposed to the conditions which can induce it. As he remarked: ‘The very existence of
habituation to rotation was vigorously denied during the war by those who were responsible for the
revolving chair tests for prospective aviators’ (Dodge, 1923, p. 15). Dodge had previously measured
eye movements during reading and was noted for the ingenuity of the recording instruments he
made. Recording eye movements during rotation provided a particular challenge (Dodge, 1930).
In examining the possibilities he had noticed that the convexity of the cornea was visible as a moving
bulge beneath the eyelid. Dodge (1921) mounted a mirror over the closed eyelid and was able to
record eye movements by the reflections from it. With it he was able to confirm the results of Griffith:
without the possibility of visual fixation, after-nystagmus declines with repeated rotations. In the
next section it will become evident that Dodge was a pioneer of recording eye movements generally,
and during reading in particular. It is of interest to note that following his developments in these
novel areas he engaged in examining eye movements following body rotation—the problem tackled
by previous pioneers over a century earlier. Indeed, Dodge shared with Purkinje a willingness to
engage in heroic experiments. When Purkinje gained access to a rotating chair he noted the effects of
being rotated for one hour in it. Dodge similarly subjected himself to a gruelling regime: ‘The experiment
consisted of a six-day training period during which the subject (myself) was rotated in the same
direction one hundred and fourteen times each day at as nearly uniform speed as possib | https://pl.b-ok.org/book/2064877/af93fe | CC-MAIN-2020-05 | refinedweb | 16,159 | 51.99 |
Hermes: A Cache For Apollo ClientHermes: A Cache For Apollo Client
An alternative cache implementation for Apollo Client, tuned for the performance of heavy GraphQL payloads.
This is very much a work in progress! It currently meets most of our needs internally, but is not yet a drop-in replacement for Apollo's default in memory cache. See the roadmap to get a sense of the work that's left.
What Makes It Different?What Makes It Different?
This cache maintains an immutable & normalized graph of the values received from your GraphQL server. It enables the cache to return direct references to the cache, in order to satisfy queries1. As a result, reads from the cache require minimal work (and can be optimized to constant time lookups in some cases). The tradeoff is that rather than receiving only the fields selected by a GraphQL query, there may be additional fields.
This is in contrast to the built in cache for Apollo (and Relay), which maintain a normalized map of values. The unfortunate reality of those caches is that read operations impose considerable overhead (in CPU and memory) in order to build a result payload. See the motivation behind this cache, as well as the design exploration for a deeper discussion.
1 If your query contains parameterized fields, there is some work that the cache has to perform during read, in order to layer those fields on top of the static values within the cache.
What Doesn't It Do?What Doesn't It Do?
Hermes is still early days! Some things it doesn't (yet!) support:
Union types: Hermes currently ignores union types and type constraints on fragments. It can just work, but you will likely run into trouble if you are expecting the cache to be able to differentiate stored state based on the node type.
writeData: Hermes doesn't yet implement
writeData.
None of these are things that Hermes can't support; we just haven't had time to build those out yet. If you're interested in contributing, please feel free to hit us up; we'd love to work together to get them figured out!
Using The CacheUsing The Cache
Not too different from Apollo's in memory cache, but configuration is slightly different.
import { ApolloClient } from 'apollo-client'; import { Hermes } from 'apollo-cache-hermes'; const client = new ApolloClient({ cache: new Hermes({ … }), // … });
By default, the cache will consider all nodes with an
id field to be entities (e.g. normalized nodes in the graph).
For now, please refer to the source when looking up configuration values - they're likely to change, and new options to be added.
ContributingContributing
Interested in helping out? Awesome! If you've got an idea or issue, please feel free to file it, and provide as much context as you can.
Local DevelopmentLocal Development
If you're looking to contribute some code, it's pretty snappy to start development on this repository:
git clone cd apollo-cache-hermes yarn # Leave this running while you're working on code — you'll receive immediate # feedback on compile and test results for the files you're touching. yarn dev | https://www.npmjs.com/package/apollo-cache-hermes | CC-MAIN-2022-33 | refinedweb | 524 | 63.09 |
mallinfo - Man Page
obtain memory allocation information
Synopsis
#include <malloc.h> struct mallinfo mallinfo(void); struct mallinfo2 mallinfo2(void);
Description
This field is unused, and is always 0. Historically, it was the "highwater mark" for allocated space—that is, the maximum amount of space that was ever allocated (in bytes); this field).
Versions
The mallinfo2() function was added in glibc 2.33.
Attributes
For an explanation of the terms used in this section, see attributes(7)..
Conforming to
These functions are not specified by POSIX or the C standards. A mallinfo() that is returned by the older mallinfo() function are typed as int. However, because some internal bookkeeping values may be of type long, the reported values may wrap around zero and thus be inaccurate.
Examples
The program below employs mallinfo2() ); }
See Also
mmap(2), malloc(3), malloc_info(3), malloc_stats(3), malloc_trim(3), mallopt(3)
Colophon
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
malloc_hook(3), malloc_info(3), malloc_stats(3), mallopt(3).
The man page mallinfo2(3) is an alias of mallinfo(3). | https://www.mankier.com/3/mallinfo | CC-MAIN-2021-49 | refinedweb | 201 | 57.37 |
sandbox/Antoonvh/koch.c
The Fractal Dimension of the Koch Snowflake
This page presents an investigation on the Koch snowflake. You may want to watch this youtube video for a detailed, 20 minute introduction to the topic of fractal curves. Since we have the tools, let us use an adaptive quad-tree grid and the ppm2gif converter.
#include "grid/quadtree.h" #include "utils.h" int main(){
The Koch Curve
First, we need to obtain the snowflake curve. We determine it upto 8 fractal levels of refinement, resulting in points on the curve. This determines the maximum level of refinement that we can use in our grid based analysis.
int jmax=9; int n=4; int nm=((n-1)*pow(4,jmax-1))+1; double xn[nm],yn[nm]; double xk[nm]; double yk[nm]; xk[0]=0; xk[1]=1; xk[2]=0.5; xk[3]=0; yk[0]=0; yk[1]=0; yk[2]=-sqrt(3.)/2; yk[3]=0; fprintf(ferr,"\nyk[n-1]=%g\n\n",yk[n-1]); for (int j=1;j<jmax;j++){//For every fractal level for (int k=0;k<(n-1);k++){//For every existing point add three new points. xn[(k*4)]=xk[(k)]; xn[(k*4)+1]=(2*xk[k]+xk[k+1])/3; xn[(k*4)+2]=((xk[k]+xk[k+1])/2)+((yk[k]-yk[k+1])*sqrt(3.)/6); xn[(k*4)+3]=(xk[k]+2*xk[k+1])/3; yn[(k*4)]=yk[k]; yn[(k*4)+1]=(2*yk[k]+yk[k+1])/3; yn[(k*4)+2]=((yk[k]+yk[k+1])/2)-((xk[k]-xk[k+1])*sqrt(3.)/6); yn[(k*4)+3]=(yk[k]+2*yk[k+1])/3; } yn[4*(n-1)]=yk[n-1]; xn[4*(n-1)]=xk[n-1]; FILE * fpkoch = fopen("koch.dat","w"); for (int l=0;l<(4*(n-1)+1);l++){// Copy yn and xn into yk and xk. yk[l]=yn[l]; xk[l]=xn[l]; } if (j==jmax-1){//Output the points of the curve after the last interation. for (int l=0;l<(4*(n-1)+1);l++){ fprintf(fpkoch,"%g %g\n",xk[l],yk[l]); } } n=(n-1)*4+1; }
Let us check-out the obtained snowflake.
Looks O.K. Note that the curve is defined at a much finer resolution than is displayed in the plot.
The Algorithm
To find the fractal dimension of this curve we iteratively refine the grid and log how many cells are located on the curve.
FILE * fpit = fopen("cells.dat","w"); X0=-0.5; Y0=-1.2; L0=2.0; init_grid(32); scalar c[]; for (int m=0;m<7;m++){ foreach() c[]=0; for(int i=0;i<n;i+=64/pow(2,m)){ Point point = locate(xn[i],yn[i]); c[]+=1.0; } int gcc=0; int gcn=0; foreach(reduction(+:gcc) reduction(+:gcn)){ gcn++; if(c[]>0.5) gcc++; }
For the refinment we (once again) have faith in the wavelet based adaption algorithm to refine (and coarsen) the grid.
boundary({c}); while(adapt_wavelet({c},(double[]){0.1},(7+m),4,{c}).nf); fprintf(fpit,"%d %d %d\n",m,gcc,gcn); fflush(fpit); int i=m; scalar lev[]; foreach(){ lev[]=level; } static FILE * fp1 = popen ("ppm2gif --delay 200 > f.gif", "w"); output_ppm (lev, fp1,512,min=4,max=15); } }
Results
The adaptive grid that is used to refine the cells in the neighborhood of the curve is visualized below. More red colors represent a higher level of refinement.
The various stages of refinement
Again the displayed resolution () is much less than the final resulution of our analysis (that corresponds to ).
Now we check if we can find the fractal dimension of this curve by plotting the number of grid cells against the refinement iteration.
Apparently the fractal dimension is 1.26 and not 1. The latter value would be typical for a ‘regular’ curve. As was explained in the aforementioned youtube video and also noted in another source, the fractal dimension can be expressed analytically as: . That number is rather close to the one found with the present method.
Finally, we check if the total employed number of gridcells scales with the fractal dimension of this “”.
It does appear to be the case. Well done adapt_wavelet() function!
Follow-up
There are two other, related cases to adress the following questions: | http://basilisk.fr/sandbox/Antoonvh/koch.c | CC-MAIN-2018-43 | refinedweb | 733 | 66.64 |
build_requires support
Bug Description
In bug #110133 (setup.py needs code from another egg to build) the case was discussed where a setup.py of a package could not proceed because other eggs were expected to be on the python path. A mention of build_requires was made as a possible solution, though in that this would not work, as I had no control over the other package's setup.py.
This time I'm writing a package myself so I could set a build_requires to mention that cython needs to be installed before building (http://
Here's my setup.py:
from setuptools import setup # tried with plain distutils too
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
name = "unbound",
package_dir = {'': 'src'},
packages = ['unbound'],
ext_modules=[
Extension(
],
headers=
cmdclass = {'build_ext': build_ext},
build_
install_
)
This feature, or the feature discussed in #110133 therefore appears to be needed to let buildout install packages that need special tools (such as cython, a fork of pyrex) to let themselves be installed.
You mean setup_requires, as build_requires is a completely different beast ( http://
For what it's worth, my work around uses hexagonit.
recipe. download to fetch the Cython source and then uses iw.recipe.cmd to install it into my custom python. See the cython-src and cython-install sections in
http://
zcologia. com/sgillies/ hg/gdawg/ file/tip/ buildout. cfg | https://bugs.launchpad.net/zc.buildout/+bug/138260 | CC-MAIN-2014-15 | refinedweb | 228 | 63.49 |
public class Box
Mathematical representation of a box. Used to perform intersection and collision tests against oriented boxes.
Public Constructors
Public Methods
Inherited Methods
From class java.lang.Object
Public Constructors
public Box ()
Create a box with a center of (0,0,0) and a size of (1,1,1).
Public Methods
public Vector3 getCenter ()
Get a copy of the box's center.
Returns
- a new vector that represents the box's center
See Also
public Vector3 getExtents ()
Calculate the extents (half the size) of the box.
Returns
- a new vector that represents the box's extents
public Quaternion getRotation ()
Get a copy of the box's rotation.
Returns
- a new quaternion that represents the box's rotation
See Also
public Vector3 getSize ()
Get a copy of the box's size.
Returns
- a new vector that represents the box's size
See Also
public void setCenter (Vector3 center)
Set the center of this box.
Parameters
See Also
public void setRotation (Quaternion rotation)
Set the rotation of this box. | https://developers.google.com/ar/reference/java/sceneform/reference/com/google/ar/sceneform/collision/Box?hl=ja | CC-MAIN-2019-47 | refinedweb | 168 | 55.74 |
Apache filter library.
More...
#include "apr.h"
#include "apr_buckets.h"
#include "httpd.h"
Go to the source code of this file.
This function type is used for filter callbacks. It will be passed a pointer to "this" filter, and a "bucket brigade" brigade always belongs to the caller, but the filter is free to use the buckets within it as it sees fit. Normally, the brigade will be returned empty. Buckets may not be retained between successive calls to the filter unless they have been "set aside" with a call apr_bucket_setaside. Typically this will be done with ap_save_brigade(). Buckets removed from the brigade become the responsibility of the filter, which must arrange for them to be deleted, either by doing so directly or by inserting them in a brigade which will subsequently be destroyed.
For the input and output filters, the return value of a filter should be an APR status value. For the init function, the return value should be an HTTP error code or OK if it was successful.
Apache filter library.
input filtering modes
The filter should return at most readbytes data.
The filter should return at most one line of CRLF data. (If a potential line is too long or no CRLF is found, the filter may return partial data).
The filter should implicitly eat any CRLF pairs that it sees.
The filter read should be treated as speculative and any returned data should be stored for later retrieval in another mode.
The filter read should be exhaustive and read until it can not read any more. Use this mode with extreme caution.
The filter should initialize the connection if needed, NNTP or FTP over SSL for example. | https://ci.apache.org/projects/httpd/trunk/doxygen/util__filter_8h.html | CC-MAIN-2019-04 | refinedweb | 282 | 74.9 |
All - O'Reilly Media (2017-05-22T23:29:36)

**How is emotion AI related to sentiment analysis for natural language processing?**

Social scientists who have studied how people portray emotions in conversation found that only 7-10% of the emotional meaning of a message is conveyed through the words. We can mine Twitter, for example, on text sentiment, but that only gets us so far. About 35-40% is conveyed in tone of voice—how you say something—and the remaining 50-60% is read through facial expressions and gestures you make. Technology that reads your emotional state, for example by combining facial and voice expressions, represents the emotion AI space. They are the subconscious, natural way we communicate emotion, which is nonverbal and which complements our language. What we say is also very cognitive—we have to think about what we are going to say. Facial expressions and speech actually deal more with the subconscious, and are more unbiased and unfiltered expressions of emotion.

**What techniques and training data do machines use to perceive emotion?**

At Affectiva, we use a variety of computer vision and machine learning approaches, including deep learning. Our technology, like many computer vision approaches, relies on machine learning techniques in which algorithms learn from examples (training data). Rather than encoding specific rules that depict when a person is making a specific expression, we instead focus our attention on building intelligent algorithms that can be trained to recognize expressions. …

**Where is emotion AI currently seeing the most market traction?**

… recent Pepsi ad. …

**As Affectiva has grown from a research project at MIT to a company, what has been most surprising to you?**

…

**Your Ph.D. is in computer science.**
Are there any specialized skills you needed to learn to enable computers to recognize human expressions?</strong></p> <p evaluate the performance of two major online fact-checkers, Politfact at Tampa Bay Times and Fact Checker at Washington Post, comparing their interrater reliability using a method that is regularly utilized across the social sciences. I show that fact-checkers rarely fact-check the same statement, and when they do, there is little agreement in their ratings. Approximately, 1 in 10 statements is fact-checked by both fact-checking outlets, and among claims that both outlets check, their factual ratings have a Cohen’s κ of 0.52, an agreement rate much lower than what is acceptable for social scientific coding. The results suggest that difficulties in fact-checking elites’ statements may limit the ability of journalistic fact-checking to hold politicians accountable.<> on the space in a thoughtful and highly accurate way. The implication of all of this is that this is big business. And it is. Pokemon GO was a flash-trend that shone a spotlight on how this can excite people. It also proved that AR can be used to make money. AR is poised to make a big leap in the next five years.<.</p> <p.</p> <h2>Modern data platforms</h2> <p.</p> <em>might</em> be doable, but how can you budget and plan when requirements are constantly evolving?</p> <h2>Enter embedded analytics</h2> <p.</p> <p.</p> <p.</p> <p>Let us consider a more elementary, but still puzzling, trade-off, that between addition and multiplication. How many multiplications does it take to evaluate the 3 X 3 determinant? If we write out the expansion as six trinomials, we need 12!</i> The interesting work currently being done in formal systems has a long heritage, but struggled for attention and interest in researchers for a long time.</li> <li> <a href="">Questions & Intuition for Tackling Deep Learning Problems</a> -- a great list. 
<i>Never mind a neural network; can a human with no prior knowledge, educated on nothing but a diet of your training data set, solve the problem? Is your network looking at your data through the right lens? Is your network learning the quirks in your training data set, or is it learning to solve the problem at hand? Does your network have siblings that can give it a leg-up (through pre-trained weights)? Is your network incapable or just lazy? If it’s the latter, how do you force it to learn?</i> </li> <li> <a href="">Computer Games that Make Assembly Language Fun</a> (IEEE Spectrum) -- <i>three polished games that do a surprisingly good job of making coding in assembly language fun. To be clear, none of these titles involve writing assembly for real hardware. They all use virtual systems with minimal instruction sets. Still, they do capture the essence of assembly coding, with complex behaviors squeezed out of simple commands.<</a> — Drawing inspiration from the original ImageNet project led by Fei-Fei Li, Curt Langlotz’s lab at Stanford University has been building a Medical ImageNet repository that "contains 0.5 petabyte of clinical radiology data, comprising 4.5 million studies, and over 1 billion images.” Work is still underway, but they expect to release these data sets soon.</li> <li> <a href="">Jensen Huang at GTC 2017</a> — NVIDIA’s GTC developer conference took place this week with founder and CEO Jensen Huang taking the stage to deliver the keynote. It took a full two hours, but Engadget has compiled a 13-minute highlight reel to fill you in. NVIDIA shareholders must be thrilled with Jensen’s announcements, as NVIDIA stock got a nice bump following his keynote.<</a>" — "Our ability to know the price of anything, anytime, anywhere, has given us, the consumers, so much power that retailers—in a desperate effort to regain the upper hand, or at least avoid extinction—are now staring back through the screen. They are comparison shopping us.” Ouch. 
Data and machine learning have empowered online sellers to master pricing elasticity and consumer dollar extraction.</li> </ol> <p>Continue reading <a href=''>Intelligent Bits: 12 May 2017.</a></p><img src="" height="1" width="1" alt=""/> Roger Chen Why self assessments improve learning 2017-05-12T10:00:00Z tag: <p><img src=''/></p><p><em>O’Reilly’s assessment tool puts the focus on the learner, not arbitrary scores.</em></p><p>We just don’t get it. Looking across the industry, most services promoted as “assessments” are basically memory tests or trivial click-to-own certifications. In one case, we discovered a site where 10 clicks through a video path was all that it took to get awarded a certificate, obtained in less than 10 seconds. Moreover, too much emphasis gets placed on scores provided to some third party, not on feedback to professionals so they can improve their learning experiences. We decided learners deserve better and we can deliver better.<>Summative assessment measures how much students have learned up to a particular point in time, generally to meet some standard. Examples include final exams in university courses, or professional certifications. While those serve important needs in testing, their results are intended for someone other than the person taking the exam. Think: grades.</p> <p>In contrast, formative assessment gives feedback during testing. It’s considered part of the learning experience, and need not be graded. Questions are constructed such that if you understand the material, the answers are quick. On the other hand, if you’re struggling with a subject, you’ll need to spend much more time working through the questions.<>Part of our job at O’Reilly is to help learners gauge how well they understand important topics. Individuals, teams, and organizations need that feedback. In one view, we’d simply create comprehensive exams for each of the most important subjects, require people to take those exams, then provide grades. 
Even if that kind of linear, scholastic world ever existed, it’s long gone.</p> <p>There's a problem with how some traditional approaches in education tend to view the world. Aggregate raw scores from employees taking exams won't help your organization succeed. Instead, you need to know how much your employees are struggling with information overload, huge learning curves, typical misconceptions, industry “fear, uncertainty, and doubt” (<a href="">FUD</a>), etc.</p> <p>Also, keep in mind that the learning materials needed by people in industry span a range of important subject areas: design thinking, software architecture, data science, systems engineering, programming languages, security, machine learning, product management, leadership, and more. Most of these areas require expertise with increasingly complex technology stacks, often organized in several layers.</p> <p>Here’s an example: Popular open source projects tend to push new releases frequently—every few months. So the challenge of maintaining “hands on” familiarity becomes more complex. Meanwhile, vendors have incentives to spread good claims about their products, and not-so-good claims about their competition. Developers, product managers, and execs alike get bombarded with FUD. The presumption that any individual can keep pace with all the changing layers and components in a given tech stack becomes nearly absurd. That’s why these mythical people are called “unicorns.” Pulling those factors together, how do you develop a process of creating a comprehensive exam, reduced to a numeric score, that can measure someone’s actual understanding of a complex technology stack? Frankly, it’s an imaginary thing.</p> <p>Instead, O’Reilly approaches learning and feedback at a human scale. 
We’ve been moving away from large books and lengthy videos, breaking out of those arbitrary containers to provide much more readily accessible learning experiences.</p> <p>Likewise, O’Reilly has a new approach to “what” we measure through assessment, and “how” we measure that, addressing the challenges mentioned above. Self assessments go through a rigorous process with the goal of distinguishing between those people who have invested time to learn a technology and those who are “phoning it in” or perhaps have been confused by industry FUD. The result is a feedback mechanism that helps learners learn.<>JavaScript development is complicated because we keep raising the bar. A few years ago, few people even minified or tested their code. We slapped a script tag on the page and started writing code in the global namespace. But applications have grown increasingly complex and JavaScript has grown up. So today, we're expected to bundle to save HTTP requests, to minify to save bandwidth, to transpile in order to provide cross-browser support, and much more.<.</i> </li> <li> <a href="">Learning to Fly by Crashing</a> (PDF) -- <i>Data executives might spend most of their time on technical and vendor management, but their work ultimately comes down to the task of building an effective data culture. Reorienting a company around data-driven decision-making takes more than just software tools; it also involves training your employees to understand data essentials, establishing processes that safeguard data and clarify its ownership, working with line-of-business managers to set expectations and goals, and generally striking the right balance between risk-taking and caution.<>, to introduce executives to the value of data and to the essential steps that managers need to take in order to exploit that value. 
In this excerpt from their report, Mason and Patil describe the importance of democratizing data—making sure that employees who might need data have access to it, and making sure they have the resources to interpret</a>),.< supports collaborative program editing similar to how Google Docs work.<.</p> <p.<>. An array of "opinionated stacks of ready-to-run Jupyter applications in Docker." Basically, it's a collection of layered Dockerfiles that progress from a minimal notebook through a basic setup for data science all the way to a full PySpark setup. If you're sick of struggling with trying to configure your machine—or worse, someone else's machine—this is an absolutely invaluable resource.< Intelligent Bits: 05 May 2017 2017-05-05T10:30:00Z tag: <p><img src=''/></p><p><em>Caffe2, deep learning best practices, intelligent design and wizard hats</em></p><ol> <li>Get your fix — <a href="">Facebook open sources Caffe2</a>, building off the original Caffe framework and oriented towards large-scale models and mobile.</li> <li>Deep learning primer — Wondering what deep learning can do for you, but not sure were to start? Leslie Smith shares "<a href="">Best Practices for Applying Deep Learning to Novel Applications</a>" to help.</li> <li>Intelligent design interfaces — Patrick Hebron on "<a href="">Rethinking Design Tools in the Age of Machine Learning</a>"</li> <li>“Wizard hat for the brain” — Tim Urban dives deep into Elon Musk’s master plan for saving humanity from AI...<a href="">by plugging in our brains</a>.</li> </ol> <p>Continue reading <a href=''>Intelligent Bits: 05 May 2017.</a></p><img src="" height="1" width="1" alt=""/> Roger Chen | http://feeds.feedburner.com/oreilly/radar/atom?m=539 | CC-MAIN-2017-22 | refinedweb | 2,156 | 54.12 |
import itertools
Start with an iterable which needs to be grouped:
lst = [("a", 5, 6), ("b", 2, 4), ("a", 2, 5), ("c", 2, 6)]
Generate the grouped generator, grouping by the second element in each tuple:
def testGroupBy(lst):
    groups = itertools.groupby(lst, key=lambda x: x[1])
    for key, group in groups:
        print(key, list(group))

testGroupBy(lst)
# 5 [('a', 5, 6)]
# 2 [('b', 2, 4), ('a', 2, 5), ('c', 2, 6)]
Only consecutive elements are grouped together, so you may need to sort by the same key before calling groupby. For example (the last element is changed):
lst = [("a", 5, 6), ("b", 2, 4), ("a", 2, 5), ("c", 5, 6)]
testGroupBy(lst)
# 5 [('a', 5, 6)]
# 2 [('b', 2, 4), ('a', 2, 5)]
# 5 [('c', 5, 6)]
The group returned by groupby is an iterator that becomes invalid before the next iteration. E.g., the following will not work if you want the groups sorted by key. Group 5 is empty below because, by the time group 2 is fetched, group 5's iterator has already been invalidated:
lst = [("a", 5, 6), ("b", 2, 4), ("a", 2, 5), ("c", 2, 6)]
groups = itertools.groupby(lst, key=lambda x: x[1])
for key, group in sorted(groups):
    print(key, list(group))
# 2 [('c', 2, 6)]
# 5 []
To sort correctly, create a list from each group before sorting:
groups = itertools.groupby(lst, key=lambda x: x[1])
for key, group in sorted((key, list(group)) for key, group in groups):
    print(key, list(group))
# 2 [('b', 2, 4), ('a', 2, 5), ('c', 2, 6)]
# 5 [('a', 5, 6)]
Itertools "islice" allows you to slice a generator:
results = fetch_paged_results()  # returns a generator
limit = 20  # Only want the first 20 results
for data in itertools.islice(results, limit):
    print(data)
Normally you cannot slice a generator:
def gen():
    n = 0
    while n < 20:
        n += 1
        yield n

for part in gen()[:3]:
    print(part)
Will give
Traceback (most recent call last):
  File "gen.py", line 6, in <module>
    for part in gen()[:3]:
TypeError: 'generator' object is not subscriptable
However, this works:
import itertools

def gen():
    n = 0
    while n < 20:
        n += 1
        yield n

for part in itertools.islice(gen(), 3):
    print(part)
Note that, like a regular slice, you can also use start, stop and step arguments:
itertools.islice(iterable, 1, 30, 3)
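For instance, applied to an infinite counter (a quick sketch, not part of the original examples):

```python
import itertools

# Take every third value of count(), starting at index 1,
# stopping before index 30
print(list(itertools.islice(itertools.count(), 1, 30, 3)))
# [1, 4, 7, 10, 13, 16, 19, 22, 25, 28]
```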
This function lets you iterate over the Cartesian product of a list of iterables.
For example,
for x, y in itertools.product(xrange(10), xrange(10)):
    print x, y
is equivalent to
for x in xrange(10):
    for y in xrange(10):
        print x, y
Like all python functions that accept a variable number of arguments, we can pass a list to itertools.product for unpacking, with the * operator.
Thus,
its = [xrange(10)] * 2
for x, y in itertools.product(*its):
    print x, y
produces the same results as both of the previous examples.
>>> from itertools import product >>> a=[1,2,3,4] >>> b=['a','b','c'] >>> product(a,b) <itertools.product object at 0x0000000002712F78> >>> for i in product(a,b): ... print i ... (1, 'a') (1, 'b') (1, 'c') (2, 'a') (2, 'b') (2, 'c') (3, 'a') (3, 'b') (3, 'c') (4, 'a') (4, 'b') (4, 'c')
Introduction:
This simple function generates an infinite series of numbers. For example...
for number in itertools.count():
    if number > 20:
        break
    print(number)
Note that we must break or it prints forever!
Output:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
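If you only want the first few values, a common alternative to breaking manually is to bound the infinite iterator with islice (a sketch, not from the original page):

```python
import itertools

# islice stops the otherwise-infinite iterator after 5 items
for number in itertools.islice(itertools.count(), 5):
    print(number)
# 0 1 2 3 4
```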
Arguments:
count() takes two arguments, start and step:
for number in itertools.count(start=10, step=4):
    print(number)
    if number > 20:
        break
Output:
10 14 18 22
itertools.takewhile enables you to take items from a sequence until a condition first becomes False.
def is_even(x):
    return x % 2 == 0

lst = [0, 2, 4, 12, 18, 13, 14, 22, 23, 44]
result = list(itertools.takewhile(is_even, lst))
print(result)
This outputs [0, 2, 4, 12, 18].
Note that the first number that violates the predicate (i.e., the function returning a Boolean value) is_even is 13. Once takewhile encounters a value that produces False for the given predicate, it breaks out.
The output produced by takewhile is similar to the output generated from the code below.
def takewhile(predicate, iterable):
    for x in iterable:
        if predicate(x):
            yield x
        else:
            break
Note: The concatenation of results produced by takewhile and dropwhile produces the original iterable.
result = list(itertools.takewhile(is_even, lst)) + list(itertools.dropwhile(is_even, lst))
itertools.dropwhile enables you to take items from a sequence after a condition first becomes False.
def is_even(x):
    return x % 2 == 0

lst = [0, 2, 4, 12, 18, 13, 14, 22, 23, 44]
result = list(itertools.dropwhile(is_even, lst))
print(result)
This outputs [13, 14, 22, 23, 44].
(This example is the same as the example for takewhile, but using dropwhile.)
Note that the first number that violates the predicate (i.e., the function returning a Boolean value) is_even is 13. All the elements before that are discarded.
The output produced by dropwhile is similar to the output generated from the code below.
def dropwhile(predicate, iterable):
    iterable = iter(iterable)
    for x in iterable:
        if not predicate(x):
            yield x
            break
    for x in iterable:
        yield x
The concatenation of results produced by takewhile and dropwhile produces the original iterable.
result = list(itertools.takewhile(is_even, lst)) + list(itertools.dropwhile(is_even, lst))
Similar to the built-in function zip(), itertools.zip_longest will continue iterating beyond the end of the shorter of two iterables.
from itertools import zip_longest

a = [i for i in range(5)]  # Length is 5
b = ['a', 'b', 'c', 'd', 'e', 'f', 'g']  # Length is 7

for i in zip_longest(a, b):
    x, y = i  # Note that zip_longest returns the values as a tuple
    print(x, y)
An optional fillvalue argument can be passed (it defaults to None) like so:
for i in zip_longest(a, b, fillvalue='Hogwash!'):
    x, y = i  # Note that zip_longest returns the values as a tuple
    print(x, y)
In Python 2.6 and 2.7, this function is called itertools.izip_longest.
Use itertools.chain to create a single generator which will yield the values from several generators in sequence.
from itertools import chain

a = (x for x in ['1', '2', '3', '4'])
b = (x for x in ['x', 'y', 'z'])

' '.join(chain(a, b))
Results in:
'1 2 3 4 x y z'
As an alternate constructor, you can use the classmethod chain.from_iterable, which takes as its single parameter an iterable of iterables. To get the same result as above:
' '.join(chain.from_iterable([a, b]))
While chain can take an arbitrary number of arguments, chain.from_iterable is the only way to chain an infinite number of iterables.
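For example (a sketch, not from the original page), from_iterable can lazily flatten an infinite stream of iterables, which chain(*iterables) could never do, because unpacking the arguments would never finish:

```python
import itertools

# An infinite iterable of iterables: range(1), range(2), range(3), ...
ranges = (range(n) for n in itertools.count(1))

flat = itertools.chain.from_iterable(ranges)
print(list(itertools.islice(flat, 10)))
# [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
```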
Repeat something n times:
>>> import itertools >>> for i in itertools.repeat('over-and-over', 3): ... print(i) over-and-over over-and-over over-and-over
accumulate yields a cumulative sum (or product) of numbers.
>>> import itertools as it >>> import operator >>> list(it.accumulate([1,2,3,4,5])) [1, 3, 6, 10, 15] >>> list(it.accumulate([1,2,3,4,5], func=operator.mul)) [1, 2, 6, 24, 120]
cycle is an infinite iterator.
>>> import itertools as it >>> it.cycle('ABCD') A B C D A B C D A B C D ...
Therefore, take care to give boundaries when using this to avoid an infinite loop. Example:
>>> # Iterate over each element in cycle for a fixed range >>> cycle_iterator = it.cycle('abc123') >>> [next(cycle_iterator) for i in range(0, 10)] ['a', 'b', 'c', '1', '2', '3', 'a', 'b', 'c', '1']
itertools.permutations returns a generator with successive r-length permutations of elements in the iterable.
a = [1, 2, 3]
list(itertools.permutations(a))
# [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]

list(itertools.permutations(a, 2))
# [(1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2)]
If the list a has duplicate elements, the resulting permutations will have duplicate elements; you can use set to get unique permutations:
a = [1, 2, 1]
list(itertools.permutations(a))
# [(1, 2, 1), (1, 1, 2), (2, 1, 1), (2, 1, 1), (1, 1, 2), (1, 2, 1)]

set(itertools.permutations(a))
# {(1, 1, 2), (1, 2, 1), (2, 1, 1)}
A C# program consists of the following parts:
- Namespace declaration
- A class
- Class methods
- Class attributes
- A Main method
- Statements and Expressions
Let us look at the simple code below:
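The listing itself is missing from the page, so here it is reconstructed from the points described below (the namespace and class names are exactly as the text states):

```csharp
using System;

namespace HelloWorldApplication
{
    class HelloWorld
    {
        static void Main(string[] args)
        {
            /* my first program in C# */
            Console.WriteLine("Hello World");
            Console.ReadKey();
        }
    }
}
```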
· The first line of the program using System; - the using keyword is used to include the System namespace in the program. A program generally has multiple using statements.
· The next line has the namespace declaration. A namespace is a collection of classes. The HelloWorldApplication namespace contains the class HelloWorld.
· The next line has a class declaration, the class HelloWorld contains the data and method definitions that your program uses. Classes generally contain multiple methods. Methods define the behavior of the class. However, the HelloWorld class has only one method Main.
· The next line defines the Main method, which is the entry point for all C# programs. The Main method states what the class does when executed.
· The next line /*...*/ is ignored by the compiler and it is put to add comments in the program.
· The Main method specifies its behavior with the statement Console.WriteLine("Hello World");
WriteLine is a method of the Console class defined in the System namespace. This statement causes the message "Hello World" to be displayed on the screen.
· The last line Console.ReadKey(); is for the VS.NET users. It makes the program wait for a key press, which prevents the console window from closing immediately when the program is launched from Visual Studio .NET.
It is worth noting the following points:
· C# is case sensitive.
· All statements and expressions must end with a semicolon (;).
· The program execution starts at the Main method.
· Unlike Java, the program file name can be different from the class name.
Hi everyone, I have a question about how to use a class, so I made this script:
>>> class className:
    def createName(self,name):
        self.name=name
    def displayName(self):
        return self.name
    def saying(self):
        print 'Hello %s' % self.name

>>> first=className
>>> first.createName('X')
I learn this from a tutorial but i have an error:
Traceback (most recent call last):
File "<pyshell#11>", line 1, in <module>
first.createName('X')
TypeError: unbound method createName() must be called with className instance as first argument (got str instance instead)
The tutorial I watched is using 2.6 and I'm using 2.7. Also, what is the difference between return and print in a def, and why do I need to use (self,name) instead of only (self)?

Thank you very much
Continuing my series on lesser-known features of Modern C++.

This feature is a small and minor one, so this article is short, but it is an important feature for sure.
From C++17 you can declare variables in if and switch statements, just like in for loops. As those variables are scoped to the statement, you can't access them outside the if or switch block.
If Statement
#include <iostream>

int main() {
    if (int i = 0; i > 0) {
        std::cout << "i is positive" << std::endl;
    } else if (i < 0) {
        std::cout << "i is negative" << std::endl;
    } else {
        std::cout << "i is zero" << std::endl;
    }
    // std::cout << i << std::endl; // i will not be available here
}
Switch Statement
#include <iostream>

int main() {
    switch (int i = 0) {
        case 0:
            std::cout << "i is zero";
            break;
        case 1:
            std::cout << "i is one";
            break;
        default:
            std::cout << "invalid";
            break;
    }
    // std::cout << i << std::endl; // i will not be available here
}
If we uncomment the last line in either of the examples, an error will be raised.
main.cpp: In function ‘int main()’:
main.cpp:26:17: error: ‘i’ was not declared in this scope
     std::cout << i << std::endl; // i will not be available here
This feature is useful when you need to declare a variable only for the duration of a check or a minor comparison.
Posted on Jun 19 by:
Swastik Baranwal
A noob
Discussion
Now this is cool.
Thanks! More coming soon :)
Also my Code for my Articles series is now at github.com/Delta456/modern_cpp_series | https://dev.to/delta456/modern-c-temp-vars-in-if-switch-ig0 | CC-MAIN-2020-29 | refinedweb | 286 | 62.01 |
tmpnam man page
tmpnam, tmpnam_r — create a name for a temporary file
Synopsis
#include <stdio.h> char *tmpnam(char *s); char *tmpnam_r(char *s);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
tmpnam_r()
- Since glibc 2.19:
_DEFAULT_SOURCE
- Up to and including glibc 2.19:
_BSD_SOURCE || _SVID_SOURCE
Description
Note: avoid using these functions; use mkstemp(3) or tmpfile(3) instead.
Return Value
These functions return a pointer to a unique temporary filename, or NULL if a unique name cannot be generated.
Errors
No errors are defined.
Attributes
For an explanation of the terms used in this section, see attributes(7).
Conforming to
tmpnam(): SVr4, 4.3BSD, C89, C99, POSIX.1-2001. POSIX.1-2008 marks tmpnam() as obsolete.
tmpnam_r() is a nonstandard extension that is also available on a few other systems.
Notes
Bugs
Never use these functions. Use mkstemp(3) or tmpfile(3) instead.
See Also
mkstemp(3), mktemp(3), tempnam(3), tmpfile(3)
Colophon
This page is part of release 4.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
explain(1), explain(3), explain_tmpnam(3), explain_tmpnam_or_die(3), getpid(2), mkdtemp(3), mkstemp(3), mktemp(3), tempnam(3), tmpfile(3).
The man page tmpnam_r(3) is an alias of tmpnam(3). | https://www.mankier.com/3/tmpnam | CC-MAIN-2017-47 | refinedweb | 224 | 60.72 |
I get the physical addresses of the function "printf" in libc.so in two programs, and the two physical addresses are different. And when I read the two different physical addresses, the contents are almost the same. Does this mean the function "printf" has two copies in memory?
I get physical addresses of function "printf" in libc.so in two programs, and two physical addresses are different.
You are probably doing it wrong (but it's hard to guess: you didn't provide any details).
In particular, note that the following program:
#include <stdio.h>

int main() {
    printf("&printf: %p\n", &printf);
    return 0;
}
does not print the actual address of printf in libc.so.6, as can be observed with GDB:
(gdb) start
Temporary breakpoint 1 at 0x8048426: file pagemap.c, line 5.
Starting program: /tmp/pagemap

Temporary breakpoint 1, main () at pagemap.c:5
5         printf("&printf: %p\n", &printf);
(gdb) n
&printf: 0x80482f0
6         return 0;
(gdb) info symbol 0x80482f0
printf@plt in section .plt of /tmp/pagemap
(gdb) p &printf
$1 = (<text variable, no debug info> *) 0xf7e5add0 <printf>
(gdb) info sym 0xf7e5add0
printf in section .text of /lib32/libc.so.6
Note that printf @0x80482f0 is in the main executable, and is not supposed to be shared (except between multiple instances of the same executable running at the same time), and is not where the code for printf actually resides.

The printf @0xf7e5add0 is in libc.so.6, and that is where the code for printf actually is. That page is supposed to be shared by all processes using libc.so.6.
P.S. To get the actual address of printf in libc.so.6, one may use this program instead:
#include <stdio.h>
#include <dlfcn.h>

int main() {
    printf("&printf: %p\n", &printf);
    printf("&printf: %p\n", dlsym(RTLD_NEXT, "printf"));
    return 0;
}

gcc -g pagemap.c -o pagemap -m32 -ldl -D_GNU_SOURCE

(gdb) run
Starting program: /tmp/pagemap
&printf: 0x80483c0
&printf: 0xf7e55dd0
(gdb) info sym 0xf7e55dd0
printf in section .text of /lib32/libc.so.6
I left it to the professionals. (Except that, technically, I am one of those professionals.)
And then, a few years into my involvement with the Python community, I started teaching again (I was a primary school teacher long before I entered the programming world).
Then I gave my first talk at PyCon. And nothing bad happened.
I repeat: Nothing bad happened.
I've still only done one full-fledged conference talk, plus a handful of lightning talks, and I'm about to give my second conference talk at DjangoCon this fall.
So take my advice with a handful of salt. I'm still not an expert speaker. But I no longer fear the possibility. I'm not afraid to submit a proposal if I have a good idea, and I'm actually looking forward to being onstage again.
There's nothing wrong with being afraid of public speaking. Appearing before a group of people of any size triggers our fight-or-flight response, and that's natural, normal, and everyone - even the most seasoned speakers - experiences it.
A part of our human brains has evolved over millions of years to respond to being confronted with predators. Our adrenal glands kick into high gear and our heart rate increases, leaving us prepared to fight, or flee if we need to.
That same brain just hasn't realized yet that you'll be standing up in front of a bunch of supportive fellow humans, not a pack of hyenas.
Conference speaking has some obvious career benefits - it increases your visibility and helps with networking. When you speak about a topic you know, people with common interests are going to want to get to know you.
But public speaking also gives you a sense of accomplishment. You've faced a challenge that lots of people never attempt. You've conquered a fear. And that can boost your confidence in ways you never imagined.
So maybe some of the advice I have here will help you, whether you're a first-time speaker or just contemplating becoming one.
A slide deck with notes is great, but I also love using hand-written notes on index cards. I glance at them occasionally, enough to keep me on track but not so much that they become a distraction. And it feels good to have something in my hands.
If you have glasses, wear them - they create a small psychological barrier between you and the audience, without actually being a barrier between you and the audience. (And if it helps you to keep some imaginary distance, remember that it will probably be difficult to see everyone out there anyway.)
Want to keep a teddy bear with you on stage? Squeeze a stress toy (as long as it doesn't make noise)? Whatever works for you and makes you feel safe. You do you - no one's going to care. In fact, they're all going to be rooting for you.
It's true that some people can submit a proposal on almost any random thing, then immerse themselves in their topic as they prepare the talk. And maybe someday you and I will be able to do that too. But I wouldn't recommend it until you've got a few speaking experiences under your belt.
How do you know if you know your subject well enough? If you've been teaching it to other people. If you've been having discussions about it with friends. If you get excited when people ask for your advice about it, or you find yourself answering questions about it in everyday conversation.
You don't necessarily need to take questions when you give a conference talk, but if you feel like you could, then you've probably got the confidence you need to give the talk in the first place.
If you're feeling nervous, your first conference shouldn't also be your first public speaking experience.
Participate in a few community events. Get involved with your local user group. (A user group can be a great place to practice a first talk, by the way, and most of them are clamoring for new speakers!)
Get to know people. When you eventually give a talk, it will feel less terrifying, more like you're just hanging out and telling a story to a group of friends.
In spite of all your preparation, you'll still feel a few moments of anxiety when you hit that stage. Everyone does. Just take a few deep breaths - count them, in fact. Your brain's perception of the threat level will drop, and your body will respond by relaxing and calming.
And keep in mind my two criteria for a successful talk:
Anything after that is just icing.

On Saturday, we presented the class in French (thanks to two talented Montreal Pythonistas, Davin Baragiotta and David Cormier). And on Sunday, we presented the class in English, with some teaching help from Naomi Ceder and Richard Jones.
To recap, the classroom workstations consist of Raspberry Pis and the usual peripherals (keyboard, mouse, monitor). We don't use the ethernet connection - none of our beginner material requires internet connectivity, and we'd risk losing the students to Facebook and email anyway. The image we use includes Python 2/3 and comes with Idle installed - that's really all we need, since just about everything we cover is at the interpreter level.
The curriculum hasn't changed much over the years. We start with a quick chat about what computers do and what programming means, then spend the morning covering the most basic of Python basics: strings, math, Booleans, variables, lists, some functions and logic, even a discussion of basic error interpretation.
After a lunch break, the rest of the class time is devoted to editing and playing games (using Al Sweigart's pygame library, which also comes installed on the Raspian/NOOB image).
This year we had a great last-minute addition to the program. On Friday morning, I saw Kurt Grandis' terrific talk - Exploring Minecraft and Python: Learning to Code Through Play. I was sitting with Richard Jones, musing about how cool it would be to integrate Minecraft into the Young Coders curriculum for 2016. I hadn't given it much thought before, so I didn't realize how close we were. And then Kurt started talking about this mcpi library that already exists for interacting with Minecraft on the Raspberry Pi.
That afternoon we went to the classroom to start working on the RPi setup for the Saturday class and discovered that we were working with a more recent version of the Raspian image. As of September 2014, Raspian has the mcpi library installed. We spent a good few hours that evening playing around with Minecraft on the Rpis, then were able to make it a part of both classes that weekend. Needless to say, it was a big hit.
But for all the fun we created in the classroom that weekend (it's so great seeing kids excited about learning!), I was reminded of limitations. This year, we capped enrollment at 40 per class, and 40 was just too many. With that many students, it becomes hard to keep everyone moving at the same pace. Adding more TAs or teachers to the mix doesn't really help, it just causes the levels of noise and chaos in the room to go up and makes it very hard to focus on the material.
This year, we also had a fair number of students who were too experienced for the beginner material. I always hate seeing that - I know those kids are going to be bored through about 80% of what we talk about. Unfortunately, up to now a beginner class is all we've been offering. I should have singled those kids out early on and asked them to transition to being TAs for the duration, helping their fellow students, but there was just too much going on with the class that large. Remember this when you plan your own Young Coders event. (You are planning one, right?)
So with all that in mind, I'm working on developing an intermediate curriculum for PyCon 2016.
Aside from writing the curriculum itself, there are going to be some challenges involved. I would like to continue to use the Rapberry Pis, so that setup is familiar and our budget doesn't change, and so that students are working on a common operating system and can take their workstations with them at the end of the day.
So far I've got the frameworks laid out for a couple of game and web projects. I'm currently reviewing learning tools that will make it easy to build these projects in a classroom setting, while still giving students a taste of what it's like to write and publish your own Python code in a real production environment.
PyCon 2016 is still over a year away, so there's a good chance the time will get away from me and I'll finish up the intermediate program a few weeks before heading to Portland. (I finished up materials for the first Young Coders class the week before, and reviewed it with Katie over dinner the night before we went into the classroom for the first time.)
But I'll do my best to get ahead of the planning and share what I'm working on over the next few months.
See you in Portland in 2016!Tweet, so you might find this useful if you're in the same boat.
Let's start with the methods needed to add some custom content to a <content:encoded> element.
The Django feed library comes with a set of standard elements for which you must define the content: <title>, <link>, and <description> for the feed, and then of course <title>, <link>, and <description> for individual feed items.
In our use case, we have a feed containing a list of news stories. We're already sending a truncated version of each story's content to <description>, but we want to add an additional field - <content:encoded> - to return the story's full content.
To add an additional element (or two or three), there are a few places you'll need to update - two (possibly three) standard feed methods and whatever custom method(s) you need to populate the new elements.
In this code sample, follow the trail from item_extra_kwargs() to item_your_custom_field() to add_item_elements().
from django.contrib.syndication.views import Feed from django.utils.feedgenerator import Rss201rev2Feed class ExtendedRSSFeed(Rss201rev2Feed): """ Create a type of RSS feed that has content:encoded elements. """ def root_attributes(self): attrs = super(ExtendedRSSFeed, self).root_attributes() # Because I'm adding a <content:encoded> field, I first need to declare # the content namespace. For more information on how this works, check # out: attrs['xmlns:content'] = '' return attrs def add_item_elements(self, handler, item): super(ExtendedRSSFeed, self).add_item_elements(handler, item) # 'content_encoded' is added to the item below, in item_extra_kwargs() # It's populated in item_your_custom_field(). Here we're creating # the <content:encoded> element and adding it to our feed xml if item['content_encoded'] is not None: handler.addQuickElement(u'content_encoded', item['content_encoded']) ... class YourFeed(Feed): feed_type = ExtendedRSSFeed .... def item_extra_kwargs(self, item): # This is probably the first place you'll add a reference to the new # content. Start by superclassing the method, then append your # extra field and call the method you'll use to populate it. extra = super(YourFeed, self).item_extra_kwargs(item) extra.update({'content_encoded': self.item_your_custom_field(item)}) return extra def item_your_custom_field(self, item): # This is your custom method for populating the field. # Name it whatever you want, so long as it matches what # you're calling from item_extra_kwargs(). # What you do here is entirely dependent on what your # system looks like. I'm using a simple queryset example, # but this is not to be taken literally. obj_id = item['my_item_id'] query_obj = MyStoryModel.objects.get(pk=obj_id) full_text = query_obj['full_story_content'] return full_text
This generates a feed that looks something like this:
My actual use case called for me to extend from a feed that already existed, leaving that original feed intact and only including the new element in the new feed. Here's how you'd do that:
class YourFeed(Feed): feed_type = ExtendedRSSFeed .... def item_extra_kwargs(self, item): extra = super(YourFeed, self).item_extra_kwargs(item) extra.update({'content_encoded': self.item_your_custom_field(item)}) return extra def item_your_custom_field(self, item): return None class YourNewFeed(YourFeed): def item_your_custom_field(self, item): ... return full_text
So in the original feed, 'content_encoded' comes back as None and <content:encoded> never appears. It is only generated for the new feed.
The customer requesting this new feed actually asked for html wrapped in a CDATA section. I never did figure out how to do that with the Django syndicator alone - the CDATA tag always came out encoded, there didn't seem to be any way around that. And every blog post I found lead me back to this old bug ticket - - which suggests just ditching the CDATA section and letting Django handle the encoding. I tried that, but it didn't pass the W3C feed validator - more on that later.
One of the things we tried along the way was a custom template for the CDATA content. That didn't work for creating a CDATA section, as ultimately there was no way to prevent the tag from being encoded. But I didn't find many clear posts about how to do this so I thought I'd share an outline of the attempt here:
from django.template import loader, Context, TemplateDoesNotExist ... def item_extra_kwargs(self, item): extra = super(ListDetailRSS, self).item_extra_kwargs(item) # Define a template - give it any name, the one below is just an example. # The path will obviously depend on your settings. content_encoded_template = 'feeds/list_detail_content_encoded.html' try: # Use the Django template loader to get the template content_encoded_tmp = loader.get_template(content_encoded_template) # Set the field value as template context content_encoded = content_encoded_tmp.render( Context({'myobj': self.item_your_custom_field(item)})) # Then update your extra kwargs with the rendered template # instead of the original value returned from your custom method extra.update({'content_encoded': content_encoded}) except TemplateDoesNotExist: # And if you don't have a template, just use the content as # returned from your custom method extra.update({'content_encoded': self.item_your_custom_field(item)}) return extra
Your template can be as simple as this:
{{ myobj }}
This can be useful if you want to customize your value by wrapping some text around it or maybe apply template filters before it's rendered.
Our new <content:encoded> element is supposed to have a few other fields inside it. What I'm showing you here ultimately didn't work for us (see the encoding section below), but I did learn a thing or two about how to wrap elements inside other elements in ways that aren't covered in the Django documentation (I should get on adding that, right?).
def add_item_elements(self, handler, item): super(ExtendedRSSFeed, self).add_item_elements(handler, item) if item['content_encoded'] is not None: # <content:encoded> is going to wrap around some other elements, # so instead of using handler.addQuickElement() we're going to # use startElement() (and then end it later) handler.startElement(u"content:encoded", {}) # handler.characters() fills in content between the tags, e.g.: # <content:encoded>This is where the content goes.</content:encoded> handler.characters(item['content_encoded']) # And close the element, ba-bam. handler.endElement(u"content:encoded")
If you wanted to apply attributes to the element itself, that empty dict you set at startElement() would look like this instead:
if item['content_encoded'] is not None: handler.startElement(u"content:encoded", {'my-attribute': 'my-value'})
And here's the wrapping around other elements bit:
if item['content_encoded'] is not None: handler.startElement(u"content:encoded", {}) handler.characters(item['content_encoded']) # Suppose we have a photo to go along with this story if item['media'] is not None: handler.startElement(u'figure', {'type': 'image/jpeg'}) handler.startElement(u'image', { 'src': item['media']['src'], 'caption': item['media']['caption'] }) handler.endElement(u'image') handler.endElement(u'figure') handler.endElement(u"content:encoded")
Back to that old bug ticket. We ultimately decided to follow the sage advice to forget about CDATA, even though the suggested code didn't work exactly as described (whether that's because of our old version of Django, or our version customizations, I don't know, but I never had time to research it).
Instead, we had to ... double encode? Or rather, escape, then let Django encoding do its thing.
After all that work to wrap elements one inside the other, our feed still wasn't validating. So instead of creating them as elements, we just converted the tags to strings:
if item['content_encoded'] is not None: handler.startElement(u"content:encoded", {}) handler.characters(item['content_encoded']) if item['media'] is not None: figure = '<figure type="image/jpeg">' figure += '<image src="%s" caption="%"></image>' % \ (item['media']['src'], item['media']['caption']) figure += '</image></figure>' # Don't forget to stick that string in the middle of # the <content:encoded> element: handler.characters(figure) handler.endElement(u"content:encoded")
Ugly, yes, but it almost worked. At least it failed in a different way.
After some trial and error, I found that ultimately I had to do some xml-specific escaping. I wound up using a method out of SAX utilities, applied it to the story content as it was being returned from my custom method, and also to the string for that <figure> tag inside <content:encoded>.
from xml.sax.saxutils import escape ... def add_item_elements(self, handler, item): ... if item['content_encoded'] is not None: handler.startElement(u"content:encoded", {}) handler.characters(item['content_encoded']) # Suppose we have a photo to go along with this story if item['media'] is not None: figure = '<figure type="image/jpeg">' ... handler.characters(escape(figure)) handler.endElement(u"content:encoded") ... class YourNewFeed(YourFeed): def item_your_custom_field(self, item): ... return escape(full_text)
What that returns looks slightly uglier. But guess what? It validates.
from xml.sax.saxutils import escape from django.contrib.syndication.views import Feed from django.utils.feedgenerator import Rss201rev2Feed class ExtendedRSSFeed(Rss201rev2Feed): def root_attributes(self): attrs = super(ExtendedRSSFeed, self).root_attributes() attrs['xmlns:content'] = '' return attrs def add_item_elements(self, handler, item): super(ExtendedRSSFeed, self).add_item_elements(handler, item) if item['content_encoded'] is not None: handler.startElement(u"content:encoded", {}) handler.characters(item['content_encoded']) if item['media'] is not None: figure = '<figure type="image/jpeg">' figure += '<image src="%s" caption="%"></image>' % \ (item['media']['src'], item['media']['caption']) figure += '</image></figure>' handler.characters(escape(figure)) handler.endElement(u"content:encoded") ... class YourFeed(Feed): feed_type = ExtendedRSSFeed .... def item_extra_kwargs(self, item): extra = super(YourFeed, self).item_extra_kwargs(item) extra.update({'content_encoded': self.item_your_custom_field(item)}) extra.update({'media': self.item_your_custom_media_field(item)}) return extra def item_your_custom_field(self, item): return None def item_your_custom_media_field(self, item): return None class YourNewFeed(YourFeed): def item_your_custom_field(self, item): obj_id = item['my_item_id'] query_obj = MyStoryModel.objects.get(pk=obj_id) full_text = query_obj['full_story_content'] return escape(full_text) def item_your_custom_media_field(self, item): obj_id = item['my_item_id'] query_obj = MyStoryModel.objects.get(pk=obj_id) photo = query_obj['photo']['url'] caption = query_obj['photo']['caption'] return {'src': photo, 'caption': caption}
I, ... | http://www.mechanicalgirl.com/mobile/ | CC-MAIN-2015-40 | refinedweb | 3,165 | 55.24 |
Components and supplies
Necessary tools and machines
Apps and online services
About this project
I work at a golf store where we install golf grips. Putter grips, in particular, have a flat flange that needs to be correctly aligned with the putter face. Eyeballing the installation isn't the easiest thing to do and we often are asked to re-grip putters because the manufacturer didn't do it correctly to begin with. This project utilizes the output of an accelerometer, powered by an Arduino UNO, to make sure the grip is installed perfectly straight.
An external power source, as seen in the cover image, is powering the Arduino UNO. It's a 5 Volt, 1 amp battery. The variable resistor, seen in the schematic, controls how much voltage is allowed to pass through the LCD screen enabling the user to adjust the brightness of the output. The accelerometer is attached to a mini breadboard and on top of two strong magnets. These magnets hold this set-up on top of the grip as they attract to steel shaft of the golf club.
Code
Accelerometer/lcd screen codeArduino
#include <LiquidCrystal.h>// include the library code /**********************************************************/ //char array1[]=" "; //the string to print on the LCD //char array2[]="MONEY MAKER MIKE "; //the string to print on the LCD int tim = 250; //the value of delay time // initialize the library with the numbers of the interface pins LiquidCrystal lcd(4, 6, 10, 11, 12, 13); /; double xa; double xb; void setup(){ Serial.begin(300); lcd.begin(16, 2); } lcd.setCursor(0,0); lcd.print("x: "); lcd.print(xa); xa = -180+abs(x); xb = abs(xa); if (xa>178 || xb>178) //setting the tolerance { lcd.setCursor(0,2); lcd.print("STOP!"); delay(600); lcd.clear(); } /* lcd.print(" | y: "); lcd.print(y); lcd.print(" | z: "); lcd.println(z); */ delay(300);//just here to slow down the serial output - Easier to read }
Schematics
Author
Hunter Mitchell
- 1 project
- 0 followers
Published onMay 2, 2016
Members who respect this project
you might like | https://create.arduino.cc/projecthub/Hunt_Mitch/golf-grip-alignment-tool-b884a4 | CC-MAIN-2019-43 | refinedweb | 336 | 61.16 |
This step-by-step tutorial shows how to run Scaffolding Wizard and build a fully functional MVVM data-bound WPF application.
First of all, you need a WPF application with a Data Access Layer, based on which Scaffolding Wizad will generate the Model, View Model, and View layers. The Data Access Layer can be generated in different ways. Each approach is described in the following documentation section: Data Access Layer. There are three tutorials there. The result of each tutorial is a WPF project with a ready Data Access Layer. You can pass one of these tutorials or download one of the ready examples (for instance, this one).
Open the example with a Data Access Layer, build it, and follow the below steps.
There are three complete sample projects in the DevExpress Code Examples database at:
Right click the solution in the Solution Explorer and choose the Add DevExpress Item | New Item option.
In the Template Gallery window, find the WPF View Scaffolding section, choose a UI type to generate and click Run Wizard. See this topic for more details about supported UI types.
When the wizard is run, it looks for all data models in the current project and project references. Select the required Model and click Next.
If you do not see the created model, be sure to rebuild the project. In the next step of the wizard, you can select tables to include in the generated classes, and also make certain tables read only. After customizing these settings, click Finish and wait until the wizard generates the necessary files and references. If you have not yet rebuilt the project, do so now.
Look at the Views generated folder. In contains subfolders for each data table. Each subfolder contains a view for showing records in a table, and a view for editing a record in a table (if the table is not read only).
One view is not included in the subfolder. It is the main view, which aggregates the others. To show this view in the application, it is necessary to add this view to the MainWindow.
<Window x:Class="DevExpressWalkthrough.MainWindow"
xmlns:views="clr-namespace:DevExpressWalkthrough.Views"
...>
<Grid>
<views:NorthwindEntitiesView/>
</Grid>
</Window>
After that, you can run the application. | https://documentation.devexpress.com/WPF/115192/Scaffolding-Wizard/UI-Generation | CC-MAIN-2018-17 | refinedweb | 373 | 56.25 |
Item 3: Enforce the Singleton Property with a Private Constructor or an enum Type
A singleton is simply a class that is instantiated exactly once [Gamma95, p. 127]. Singletons typically represent a system component that is intrinsically unique, such as the window manager or file system. Making a class a singleton can make it difficult to test its clients, as it's impossible to substitute a mock implementation for a singleton unless it implements an interface that serves as its type.
Before release 1.5, there were two ways to implement singletons. Both are based on keeping the constructor private and exporting a public static member to provide access to the sole instance. In one approach, the member is a final field:
// Singleton with public final field public class Elvis { public static final Elvis INSTANCE = new Elvis(); private Elvis() { ... } public void leaveTheBuilding() { ... } }
The private constructor is called only once, to initialize the public static final field
Elvis.INSTANCE. The lack of a public or protected constructor guarantees a "monoelvistic" universe: exactly one Elvis instance will exist once the Elvis class is initialized -- no more, no less. Nothing that a client does can change this, with one caveat: a privileged client can invoke the private constructor reflectively (Item 53) with the aid of the
AccessibleObject.setAccessible method. If you need to defend against this attack, modify the constructor to make it throw an exception if it's asked to create a second instance.
In the second approach to implementing singletons, the (with the same caveat mentioned above).
The main advantage of the public field approach is that the declarations make it clear that the class is a singleton: the public static field is final, so it will always contain the same object reference. There is no longer any performance advantage to the public field approach: modern Java virtual machine (JVM) implementations are almost certain to inline the call to the static factory method.
One advantage of the factory-method approach is that it gives you the flexibility to change your mind about whether the class should be a singleton without changing its API. The factory method returns the sole instance but could easily be modified to return, say, a unique instance for each thread that invokes it. A second advantage, concerning generic types, is discussed in Item 27. Often neither of these advantages is relevant, and the final-field approach is simpler. (Item 77). Otherwise, each time a serialized instance is deserialized, a new instance will be created, leading, in the case of our example, to spurious Elvis sightings. To prevent this, add this readResolve method to the
Elvis class:
// readResolve method to preserve singleton property private Object readResolve() { // Return the one true Elvis and let the garbage collector // take care of the Elvis impersonator. return INSTANCE; }
As of release 1.5, there is a third approach to implementing singletons. Simply make an
enum type with one element:
// Enum singleton - the preferred approach public enum Elvis { INSTANCE; public void leaveTheBuilding() { ... } }
This approach is functionally equivalent to the public field approach, except that it is more concise, provides the serialization machinery for free, and provides an ironclad guarantee against multiple instantiation, even in the face of sophisticated serialization or reflection attacks. While this approach has yet to be widely adopted, a single-element
enum type is the best way to implement a singleton.
Joshua Bloch is Chief Java Architect at Google, and previously a Distinguished Engineer at Sun Microsystems.
Related Article
Creating and Destroying Java Objects: Part 2 | http://www.drdobbs.com/jvm/creating-and-destroying-java-objects-par/208403883?pgno=3 | CC-MAIN-2016-30 | refinedweb | 586 | 50.57 |
Simon Jefford - Home tag:sjjdev.com,2009:mephisto/ Mephisto Drax 2008-05-18T22:11:17Z simon tag:sjjdev.com,2008-05-18:9 2008-05-18T20:36:00Z 2008-05-18T22:11:17Z Gil - the "proper" release <p>I just pushed out <a href="/2008/5/12/gil">gil</a> as a gem onto rubyforge (my first gem - how exciting!) The magic incantation is of course:</p> <pre><code>sudo gem install gil </code></pre> <p>There is no functional difference between the gem and the script that I initially made available on github only. As I type this it hasn't made to the gem servers yet, but it should be on its merry way now.</p> <p>UPDATE</p> <p>Of course I forgot one major functional difference. I couldn't actually think of why anyone would want to pass all the lighthouse information as options on the command line so I removed that. Putting the information in .git/config is now the way to go - see the original <a href="/2008/5/12/gil">blog post</a> for more information. The gem itself is light on documentation at the moment.</p> simon tag:sjjdev.com,2008-05-14:6 2008-05-14T11:03:00Z 2008-05-14T11:13:07Z OpenSSH vulnerability in Debian (and variants) <p>This has been widely publicised - I found it on the <a href="">Github Blog</a> but you should also see the <a href="">official Debian announcement</a>.</p> <p>This is how I updated my Hardy slices:</p> <pre><code>sudo aptitude update sudo aptitude install openssh-client openssh-server </code></pre> <p>Trying to do a <code>aptitude safe-upgrade</code> told me that the openssh-client and openssh-server packages were going to be kept back for some reason so I fell back to explicitly upgrading. Say yes to the first solution that aptitude presents. A blue screen will eventually appear warning you that the machine keys are about to be upgraded. Say "OK". 
WARNING - this means the thumbprints in your known_hosts file or equivalent will not match which means your ssh client will warn you that the host may not be safe (or it may not let you connect at all IIRC).</p> <p>Next you need to regenerate any user keys that you generated on your system. I did this as follows:</p> <pre><code>rm ~/.ssh/id_rsa* ssh key-gen </code></pre> <p>Next run <code>ssh-vulnkey</code>. This will check all the keys in the usual locations - including those stored in your <code>authorized_keys</code> file. Any keys that come up as vulnerable need to be removed. Dud keys in <code>authorized_keys</code> probably indicate that there is a system that you use to connect to that box that needs to be patched and its keys upgraded.</p> <p>Please note, this may not be the best or most efficient way of sorting out this problem, it's just the way that worked for me.</p> simon tag:sjjdev.com,2008-05-12:5 2008-05-12T20:40:00Z 2008-05-12T20:42:16Z Using greasemonkey to link Lighthouse and Github <p>Next up is a simple <a href="">Greasemonkey script</a> (railslighthouselinker.js) that makes those Lighthouse hooks that <a href="2008/5/12/gil">Gil</a> finds so useful into links when viewing a commit in Github. The script can be easily customised to setup links from any github project to any lighthouse project. Just change the "includes" metadata and the lighthouse url in the body of the script.</p> simon tag:sjjdev.com,2008-05-12:2 2008-05-12T10:59:00Z 2008-05-12T15:53:18Z Gil <p>I figured it would be a good idea to start posting some of the scripts that I find useful - especially when doing Ruby / Rails coding. These scripts will end up in (where else) a <a href="">Github repo</a>.</p> <p>The first is <a href="">gil</a>, a handy script for generating changelogs from your git repo AND your <a href="">Lighthouse</a>. It was written to answer the question "what is going to be fixed or added in the next release?"</p> <p>Let me explain. 
You can get a list of changes by issuing a git-log command, like this:</p> <pre><code>git-log v2..HEAD </code></pre> <p>However, what you get from that may not be suitable to publish directly as a changelog, or to send a customer. This is where Lighthouse hooks come in handy. Assuming you have these hooks setup (the easiest way is to host on github) you can then associate a particular commit with a Lighthouse ticket (and mark it resolved all at the same time) by putting something like this in your commit message:</p> <pre><code>[#1 state:resolved] </code></pre> <p>What gil does is to pick those handy strings out of your commit history and to use the Lighthouse <a href="">API</a> to fetch details of the tickets. So if you were to run</p> <p><code>gil v2..HEAD --account=simonjefford --project=99999 --token=<your lighthouse beacon token></code></p> <p>you would end up with a list of Lighthouse tickets that were resolved by commits since you tagged v2. Which is pretty neat. But passing in all that lighthouse information each time you run gil is a PITA right? So, put that information in your .git/config file:</p> <pre><code>[gil] account=simonjefford project=99999 token=<your lighthouse beacon token> </code></pre> <p>Now you can just run <code>gil v2..HEAD</code>. Much nicer.</p> <p>If you use Capistrano, gil becomes even more useful if you get cap to automatically tag your deployments:</p> <pre><code>namespace :deploy do task :tag_release do system "git tag #{release_name}" end end after "deploy", "deploy:tag_release" </code></pre> <p>Assuming your deploy works OK you will end up with a tag named something like 20080508224225. Then you carry on coding, resolving lots of lighthouse tickets. To answer your boss when he asks "what's coming up in the next release" you can just run</p> <p>gil 20080508224225..HEAD</p> | http://feeds.feedburner.com/SimonJefford | crawl-002 | refinedweb | 1,034 | 61.97 |
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also
#include <sys/types.h> #include <dirent.h> int closedir(DIR *dirp);
The closedir() function closes the directory stream referred to by the argument dirp. Upon return, the value of dirp may no longer point to an accessible object of the type DIR. If a file descriptor is used to implement type DIR, that file descriptor will be closed.
Upon successful completion, closedir() returns 0. Otherwise, -1 is returned and errno is set to indicate the error.
The closedir() function may fail if:
The dirp argument does not refer to an open directory stream.
The closedir() function was interrupted by a signal.
See attributes(5) for descriptions of the following attributes:
opendir(3C), attributes(5), standards(5)
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also | http://docs.oracle.com/cd/E19082-01/819-2243/closedir-3c/index.html | CC-MAIN-2016-40 | refinedweb | 134 | 58.69 |
Version 2.0.4
Version 2.0.3
Fixed parsing of gdb output that mentions "operator<<", "operator>>",
"operator<", and "operator>" within text delimited by angle brackets <>.
This fixes a crash when any such function is disassembled and other
misbehaviors.
Fixed parsing stack frames that mention "operator<<" or "operator<".
Thanks to Charles Samuels, who pointed out the problem and provided
an initial fix.
Version 2.0.2
Fixed stack display for functions in an anonymous namespace and
for functions whose names involve template parameter lists (thanks to
André Wöbbeking).
Fixed environment list which would add back the entry from the edit box
even if it was just deleted.
Fixed that the Run/Continue button was enabled while the program was
running.
Fixed parsing of NaN (Not a Number) floating point values.
Version 2.0.1
Updated Hungarian translation (thanks to Tamas Szanto).
Worked around gdb 6.3 crashes at the "info line main" command (thanks to
Stefan Taferner).
Updated XSLT debugger parser for xsldbg >= 3.4.0 (by Keith Isdale).
Version 2.0

environment variable list are selected.
Added a command line option to attach to a process (thanks to
Matthew Allen for the initial code).
Fixed the "Using host libthread_db" error message properly.
Fixed inappropriate icon sizes.
Version 1.9.4
Updated the build system to the latest auto* tools.
Worked around the problem that gdb reports "Using host libthread_db"
on Fedora Core when it processes the file command.
Version 1.9.3
Improved editing of values; it is now possible to edit variables also
in the watch window.
Version 1.9.2
The previous security fix only protects against accidents, not attacks,
as Matt Zimmerman pointed out. Did it right this time.
Basic editing of values in the local variables window is available.
More refinements are still necessary.
Version 1.9.1
Fixed security flaw regarding the program specific debugger command.
Configurable key bindings.
Version 1.9.0
Program arguments that are file names can be browsed for.
Added XSLT debugging (using xsldbg) by Keith Isdale.
The program counter can be changed via point and click.
Improved register formatting by Daniel Kristjansson.
"Orphaned breakpoints", i.e. breakpoints that gdb cannot set
immediately, can be set. This helps debug shared libraries and
dynamically loaded modules.
Version 1.2.10
Fixed the "Using host libthread_db" error message.
Fixed inappropriate icon sizes.
Version 1.2.9
The previous security fix only protects against accidents, not attacks,
as Matt Zimmerman pointed out. Did it right this time.
Version 1.2.8
Fixed security flaw regarding the program specific debugger command.
Version 1.2.7
Fixed parsing of stack frames for recent gdbs.
Support vector registers (thanks to Daniel Thor Kristjansson for
initial code).
Work around bug in some gdbs which inhibits printing of QString values.
Version 1.2.6
Opening the Find dialog no longer toggles a breakpoint.
Make mouse wheel work (again) in source, variables, and watch windows.
When a pointer to a struct is expanded the struct is also expanded.
Improved toolbar and application icons.
Version 1.2.5
Now compiles for KDE 3.
Fixed make install for builddir != srcdir.
Fixed status bar flicker. This gives a nice speed-up by a factor of 4
when the contents of an array of 50 QStrings are displayed!
Version 1.2.4
Version 1.2.3
Fixed invisible toolbar under KDE 2.x (really, this time, I promise).
Fixed crash when no line has the cursor (empty files).
Don't display a blank page when a non-existent file fails to open.
Version 1.2.2
Fixed a special, but common case where removing a breakpoint didn't
work but instead added more breakpoints on the same line (thanks to Ron Lerech).
Fixed invisible toolbar under KDE 2.1.2 (thanks to Neil Butterworth).
Fixed compilation for gcc 3.0 (thanks to Ben Burton).
Fixed make install if srcdir != builddir.
Changed encoding of German translations (and also Danish, Italian,
Norwegian, Romanian, Slovak, Swedish) to UTF-8, which fixes message
strings under KDE2 (at least for German - couldn't test the others).
Version 1.2.1
Working directory can be browsed for.
Added context menu to move the selected expression from the local
variables window to the watch window.
Fixed crash when environment variables are removed.
Fixed problems with trailing backslashes in watched expressions.
Fixed compilation on FreeBSD (openpty).
Version 1.2.0
Translations for: Hungarian, Japanese, Norwegian (Nynorsk), Serbian,
Turkish
Updated the User's Manual (English, Russian (thanks, Ilmar!), German).
Version 1.1.7beta1
Improved the program icon; made the installation more KDE2 compliant.
Enabled mouse wheel scrolling at various places.
Version 1.1.6
Added memory display.
Single-stepping by instruction.
Watchpoints. Finally! (On Linux/i386 works best with gdb 5!)
Version 1.1.5
Made Delete key work in the watch window.
Breakpoints can be enabled and disabled in the breakpoint list.
Detach from debugged program on exit (and when new program is debugged).
Added a list of recently opened executables (thanks to
Thomas Sparr <thomas.sparr@kreatel.se>).
Version 1.1.4
Fixed endless loop on shutdown.
Brought in line with KDE 1.91 (KDE 2 beta).
Version 1.1.3
Debugging of multi-threaded programs. Requires a gdb that supports
multi-threaded programs, like gdb 5.
Debugger window pops into the foreground when the program stops.
Made tab width a user-settable option.
Version 1.1.2
Display disassembled code.
Version 1.1.1
Use the KDE system fixed font for the source code window.
By default, do not log communication with gdb.
Added an integrated output window (based on code by Judin Max).
Program specific settings can be set. In particular: the debugger
command (required if you are debugging remote devices), the
terminal emulation needed for the program.
Version 1.1.0
Use docking windows thanks to Judin Max <novaprint@mtu-net.ru>.
Added a register dump window. Based on code by Judin Max.
Implemented "balloons" (tool tips) that show variable values.
./configure fix for NetBSD thanks to
Berndt Josef Wulf <wulf@ping.net.au>.
There's now a Swedish translation thanks to
Örjan Lindbergh <orjan.lindbergh@telia.com>.
Version 1.0.2
Save and restore watched expressions.
More adjustments for the KRASH release.
Show <repeat...> count in QStrings like in normal C strings instead
of repeating the characters.
Use QListView instead of KTabListBox.
Version 1.0.1
Added a hack to set a remote target. Thanks to
Johnny Chan <johnnykc@iprg.nokia.com>.
Display function arguments. Based on suggestions by Johnny Chan.
KDE 2 fixes.
Support builddir != srcdir.
Version 1.0.0
Brought up-to-date for latest KDE 2.
Version 1.0beta3
Removal of minor misfeatures.
Prepared for KDE 2 and Qt 2 (it's a configure option:
--with-kde-version=2).
Added Russian documentation (thanks to
Ilmar S. Habibulin <ilmar@ints.ru>) and German documentation.
There is now a Spanish translation thanks to
Manuel Soriano <manu@europa3.com>.
Version 1.0beta2
Recognize strings with repeated characters: 'x' <repeats 34 times>.
Fix structs with no (additional) data members and other fixes
for gdb 4.18.
Save window size across sessions.
There is now an Italian translation thanks to
Massimo Morin <mmorin@schedsys.com>.
Version 1.0beta1
Fixed non-displaying QString (Qt2) with certain gdb 4.17's (at least
mine here, on SuSE 6.1, had a problem :-)
Fixed cases where gdb commands were executed after the debuggee has exited.
Do not execute gdb commands after an interrupt.
Updated some translations. Still most are incomplete. Please help!
There is now a Polish translation thanks to
Jacek Wojdel <wojdel@kbs.twi.tudelft.nl>.
Version 0.3.1
The working directory for the program being debugged can be set
(Execution|Arguments).
There's now a global options dialog in place (File|Global Options).
At the moment the debugger program (which must be gdb, but it could be
an experimental gdb version, for example) and the terminal for program
output can be specified.
Fixed Makefiles to support make DESTDIR=/tmp/foo install (which is
needed by packagers and to create relocatable RPMs).
There's now a Danish translation thanks to
Steen Rabol <rabol@get2net.dk>.
Version 0.3.0
Starting with this version, Qt 1.42 and KDE 1.1 are required.
Ported to Qt 2.0 and KDE post-1.1! KDbg now runs with both
KDE 1.1 (using Qt 1.42) and the latest experimental KDE. You can of
course run one version and debug programs written for the other version.
KDbg can now display Qt 2.0's QString values (which are Unicode
strings)!
Environment variables can be set. Changes become effective the next time
the program being debugged is run.
The breakpoint list has been improved. It disables command buttons at
times when it is not possible to change breakpoints. The icons that
show the breakpoint status are now the same as those in the source
window.
Popup menus (context menus) for frequently used commands have been added
to the source code window (thanks to Tom Nguyen <ttomnguyen@yahoo.com>)
There's now a Russian translation thanks to
Ilmar Habibulin <ilmar@ints.ru>.
Internal restructuring. These changes are invisible. They just make
future extensions less cumbersome.
Version 0.2.5
This is the last version that supports Qt 1.33 and KDE 1.0.
There's now a Czech translation thanks to
Martin Spirk <spirk@kla.pvt.cz>.
Recognize and report when gdb dies unexpectedly. This happens commonly
when writing CORBA programs since gdb obviously has problems in
debugging C++ classes with virtual base classes.
Added conditional breakpoints and ignore counts.
Version 0.2.4
Added a toolbar button to load the executable. The button to open a
source file is still there. I hope it's clear which one does what.
Attaching to a running process is now possible (Execution|Attach).
Made more visible when gdb is busy using a gear wheel in the upper right
corner of the window like kfm.
Made the KTreeView widget more flexible by adding a bunch of virtual
keywords. (No, this doesn't have any influence on the look and feel of
KDbg.) While doing that, I fixed a small repainting bug.
ChangeLog starts here. | https://sources.debian.org/src/kdbg/2.0.4-3/ChangeLog/ | CC-MAIN-2021-25 | refinedweb | 1,704 | 61.73 |